AffectEval

A Python-based framework to jumpstart your affective computing applications!

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Contributing
  5. Contact
  6. Citing
  7. References

About The Project

AffectEval is a modular and customizable affective computing framework that facilitates the development of end-to-end affective computing pipelines, from raw signals through model training and evaluation. It is Python-based and follows an object-oriented design, with a separate class for each component of the affective computing pipeline (see Fig. 1).

AffectEval is designed with mental healthcare-focused affective computing applications in mind and currently supports primarily time-series data. We plan to expand its scope to other signal types, such as text, video, and audio (see the documentation for the signals we currently support and the methods that are pre-implemented).

This repository is also a platform for community contributions. Please open pull requests to add methods and models you find useful so that others can use them as well! See Contributing for more details.

[Fig. 1: The affective computing pipeline, originally described in [1].]

(back to top)

Getting Started

Prerequisites

Installation

Install AffectEval by cloning the repository:

git clone https://github.com/ANRGUSC/AffectEval.git

(back to top)

Usage

AffectEval was designed to reduce the amount of manual work required to implement affective computing pipelines. It is composed of Python classes that represent individual pipeline blocks, each of which can be modified or swapped out independently as needed.
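As a rough illustration of this block-based design, a pipeline might be assembled as in the sketch below. All class and method names here (Block, Pipeline, the toy preprocessing and feature blocks) are illustrative assumptions, not the actual AffectEval API; see the documentation and the examples directory for the real interfaces.

# Minimal sketch of a block-based pipeline. Names are illustrative
# assumptions, NOT the actual AffectEval API.

class Block:
    """One pipeline component; its output becomes the next block's input."""
    def run(self, data):
        raise NotImplementedError

class MeanSubtract(Block):
    """Toy preprocessing block: zero-center each signal."""
    def run(self, data):
        return {name: [x - sum(sig) / len(sig) for x in sig]
                for name, sig in data.items()}

class MeanAbsFeature(Block):
    """Toy feature block: mean absolute amplitude per signal."""
    def run(self, data):
        return {name: sum(abs(x) for x in sig) / len(sig)
                for name, sig in data.items()}

class Pipeline(Block):
    """Chains blocks; any block can be swapped out independently."""
    def __init__(self, blocks):
        self.blocks = blocks
    def run(self, data):
        for block in self.blocks:
            data = block.run(data)
        return data

pipe = Pipeline([MeanSubtract(), MeanAbsFeature()])
print(pipe.run({"ECG": [0.1, 0.3, 0.2], "EDA": [1.0, 1.2, 1.1]}))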

Formatting your data to be compatible with AffectEval

To take the workload off the user and enable automatic message passing between components, we impose requirements on the format of input data:

[Figure: the standard data file format.]

Specifically, files should be in the Comma-Separated Values format (.csv), named with the subject ID, the name of the experimental phase (e.g., Baseline, Stress, Recovery), and the signal type (e.g., ECG, EDA, EMG). Refer to the documentation for the exact abbreviations for each signal type.
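As an illustration, a small helper along these lines could generate and parse such filenames. The separator and field order shown here are assumptions made for this sketch, not AffectEval's documented pattern; consult the documentation for the required format.

from pathlib import Path

# Assumed pattern for illustration: <subject>_<phase>_<signal>.csv
# (e.g., S2_Baseline_ECG.csv). Check the documentation for the exact format.
def make_filename(subject_id, phase, signal):
    return f"{subject_id}_{phase}_{signal}.csv"

def parse_filename(path):
    subject_id, phase, signal = Path(path).stem.split("_")
    return {"subject": subject_id, "phase": phase, "signal": signal}

print(make_filename("S2", "Baseline", "ECG"))      # S2_Baseline_ECG.csv
print(parse_filename("data/S2_Baseline_ECG.csv"))  # {'subject': 'S2', ...}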

Examples

Example scripts and Jupyter Notebook files can be found in the examples directory. Specifically, we created end-to-end pipelines using AffectEval for the following:

  • Binary stress detection using the Wearable Stress and Affect Detection (WESAD) dataset [2] (link)
  • Binary stress detection using the Anxiety Phases Dataset (APD) [3] (link)
  • Replication of binary stress detection experiments in Zhou et al. [4] (link)
  • Replication of binary and three-class stress classification [2] (link)

You will need to download the APD and/or WESAD datasets to run the corresponding scripts. WESAD is publicly available, while APD is available upon request.

(back to top)

Contributing

We warmly welcome community contributions to AffectEval! If you would like to contribute custom subclasses and methods for your specific affective computing application, check out a new branch:

git checkout -b <your-new-branch-name>
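A contributed component would typically take the form of a new subclass that overrides a block's processing method. The sketch below is hypothetical, with assumed base-class and method names; base your subclass on the actual classes in the repository. Once your additions are ready, open a pull request so that others can use them as well.

# Hypothetical custom block contributed on a feature branch. The base
# class and method names are illustrative, not the actual AffectEval API.
class Block:
    def run(self, data):
        raise NotImplementedError

class RMSFeature(Block):
    """Root-mean-square amplitude per signal."""
    def run(self, data):
        return {name: (sum(x * x for x in sig) / len(sig)) ** 0.5
                for name, sig in data.items()}

print(RMSFeature().run({"EMG": [0.2, -0.4, 0.3]}))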

(back to top)

Contact

Emily Zhou - [email protected]

(back to top)

Citing

Please use the following to cite AffectEval:

E. Zhou, K. Khatri, Y. Zhao, and B. Krishnamachari, "AffectEval: A Modular and Customizable Framework for Affective Computing," 2025, arXiv:2504.21184.

@article{affecteval,
    title = {AffectEval: A Modular and Customizable Framework for Affective Computing},
    author = {Emily Zhou and Khushboo Khatri and Yixue Zhao and Bhaskar Krishnamachari},
    year = {2025},
    eprint = {2504.21184},
    archivePrefix = {arXiv},
    primaryClass = {cs.AI},
    url = {https://arxiv.org/abs/2504.21184}
}

(back to top)

References

[1] J. Oliveira, S. M. Alarcão, T. Chambel, and M. J. Fonseca, "MetaFERA: A meta-framework for creating emotion recognition frameworks for physiological signals", Multimedia Tools and Applications 83, 4 (Jun 2023), 9785–9815, doi: 10.1007/s11042-023-15249-5.

[2] P. Schmidt, A. Reiss, R. Duerichen, C. Marberger, and K. Van Laerhoven, "Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection", In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI '18). Association for Computing Machinery, New York, NY, USA, 400–408, doi: 10.1145/3242969.3242985.

[3] H. Senaratne, L. Kuhlmann, K. Ellis, G. Melvin, and S. Oviatt, "A Multimodal Dataset and Evaluation for Feature Estimators of Temporal Phases of Anxiety", In Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI '21), Association for Computing Machinery, New York, NY, USA, doi: 10.1145/3462244.3479900.

[4] E. Zhou, M. Soleymani, and M. J. Matarić, "Investigating the Generalizability of Physiological Characteristics of Anxiety", 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, Türkiye, 2023, pp. 4848–4855, doi: 10.1109/BIBM58861.2023.10385292.

(back to top)
