EMOTION RECOGNITION

Teachers:

Denis Lalanne, Fabien Ringeval

Description

The goal of this seminar is to give an overview of the domain of "Emotion Recognition" and to explore several of its sub-domains in depth. With the advances in multimodal analysis and pattern recognition, human emotions (acted, spontaneous or natural) can be automatically recognized by computers using speech analysis, video analysis or physiological sensors. This seminar will review current recognition techniques and the applications in which emotion recognition is used.

Another objective of this seminar is to develop skills in reading, writing and reviewing academic papers. In this context, students will be asked to write up and present a state of the art of a sub-domain of the "Emotion Recognition" research field.

Each student will be asked to choose a theme within the domain of "Emotion Recognition" (as listed below), read the corresponding research articles, look for state-of-the-art references relevant to the chosen theme, synthesize these references, present them orally during one of the final seminar sessions, and summarize them in a written report of 4 pages, authored in LaTeX following the ACM Strict format.

Learning outcomes

At the end of the seminar, students will know how to conduct bibliographic research, how to judge the quality of a scientific paper, and how to write a scientific article. Further, they will know what "Emotion Recognition" is and its current techniques and trends, and will have deepened their knowledge of a particular subtopic.

Participants

If you wish to participate in this seminar, please contact the organizers. A maximum of 6 students will be accepted.

Semester: Autumn 2011

Language: French, English

Time: 5 sessions, Friday mornings. The first session will be held on Friday, September 23, from 11 a.m. to 12:30 p.m. in meeting room B 420 (Perolles II).

Slides of the talk: "Non-verbal communication: from cognition to affective computing".

Themes & References

The following sections present the proposed themes. A list of key questions is given for each of them. Students will be asked to answer those questions using examples from the literature. As a starting point, two references are given for each theme (some of them are only available from the University). However, further bibliographic research will be necessary to cover the key questions appropriately.

Theme 1: Designing a corpus of affective communication

  • Goal?
  • Types of affective states?
  • Dimensionality of the emotions?
  • Methods to elicit an emotion?
  • Modalities to record? / materials to use?
  • Categories of participants / data to balance?
  • Types of information to transcribe?
  • Validating the relevance of the data?

Student paper: Mr. Orphée Religieux

References:

C. Busso and S. Narayanan, "Recording audio-visual emotional databases from actors: a closer look", in proc. of Language Resources and Evaluation (LREC), Marrakech, Morocco, pp. 17-22, May 2008.

E. Douglas-Cowie and WP5 members, "D5c: Preliminary plans for exemplars: Databases", HUMAINE project, emotion-research.net, May 2004.

Theme 2: Extracting emotionally relevant features from the speech signal

  • Goal?
  • Types of existing dimensions / cues?
  • Sources of variability?
  • Timings for encoding / decoding emotions?
  • Descriptors (static / dynamic) for the linguistic information?
  • Descriptors (static / dynamic) for the acoustic and prosodic levels? (see the sketch after this list)
  • Performances (independent / combined cues) in emotion recognition?
  • Challenges?
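To make the idea of low-level descriptors (LLDs) and functionals concrete, here is a minimal Python sketch in the spirit of the LLDs-plus-functionals paradigm discussed by Schuller et al. The signal is synthetic and the descriptor choices are illustrative, not taken from the cited papers:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def f0_autocorr(frame, sr, fmin=50.0, fmax=400.0):
    """Very rough F0 estimate from the autocorrelation peak."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

# Synthetic "speech": a 200 Hz tone plus noise, 1 s at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t) + 0.05 * np.random.randn(sr)

frames = frame_signal(x, frame_len=400, hop=160)  # 25 ms windows, 10 ms hop

# Low-level descriptors: one value per frame.
energy = np.log((frames ** 2).mean(axis=1) + 1e-10)
zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
f0 = np.array([f0_autocorr(f, sr) for f in frames])

# Functionals map variable-length LLD contours to a fixed-length vector.
def functionals(c):
    return [c.mean(), c.std(), c.max() - c.min()]

features = np.concatenate([functionals(c) for c in (energy, zcr, f0)])
print(features.shape)  # (9,): a fixed-length input for any classifier
```

The point of the functionals step is that utterances of any duration map to the same feature dimensionality, so a standard classifier can be trained directly on the resulting vectors.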

Student paper: Mr. Hervé Sierro

References:

D. Ververidis and C. Kotropoulos, "Emotional speech recognition: Resources, features and methods", in Speech Communication, vol. 48(9), pp. 1162-1181, Sep. 2006.

B. Schuller, A. Batliner, D. Seppi, S. Steidl, T. Vogt, J. Wagner, L. Devillers, L. Vidrascu, N. Amir, L. Kessous and V. Aharonson, "The relevance of feature type for the automatic classification of emotional user states: Low level descriptors and functionals", in proc. of Interspeech, Antwerp, Belgium, pp. 2253-2256, 2007.

Theme 3: Extracting emotionally relevant features from visual signals

  • Goal?
  • Types of existing dimensions / cues?
  • Sources of variability?
  • Facial / Hand / Body (posture) action coding systems?
  • Detecting regions of interest from the visual data? (see the sketch after this list)
  • Descriptors (static / dynamic) for these signals?
  • Performances (independent / combined cues) in emotion recognition?
  • Challenges?
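As an illustration of region-of-interest detection, here is a minimal sketch using OpenCV's stock Haar-cascade face detector; the input file name frame.png is hypothetical, and the cited papers use their own detection and tracking pipelines:

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.png")  # hypothetical video frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detected bounding box is a region of interest from which
# appearance or geometric descriptors can then be extracted.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    print("face ROI:", roi.shape)
```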

Student paper: Mr. Stefan Egli

References:

H. Gunes and M. Piccardi, "Bi-modal emotion recognition from expressive face and body gestures", in Journal of Network and Computer Applications, vol. 30(4), pp. 1334-1345, Nov. 2007.

D. Glowinski, A. Camurri, G. Volpe, N. Dael and K. Scherer, "Technique for automatic emotion recognition by body gesture analysis", in proc. of the Computer Vision and Pattern Recognition Workshops, pp. 1-6, 2008.

Theme 4: Extracting emotionally relevant features from physiological signals

  • Goal?
  • Types of existing signals / cues?
  • Sources of variability?
  • Devices / methods to capture the signals?
  • Descriptors (static / dynamic) for these signals? (see the sketch after this list)
  • Performances (independent / combined cues) in emotion recognition?
  • Challenges?
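As a concrete example, heart-rate variability descriptors can be computed from a series of inter-beat intervals. The sketch below uses synthetic intervals and standard HRV statistics (SDNN, RMSSD), which are only one of many possible descriptor sets:

```python
import numpy as np

# Synthetic inter-beat intervals (seconds): ~70 bpm with mild variability.
# Real intervals would come from R-peak detection on an ECG signal.
rng = np.random.default_rng(1)
ibi = 0.857 + 0.05 * rng.standard_normal(120)

# Static descriptors: standard heart-rate variability statistics.
mean_hr = 60.0 / ibi.mean()                            # mean heart rate (bpm)
sdnn = ibi.std(ddof=1) * 1000.0                        # SDNN (ms)
rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2)) * 1000.0   # RMSSD (ms)

# A simple dynamic descriptor: the linear trend of the interval series,
# capturing heart-rate acceleration or deceleration over the recording.
slope = np.polyfit(np.arange(len(ibi)), ibi, 1)[0]

print(mean_hr, sdnn, rmssd, slope)
```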

Student paper: Ms. Muthusamy Revathi Priya

References:

F. Nasoz, C. L. Lisetti, K. Alvarez and N. Finkelstein, "Emotion recognition from physiological signals for presence technologies", in International Journal of Cognition, Technology, and Work - Special Issue on Presence, vol. 6, 2003.

C. Zong and M. Chetouani, "Hilbert-Huang transform based physiological signals analysis for emotion recognition", in proc. of IEEE International Symposium on Signal Processing and Information Technology, pp. 334-339, 2009.

Theme 5: Using multimodal features to assess emotion

  • Advantages of multimodal vs. unimodal approaches?
  • Pertinent modalities to take into account?
  • Problems of synchronising the streams?
  • Characterising the signals?
  • Fusing the information? (see the sketch after this list)
  • Performance of multimodal vs. unimodal?
  • Challenges?
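The two classical fusion strategies can be illustrated with a toy example: feature-level (early) fusion concatenates the modality features before classification, while decision-level (late) fusion trains one classifier per modality and combines their posteriors. Everything below (data, feature dimensions, the averaging rule) is synthetic and illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 samples, two modalities, a binary affect label
# (e.g., high vs. low arousal). All values are synthetic.
y = rng.integers(0, 2, 200)
audio = rng.normal(y[:, None], 1.0, (200, 12))  # 12 "audio" features
video = rng.normal(y[:, None], 1.5, (200, 8))   # 8 "video" features

# Feature-level (early) fusion: concatenate, then train one classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([audio, video]), y)

# Decision-level (late) fusion: one classifier per modality, then
# combine the posterior probabilities (here: a plain average).
clf_a = LogisticRegression(max_iter=1000).fit(audio, y)
clf_v = LogisticRegression(max_iter=1000).fit(video, y)
late = (clf_a.predict_proba(audio) + clf_v.predict_proba(video)) / 2

print(early.predict(np.hstack([audio, video]))[:5])  # early-fusion labels
print(late.argmax(axis=1)[:5])                       # late-fusion labels
```

Early fusion can exploit cross-modal correlations but requires synchronized streams of compatible dimensionality; late fusion tolerates asynchrony and missing modalities at the cost of ignoring those correlations.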

Student paper: Mr. Simon Brunner

References:

M. A. Nicolaou, H. Gunes and M. Pantic, "Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space", in IEEE Transactions on Affective Computing, vol. 2(2), April-June 2011.

S. Kollias and K. Karpouzis, "Multimodal emotion recognition and expressivity analysis", in proc. of ICME 2005, Amsterdam, Netherlands, pp. 779-783, Oct. 2005.

Theme 6: Emotion-sensitive Human-Computer Interaction (HCI)

  • Why, what and how?

    • What application areas make ideal research settings for exploring affective technologies in the real world?
    • What value might affective applications, affective systems, and affective interaction have?
    • How do emotions relate to hot topics in HCI such as engagement, motivation, and well-being?
  • User Centered Design:

    • How can we measure and evaluate the success of affective interactions?
    • Are there design guidelines to help us develop affective HCI?
    • What makes applications that support affective interactions successful, and how can we measure this success?
  • Examples:

    • Illustrate the state of the art of affective HCI with selected projects using emotion recognition and/or synthesis to build interactive systems.
  • Ethics:

    • What are the opportunities, risks and ethics entailed in developing affective systems?

Student paper: Ms. Caroline Voeffray

References:

I. Cristescu, "Emotions in human-computer interaction: the role of nonverbal behavior in interactive systems", in Revista Informatica Economică, vol. 2(46), 2008.

M. Pantic and L. J. M. Rothkrantz, "Toward an affect-sensitive multimodal Human-Computer Interaction", in Proceedings of the IEEE, vol. 91(9), pp. 1370-1390, Sep. 2003.

Theme 7: Synthesising emotions (from machines to humans)

  • Goal?
  • Relevant modalities to synthesise emotions?
  • Identifying the cues?
  • Rule-based systems? (see the sketch after this list)
  • Data-driven systems (e.g., HMM)?
  • Perception of synthesised emotional states?
  • Challenges?
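A rule-based system can be sketched as a table of prosodic offsets applied to neutral synthesis parameters. All emotions, parameters and values below are invented placeholders; only the mechanism is representative:

```python
# Invented placeholder rules: each emotion scales three neutral
# prosody parameters. Real systems use far richer rule sets.
RULES = {
    "happy": {"pitch": 1.30, "rate": 1.15, "energy": 1.20},
    "sad":   {"pitch": 0.80, "rate": 0.85, "energy": 0.80},
    "angry": {"pitch": 1.10, "rate": 1.10, "energy": 1.40},
}

def synthesis_params(neutral, emotion):
    """Apply rule-based scaling to neutral prosody parameters."""
    r = RULES[emotion]
    return {
        "f0_mean": neutral["f0_mean"] * r["pitch"],
        "speech_rate": neutral["speech_rate"] * r["rate"],
        "energy": neutral["energy"] * r["energy"],
    }

neutral = {"f0_mean": 120.0, "speech_rate": 1.0, "energy": 1.0}
print(synthesis_params(neutral, "happy"))
```

Data-driven approaches replace such hand-written tables with models (e.g., HMMs) trained on emotional speech corpora, trading interpretability for coverage.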

Student paper: Mr. Adel Rizaev

References:

R. Niewiadomski, S. J. Hyniewska and C. Pelachaud, "Constraint-based model for synthesis of multimodal sequential expressions of emotions", in IEEE Trans. on Affective Computing, vol. 2(3), pp. 134-146, Sep. 2011.

M. Bulut, S. Lee and S. Narayanan, "Recognition for synthesis: Automatic parameter selection for resynthesis of emotional speech from neutral speech", in proc. of ICASSP, 2008, pp. 4629-4632.

Theme 8: Emotions in robotics

  • Goal?
  • Types of applications?
  • Constraints due to robotics?
  • Recognition and synthesis?
  • Status of the state of the art?
  • Ethical issues?
  • Challenges?

Student paper: Mr. Brice Courtet

References:

R. Gockley, R. Simmons and J. Forlizzi, "Modeling affect in socially interactive robots", in proc. of the 15th IEEE Int. Symp. on Robot and Human Interactive Communication, Sep. 2006, pp. 558-563.

C. L. Bethel and R. R. Murphy, "Survey of non-facial/non-verbal affective expressions for appearance-constrained robots", in IEEE Trans. on Systems, Man and Cybernetics - Part C: Applications and reviews, vol. 38(1), Jan. 2008, pp. 83-92.
