Simulation of conversational groups of virtual agents expressing interpersonal attitudes

The system presented in the following video simulates conversational groups of virtual agents. Each agent is endowed with a computational model that allows it to modulate its nonverbal behaviors (body orientation, distance from other agents, gaze, facial expressions and gestures) and to adapt its turn-taking strategies depending on the attitudes […]
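The mapping from attitude to behavior can be sketched as follows. This is a hypothetical illustration using the classic dominance/friendliness circumplex; the parameter names (`interpersonal_distance_m`, `gaze_at_addressee`, `body_orientation_deg`) and all scaling constants are our own assumptions, not values from the actual system.

```python
from dataclasses import dataclass

@dataclass
class NonverbalParams:
    interpersonal_distance_m: float  # distance kept from the addressee
    gaze_at_addressee: float         # fraction of time gazing at the addressee
    body_orientation_deg: float      # 0 = directly facing the addressee

def attitude_to_behavior(dominance: float, friendliness: float) -> NonverbalParams:
    """Map an attitude, with both axes in [-1, 1], to behavior parameters.

    Illustrative rules only: friendlier agents stand closer and gaze more;
    more dominant agents face their addressee more directly.
    """
    distance = 1.2 - 0.4 * friendliness  # 0.8 m (friendly) .. 1.6 m (hostile)
    gaze = min(1.0, max(0.0, 0.5 + 0.25 * friendliness + 0.15 * dominance))
    orientation = 30.0 * (1.0 - max(dominance, 0.0))
    return NonverbalParams(distance, gaze, orientation)
```

For example, a friendly, mildly dominant attitude (`attitude_to_behavior(0.5, 0.8)`) yields a closer stance and more sustained gaze than a hostile one.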

A new acted emotional body behavior database

We collected a new corpus for the analysis and synthesis of emotional body movements. The movements of 11 actors (6 female, 5 male) were captured while they expressed 8 emotional states (Joy, Anger, Panic Fear, Anxiety, Sadness, Shame, Pride and Neutral), each described by 3 scenarios and performed through 7 different movement tasks. Each […]

Expressive rendering and animation

Facial expression is a rather complex system resulting from muscle contraction. Wrinkles often appear on the face as a result of some facial expressions, and they are definitely part of the expression. FACS describes, for each AU, not only the changes due to muscle contraction but also wrinkle appearance. As facial […]
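The AU-to-wrinkle link can be sketched as a small lookup that drives wrinkle-map intensities from AU activations. The AU/wrinkle associations below follow FACS (e.g. AU4, the brow lowerer, produces vertical lines between the brows); the wrinkle-map names and the max-blending rule are our own simplifying assumptions, not part of FACS.

```python
# FACS-style AU -> wrinkle-map associations; map names are hypothetical
# texture identifiers used only for this sketch.
AU_WRINKLES = {
    1: ["forehead_horizontal"],   # AU1, inner brow raiser
    2: ["forehead_horizontal"],   # AU2, outer brow raiser
    4: ["glabella_vertical"],     # AU4, brow lowerer
    6: ["crows_feet"],            # AU6, cheek raiser
    9: ["nose_root"],             # AU9, nose wrinkler
}

def wrinkle_intensities(au_activations: dict) -> dict:
    """Per wrinkle map, keep the strongest activation among its AUs (in [0, 1])."""
    out = {}
    for au, level in au_activations.items():
        for wmap in AU_WRINKLES.get(au, []):
            out[wmap] = max(out.get(wmap, 0.0), level)
    return out
```

A renderer could then modulate each wrinkle texture's opacity by the returned intensity as the AUs animate.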

Modeling Multimodal Behaviors From Speech Prosody

To synthesize the head and eyebrow movements of virtual characters, we have developed a fully parameterized Hidden Markov Model (FPHMM), an extension of a Contextual HMM (CHMM). We have first learned a FPHMM that takes speech features as contextual variables and produces motion feature observations. During the training phase, motion and speech streams are […]
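The core idea of such parameterized observation models can be sketched as follows: each hidden state's motion distribution depends on the speech context, e.g. a Gaussian whose mean is an affine function of the prosodic features, μ_k(c) = W_k c + b_k. This simplified sketch fits W_k, b_k per state by least squares on synthetic data, assuming the state labels are known (the real model estimates them with EM); none of this is the actual FPHMM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_speech, d_motion, K = 400, 3, 2, 2
states = rng.integers(0, K, size=T)       # stand-in for decoded hidden states
speech = rng.normal(size=(T, d_speech))   # prosodic context features
W_true = rng.normal(size=(K, d_motion, d_speech))
# Noiseless toy motion data generated state-by-state from the speech context.
motion = np.einsum('tij,tj->ti', W_true[states], speech)

# Per-state affine regression: motion ~ [speech, 1] @ A_k (i.e. W_k and b_k).
A = []
for k in range(K):
    mask = states == k
    X = np.hstack([speech[mask], np.ones((mask.sum(), 1))])
    A.append(np.linalg.lstsq(X, motion[mask], rcond=None)[0])

def synthesize(state, c):
    """Predicted motion features for one frame of speech context c."""
    return np.hstack([c, 1.0]) @ A[state]
```

In the full model the state sequence itself is also inferred from speech, so synthesis needs only the prosodic stream.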

Demo of Nao with Expressive Gesture

Within the French ANR project GV-LeX, we have developed an expressive communicative gesture model for the humanoid robot Nao. This project aims at equipping a humanoid robot with the capacity to produce gestures while telling a story to children. To reach this objective, we have extended and developed our existing virtual agent […]

Demo E-SmilesCreator and GenAttitude

One of the key challenges in the development of social virtual actors is to give them the capability to display socio-emotional states through their non-verbal behavior. Based on studies in the human and social sciences or on annotated corpora of human expressions, different models have been developed to synthesize a virtual agent’s non-verbal behavior. One of the […]

Snowball effect

Agents start the interaction desynchronised and, after a while, stabilise in synchrony.
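This gradual locking-in can be illustrated with the Kuramoto model of coupled oscillators — our choice of illustration, not necessarily the model used in the demo. Each agent has a phase that is pulled toward the others, so phases that start scattered converge into synchrony.

```python
import numpy as np

def order_parameter(theta):
    """r in [0, 1]: 0 = fully desynchronised, 1 = perfect synchrony."""
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(n=6, coupling=1.5, dt=0.05, steps=2000, seed=1):
    """Euler-integrate the Kuramoto dynamics and return (initial r, final r)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)   # random, desynchronised start
    omega = rng.normal(1.0, 0.05, n)       # similar natural frequencies
    r0 = order_parameter(theta)
    for _ in range(steps):
        # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        diffs = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + (coupling / n) * np.sin(diffs).sum(axis=1))
    return r0, order_parameter(theta)
```

With coupling above the critical value, the order parameter climbs from its low initial value toward 1 — the "snowball" effect named above.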

Demonstrations of SSPNet project

SSPNet (Social Signal Processing Network) is a European Network of Excellence (NoE) that addresses Social Signal Processing.

SSPNet activities revolve around two research foci selected for their primacy in our everyday life:

* Social Signal Processing in Human-Human Interaction
* Social Signal Processing in Human-Computer Interaction

Hence, the main focus of the SSPNet is on […]

Demo of GretAR

The GretAR project aims at developing a framework to study the interaction between humans and virtual agents in augmented reality. This research, carried out in collaboration with HitLab NZ, focuses on spatial nonverbal behaviors, the sense of presence, proxemics and other social behaviors in that environment.


Demo of the Semaine project

The Semaine project is a European STREP project that aims to build a SAL, a Sensitive Artificial Listener: a multimodal dialogue system which can:

* interact with humans with a virtual character
* sustain an interaction with a user for some time
* react appropriately to the user’s non-verbal behaviour

The SAL system is released to a large extent […]