Simulation of conversational groups of virtual agents expressing interpersonal attitudes

The system presented in the following video is aimed at simulating conversational groups of virtual agents. Each agent is endowed with a computational model that allows it to modulate its nonverbal behaviors (body orientation, distance to other agents, gaze, facial expressions, and gestures) and to adapt its turn-taking strategies depending on the attitudes […]
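
The blurb above does not detail the model, but the core idea of mapping attitudes to behavior parameters can be sketched as follows. The two attitude dimensions (dominance and liking, as in interpersonal circumplex models), the parameter names, and the numeric mappings below are all illustrative assumptions, not the system's actual model.

```python
# Hypothetical sketch: mapping an agent's attitude toward an interlocutor
# (dominance and liking, each in [-1, 1]) to coarse nonverbal parameters.
# All names and coefficients are illustrative.

def attitude_to_behavior(dominance: float, liking: float) -> dict:
    """Map attitude dimensions to nonverbal behavior parameters."""
    return {
        # Friendlier agents stand closer (interpersonal distance, meters).
        "distance_m": round(1.2 - 0.4 * liking, 2),
        # Dominant agents hold their gaze longer (proportion of time).
        "gaze_ratio": round(0.5 + 0.3 * dominance, 2),
        # Smiling intensity (0..1) scales with liking.
        "smile": round(max(0.0, liking), 2),
        # Dominant agents grab the turn more readily.
        "interrupt_prob": round(max(0.0, 0.2 + 0.3 * dominance), 2),
    }

print(attitude_to_behavior(dominance=0.5, liking=0.8))
```

A real model would drive these parameters continuously during the interaction rather than from a single static attitude value.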

A new acted emotional body behavior database

We collected a new corpus for the analysis and synthesis of emotional body movements. The movements of 11 actors (6 female, 5 male) were captured while they expressed 8 emotional states (Joy, Anger, Panic Fear, Anxiety, Sadness, Shame, Pride, and Neutral), described in 3 scenarios and performed through 7 different movement tasks. Each […]

Expressive rendering and animation

Facial expression is a rather complex system resulting from muscle contractions. Wrinkles often appear on the face as a result of certain facial expressions, and they are very much part of the expression. FACS describes, for each Action Unit (AU), not only the changes due to muscle contraction but also the appearance of wrinkles. As facial […]
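
One common way to render AU-dependent wrinkles, sketched below, is to keep a wrinkle map per AU and blend the maps weighted by AU intensity. The map contents, shapes, and AU assignment here are toy assumptions for illustration, not the rendering system described above.

```python
import numpy as np

# Hypothetical sketch: blending per-AU wrinkle maps into one wrinkle layer.
# Each AU has a grayscale wrinkle map; its contribution scales with the AU
# intensity (0..1). Maps and shapes are illustrative.

def blend_wrinkles(wrinkle_maps: dict, au_intensities: dict, shape=(4, 4)):
    """Accumulate wrinkle maps weighted by AU intensity, clamped to [0, 1]."""
    out = np.zeros(shape)
    for au, intensity in au_intensities.items():
        if au in wrinkle_maps:
            out += intensity * wrinkle_maps[au]
    return np.clip(out, 0.0, 1.0)

# Toy map: AU4 (brow lowerer) wrinkles the forehead region (top row only).
maps = {"AU4": np.pad(np.ones((1, 4)), ((0, 3), (0, 0)))}
layer = blend_wrinkles(maps, {"AU4": 0.5})
print(layer[0])  # forehead row at half intensity
```

In a renderer, the blended layer would typically modulate a normal or bump map rather than be displayed directly.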

Modeling Multimodal Behaviors From Speech Prosody

To synthesize the head and eyebrow movements of virtual characters, we have developed a Fully Parameterized Hidden Markov Model (FPHMM), an extension of the Contextual HMM (CHMM). We first learn an FPHMM that takes speech features as contextual variables and produces motion feature observations. During the training phase, the motion and speech streams are […]
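
The defining idea of such contextual/parameterized HMMs is that emission parameters are not fixed per state but are functions of a context vector (here, speech prosody features). The minimal sketch below assumes a linear dependence of a Gaussian emission mean on the context; the dimensions, features, and linear form are illustrative assumptions, not the FPHMM's actual parameterization.

```python
import numpy as np

# Hypothetical sketch: a state's emission mean depends on a speech-context
# vector instead of being a fixed parameter. Dimensions are illustrative.

def contextual_emission_mean(W, b, context):
    """Emission mean as a linear function of the speech context."""
    return W @ context + b

def log_gaussian(x, mean, var):
    """Log-density of an isotropic Gaussian with variance `var`."""
    d = x.shape[0]
    return -0.5 * (d * np.log(2 * np.pi * var) + np.sum((x - mean) ** 2) / var)

# One state: 2-D motion features (e.g. head pitch/yaw), 3-D prosody context.
W = np.array([[0.5, 0.0, 0.1],
              [0.0, 0.3, 0.0]])
b = np.array([0.1, -0.2])
context = np.array([1.0, 0.5, 0.0])  # e.g. pitch, energy, speaking rate
mean = contextual_emission_mean(W, b, context)
print(mean)  # context-dependent emission mean for this state
```

During training, the matrices `W` and offsets `b` would be estimated jointly with the transition parameters from the paired motion and speech streams.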

Demo of Nao with Expressive Gesture

Within the French ANR project GV-LeX, we have developed an expressive communicative gesture model for the humanoid robot Nao. The goal of the project is to equip a humanoid robot with the capacity to produce gestures while telling a story to children. To reach this objective, we have extended and developed our existing virtual agent […]

Demo E-SmilesCreator and GenAttitude

One of the key challenges in the development of social virtual actors is to give them the capability to display socio-emotional states through their nonverbal behavior. Based on studies in the human and social sciences or on annotated corpora of human expressions, different models for synthesizing a virtual agent's nonverbal behavior have been developed. One of the […]

Snowball effect

Demo of GretAR

In the GretAR project, in collaboration with HitLab NZ, we study interactions between humans and virtual agents in augmented reality. This research focuses on spatial behaviors, the sense of presence, proxemics, and other social behaviors.

Obaid, M., Niewiadomski, R., Pelachaud, C., Perception of Spatial Relations and […]

Demo of the SSPNet project

SSPNet (Social Signal Processing Network) is a European Network of Excellence devoted to the study and processing of social signals.

The SSPNet network's activities revolve around two research foci, selected for their importance in our everyday lives today:

* Social Signal Processing in Human-Human interactions
* Processing […]

Demo of the GVLEX project

The goal of the ANR project GVLEX is to give the NAO robot and the virtual agent Greta the ability to read a story aloud with an expressive voice and expressive gestures, for several minutes, without tiring the listener.

In this project, the proposed system takes as input a text to be spoken by the agent. […]
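
A pipeline of this kind (input text annotated for expressivity, gestures synchronized with the speech) can be sketched minimally as below. The tag syntax, the gesture lexicon, and the constant-rate timing model are all hypothetical placeholders, not the actual GVLEX input format or scheduler.

```python
import re

# Hypothetical sketch: a marked-up text is split into speech and a gesture
# schedule. Tag names, lexicon, and timing are illustrative assumptions.
GESTURE_LEXICON = {"beat": "beat_gesture", "deictic": "point_gesture"}

def schedule_gestures(marked_text: str, words_per_second: float = 2.0):
    """Return (plain_text, schedule) from <tag>span</tag> markup.

    Each scheduled entry is (gesture_name, start_s, end_s), assuming a
    constant speaking rate to convert word positions into times.
    """
    schedule, plain_words = [], []
    for m in re.finditer(r"<(\w+)>(.*?)</\1>|(\S+)", marked_text):
        if m.group(3):  # unmarked word: speech only
            plain_words.append(m.group(3))
        else:           # tagged span: align a gesture with its words
            tag, span = m.group(1), m.group(2)
            start = len(plain_words) / words_per_second
            plain_words.extend(span.split())
            end = len(plain_words) / words_per_second
            schedule.append((GESTURE_LEXICON.get(tag, "beat_gesture"),
                             round(start, 2), round(end, 2)))
    return " ".join(plain_words), schedule

text, plan = schedule_gestures(
    "Once upon a time <deictic>a little girl</deictic> lived here")
print(plan)
```

In a real system, timing would come from the speech synthesizer's phoneme durations rather than from a fixed words-per-second rate.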