Demo of E-SmilesCreator and GenAttitude

One of the key challenges in the development of social virtual actors is to give them the capability to display socio-emotional states through their non-verbal behavior. Different models for synthesizing a virtual agent's non-verbal behavior have been developed, based on studies in the human and social sciences or on annotated corpora of human expressions. One of the major issues in corpus-based behavior synthesis is that the required datasets can be difficult, time-consuming, and expensive to collect and annotate. In recent years, there has been growing interest in using crowdsourcing to collect and annotate such datasets. In this paper, we present a toolbox for easily developing online crowdsourcing tools that build corpora of virtual agents' non-verbal behaviors directly rated by users. We describe two such tools, which have been used to construct a repertoire of virtual smiles and to define virtual agents' non-verbal behaviors associated with social attitudes.
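As a rough illustration of how such a tool might record user input (a hypothetical sketch in Python; the field names and the aggregation rule are illustrative, not the actual E-SmilesCreator or GenAttitude schema):

    from dataclasses import dataclass
    from collections import defaultdict

    # Hypothetical record of one crowdsourcing trial: a user tunes the
    # agent's behavior parameters and rates how well the resulting
    # animation conveys a target label (e.g. "amused smile").
    @dataclass
    class Trial:
        label: str     # target social signal shown to the user
        params: dict   # behavior parameters chosen by the user
        rating: int    # 1-5 judgment of how well the animation matches

    def build_repertoire(trials, min_rating=4):
        """Keep, per label, the parameter sets users judged convincing."""
        repertoire = defaultdict(list)
        for t in trials:
            if t.rating >= min_rating:
                repertoire[t.label].append(t.params)
        return dict(repertoire)

    trials = [
        Trial("amused smile", {"amplitude": 0.9, "open_mouth": True}, 5),
        Trial("polite smile", {"amplitude": 0.4, "open_mouth": False}, 4),
        Trial("polite smile", {"amplitude": 1.0, "open_mouth": True}, 1),
    ]
    print(build_repertoire(trials))  # low-rated configurations are filtered out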

 



A crowdsourcing method for a user-perception based design of social virtual actors
Magalie Ochs, Brian Ravenet, and Catherine Pelachaud
International Workshop “Computers are Social Actors” (CASA), Intelligent Virtual Agents Conference (IVA), Edinburgh, Scotland, 2013

From a User-Created Corpus of Virtual Agent’s Non-Verbal Behavior to a Computational Model of Interpersonal Attitudes
Brian Ravenet, Magalie Ochs, and Catherine Pelachaud
Intelligent Virtual Agents Conference (IVA), Edinburgh, Scotland, 2013

Socially Aware Virtual Characters: The Social Signal of Smiles
Magalie Ochs and Catherine Pelachaud
IEEE Signal Processing Magazine, vol. 30 (2), pp. 128-132, March 2013

Smiling Virtual Agent in Social Context
Magalie Ochs, Radoslaw Niewiadomski, Paul Brunet, and Catherine Pelachaud
Cognitive Processing, Special Issue on “Social Agents. From Theory to Applications” (impact factor: 1.754), vol. 13, pp. 519-532, 2012.

 

 

Séminaire d’Edoardo Provenzi

Edoardo Provenzi (currently a postdoc at TSI) will present his work on:

“Perceptually-inspired enhancement of color LDR and HDR images: a variational perspective”
On Tuesday, October 15th at 10:30 in room DB312 (Dareau site).
Abstract:
The seminar will be devoted to discussing a recently proposed variational framework, formulated both in the spatial and in the wavelet domain, that can embed several existing perceptually-inspired color enhancement algorithms. It can be proven that the properties of the human visual system are satisfied only by a particular class of energy functionals, given by the balance between a local, illumination-invariant contrast enhancement and an entropy-like adjustment to the average radiance. Within this framework, new measures of perceived contrast are proposed; however, while their mathematical definition is firm, their psychophysical validation is still lacking. Rigorous experiments performed with high dynamic range screens may provide a solution to this problem.
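A schematic form of such an energy (a hedged sketch of the balance described above; the symbols and the exact terms are illustrative, not necessarily the precise functional discussed in the seminar):

\[
E(I) \;=\; \beta \sum_{x} d\big(I(x),\mu\big) \;-\; \gamma \sum_{x,y} w(x,y)\, c\big(I(x),I(y)\big),
\]

where \(\mu\) is the average radiance, \(d\) is an entropy-like dispersion term penalizing departure from \(\mu\), \(w(x,y)\) is a locality kernel, and \(c\) grows with the (illumination-invariant) contrast between pixel values. Minimizers balance the contrast gain of the second term against the attachment to the mean enforced by the first, which prevents unbounded stretching of the dynamic range.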
Short bio:
Edoardo Provenzi received his Master's degree in Physics from the University of Milano, Italy, in 2000 and his PhD in Mathematics from the University of Genova, Italy, in 2004. His work in computer vision spans different disciplines: mathematical foundations of perceptually-inspired color correction algorithms, variational and wavelet analysis of perceived contrast, high dynamic range imaging, motion segmentation, and optimal histogram transportation. At the moment, he is a post-doc researcher at Telecom ParisTech.

Snowball effect

Agents start the interaction de-synchronised and after a while stabilise in synchrony.

Demonstrations of SSPNet project

SSPNet (Social Signal Processing Network) is a European Network of Excellence (NoE) which addresses Social Signal Processing.

SSPNet activities revolve around two research foci selected for their primacy in our everyday life:

* Social Signal Processing in Human-Human Interaction
* Social Signal Processing in Human-Computer Interaction

Hence, the main focus of the SSPNet is on developing and validating the scientific foundations and engineering principles (including resources for experimentation) required to address the problems of social behaviour analysis, interpretation, and synthesis. The project focuses on multimodal approaches aimed at: (i) interpreting information for a better understanding of human social signals and implicit intentions, and (ii) generating socially adept behaviour in embodied conversational agents. It will consider how we can model, represent, and employ human social signals and behaviours to design autonomous systems able to know, either through their design or via a process of learning, how to understand and respond to human communicative signals and behaviour.

Different types of virtual character smiles

In this context, our research focus is to study how human-virtual character communication can be facilitated by appropriate non-verbal behaviours of the virtual characters. As a first step, we have studied different types of virtual character smiles (amused smile, polite smile, etc.). In the following videos, you can see different types of virtual character smiles, as well as examples of virtual characters that express different smiles while telling a funny story.
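As a rough sketch of how such smile types might be parameterized (the field names and values below are illustrative; the actual morphological and dynamic characteristics are detailed in the paper cited under Details):

    from dataclasses import dataclass

    # Hypothetical parameterization of a smile along morphological
    # (amplitude, mouth opening, symmetry) and dynamic (duration,
    # onset speed) dimensions.
    @dataclass
    class Smile:
        amplitude: float   # lip-corner raise, 0..1
        open_mouth: bool   # lips parted or not
        symmetry: float    # 1.0 = fully symmetric
        duration_s: float  # total duration in seconds
        onset_s: float     # time to reach the apex

    AMUSED = Smile(amplitude=0.9, open_mouth=True, symmetry=1.0,
                   duration_s=2.5, onset_s=0.4)   # large, fast, open
    POLITE = Smile(amplitude=0.4, open_mouth=False, symmetry=1.0,
                   duration_s=1.2, onset_s=0.3)   # small, closed-lip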



Details:

Ochs, M., Niewiadomski, R., Pelachaud, C., How a Virtual Agent Should Smile? – Morphological and Dynamic Characteristics of Virtual Agent’s Smiles, in Proceedings of the 10th International Conference on Intelligent Virtual Agents, Philadelphia, USA, pp. 427-440, 2010.

Synchrony emergence between dialog partners

Another aspect of making human-conversational agent communication easier is to enable dynamics linked to the quality of the interaction to emerge within the dyad (or within the group of interactants). Among other things, during dialog, synchrony between the non-verbal behaviours of agents is characteristic of the quality of their communication, i.e. it depends on their mutual understanding and on the amount of information they exchange.

In the following videos, you can see synchrony appear between agents simply through the establishment of a coupling between them when they both understand and perceive each other.

Agents start the interaction de-synchronised and after a while stabilise in synchrony.

The top-right agent does not understand what is said (no coupling possible); the bottom-right agent understands what is said and sees the speaker, but is not seen in return (no coupling possible); the top-left agent understands what is said, and both perceives and is perceived by the speaker (coupling and synchrony constitute the stable state of the dyad they form).
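One minimal way to picture this coupling (a hedged sketch, not the model of the paper cited below: two phase oscillators in the Kuramoto style that only lock when mutual perception enables the coupling term):

    import math

    # Two phase oscillators with slightly different natural frequencies.
    # With coupling k > 0 (mutual perception and understanding) their
    # phase difference settles to a constant; with k = 0 they drift.
    def simulate(k, steps=5000, dt=0.01):
        w1, w2 = 1.00, 1.15   # natural frequencies (rad/s)
        p1, p2 = 0.0, 2.0     # initial phases: de-synchronised
        for _ in range(steps):
            p1 += dt * (w1 + k * math.sin(p2 - p1))
            p2 += dt * (w2 + k * math.sin(p1 - p2))
        # wrapped phase difference as a (lack-of-)synchrony measure
        return abs(math.atan2(math.sin(p1 - p2), math.cos(p1 - p2)))

    print("no coupling:", round(simulate(k=0.0), 2))  # keeps drifting
    print("coupling   :", round(simulate(k=0.5), 2))  # locks at a small offset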

Details:

K. Prépin and C. Pelachaud, Shared Understanding and Synchrony Emergence: Synchrony as an Indice of the Exchange of Meaning between Dialog Partners, Third International Conference on Agents and Artificial Intelligence (ICAART 2011), Rome, Italy, January 2011, pp. 1-10 [pdf]

Demo of GretAR

The GretAR project aims at developing a framework to study the interaction between humans and virtual agents in augmented reality. The research, carried out in collaboration with HitLab NZ, focuses on spatial nonverbal behaviors, the sense of presence, proxemics, and other social behaviors in that environment.

Continue reading Demo of GretAR

Demo of the Semaine project

The Semaine project is a European STREP project that aims to build a SAL, a Sensitive Artificial Listener: a multimodal dialogue system which can:

  • interact with humans through a virtual character
  • sustain an interaction with a user for some time
  • react appropriately to the user’s non-verbal behaviour

The SAL system is released to a large extent as an open-source research tool to the community. It can be downloaded from: http://semaine.sourceforge.net/
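As a toy illustration of the kind of reactive-listener rule such a system needs (a hypothetical sketch; this is not the actual SEMAINE architecture or API):

    # Hypothetical backchannel rule: nod when the user pauses while
    # looking at the agent, and never interrupt ongoing speech.
    def choose_backchannel(user_state):
        if user_state["speaking"]:
            return None            # don't interrupt the user
        if user_state["pause_s"] > 0.6 and user_state["gaze_at_agent"]:
            return "head_nod"      # signal continued attention
        return None

    print(choose_backchannel(
        {"speaking": False, "pause_s": 0.8, "gaze_at_agent": True}))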



Demo of the ABCD protocol

In this demo, we present a cross-layer protocol for video streaming in mobile ad-hoc networks. The protocol, called ABCD, builds and maintains a multi-tree overlay network optimised w.r.t. the underlying wireless network; the video descriptions are then sent independently, one on each tree. The proposed protocol allows video streaming with a negligible loss rate and short delay, even under high churn, thanks to the path independence among the descriptions.
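To make the multiple-description idea concrete (a hedged sketch; the node names and tree layout are illustrative, not the actual ABCD data structures):

    # Hypothetical multi-tree push: each video description travels down
    # its own tree. A node that loses its parent in one tree still
    # receives the other description and can decode at reduced quality.
    trees = {
        "description_0": {"S": ["A", "B"], "A": ["C"], "B": [], "C": []},
        "description_1": {"S": ["C", "B"], "C": ["A"], "B": [], "A": []},
    }

    def push(tree, node, packet, received):
        received.setdefault(node, []).append(packet)
        for child in tree[node]:
            push(tree, child, packet, received)

    received = {}
    for desc, tree in trees.items():
        push(tree, "S", desc, received)

    # Every node receives both descriptions via two different parents.
    print(received)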

The following videos show the build-up of the trees for MANETs of 30 and 100 nodes. Two descriptions are used in both cases, resulting in a double distribution tree.


Download Video


Download Video

The details of this protocol can be found in the following articles:

  1. C. Greco, M. Cagnazzo. « A cross-layer protocol for cooperative content delivery over mobile ad-hoc networks ». In International Journal of Communication Networks and Distributed Systems, vol. 7, no. 1, pp. 49-63, July 2011.
  2. C. Greco, M. Cagnazzo, B. Pesquet-Popescu. « ABCD : Un protocole cross-layer pour la diffusion vidéo dans des réseaux sans fil ad-hoc ». In Colloque GRETSI – Traitement du Signal et des Images, September 2011. Bordeaux, France.

Arrival of Giuseppe Valenzise

Giuseppe Valenzise joined the team on July 1st as a post-doc. He carried out his PhD at Politecnico di Milano (Italy). His activity will focus on multiview video compression and its relationship with streaming and real-time access.

Seminar by Dr. Iole Moccagatta

Dr. Iole Moccagatta, from NVIDIA Corporation, will give a seminar on Monday, March 21, 2011 at 10h00 (room E200, Barrault) on Graphics Processing Units and the CUDA technology.

Title: “Using CUDA to unleash computational power of GPU”

Abstract:
Inspired by the computational power and growth potential of the Graphics Processing Units (GPUs) used in gaming, pioneering programmers began to explore the use of GPUs for general-purpose computation, known as GPGPU computing.

In the early days, mapping general-purpose computation onto GPUs was difficult and required painstaking effort. GPU architecture designers at NVIDIA realized the potential benefits of GPUs in solving general-purpose problems, establishing the GPU’s place as a processor for non-graphics problems. To achieve this goal, engineers at NVIDIA designed a C/C++ compiler, SDK, and runtime software, making GPGPU programming available to C/C++ programmers. This exposure of the GPU architecture’s parallel processing capability, named CUDA, enabled programmers to readily design and implement applications on GPUs.

In this talk, we will first present the history of GPU computing to explain the rationale behind the modern programmable GPU architecture. Then, we will introduce the CUDA programming model, including an overview of tools and libraries. Finally, we will conclude with a showcase of GPU computing applications developed on the CUDA architecture by programmers, scientists, and researchers around the world.
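For flavor, here is a minimal data-parallel kernel in the CUDA style (a sketch using Numba’s Python bindings for CUDA rather than the C/C++ toolchain; it illustrates the programming model of one thread per element launched over a grid of thread blocks, and requires a CUDA-capable GPU):

    import numpy as np
    from numba import cuda

    # Each GPU thread handles one array element; the grid supplies
    # enough threads to cover the whole array.
    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)      # absolute index of this thread
        if i < out.size:      # guard threads past the end
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)

    assert np.allclose(out, a + b)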

Biography:
Iole Moccagatta is a Senior Video Architect in NVIDIA’s Mobile Division, where she is currently working on the Tegra family of computers-on-a-chip. Prior to NVIDIA, she held positions at Texas Instruments, Rockwell Science Center, LSI Logic (previously C-Cube Microsystems), and IMEC. She has also actively participated in various image and video standardization bodies (MPEG, JPEG, ITU-T, and others).

Seminar by Prof. Dah Ming Chiu

Prof. Dah Ming Chiu from the Chinese University of Hong Kong (CUHK) will give a seminar on Friday, March 4, 2011 at 10h00 (room DA-006), presenting his research activities on P2P networking.

Title: Models and replication algorithms for P2P-assisted VoD

Abstract:
We consider a P2P-assisted Video-on-Demand system where each peer can store a relatively small number of movies to offload the server when these movies are requested. How much local storage is needed? How does this depend on other system parameters, such as the number of peers, the number of movies, the uploading capacity of peers relative to the playback rate, and the skewness of movie popularity? How many copies should the system keep of each movie, and how does this depend on movie popularity? Should all movies be replicated in the P2P system, or should some be kept at the server only? If the latter, which movies should be kept at the server? Once we understand these issues, can we come up with a distributed and robust algorithm to achieve the desired replication? Will the system adapt to changes in system parameters over time? We will describe our work in trying to answer these kinds of questions.
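As a toy illustration of the replication question (a hypothetical sketch: proportional-to-popularity replication under a Zipf popularity model and a fixed per-peer storage budget; this is a natural baseline, not necessarily the algorithm advocated in the talk):

    # Zipf-like movie popularity; each peer stores `slots` movies.
    # How many copies of each movie should the P2P system hold?
    def replica_counts(n_movies, n_peers, slots, zipf_s=0.8):
        weights = [1.0 / (rank ** zipf_s) for rank in range(1, n_movies + 1)]
        total = sum(weights)
        budget = n_peers * slots  # total storage slots in the swarm
        # proportional replication, at least one copy per movie
        return [max(1, round(budget * w / total)) for w in weights]

    counts = replica_counts(n_movies=200, n_peers=1000, slots=4)
    print(counts[:5], "...", counts[-5:])  # popular vs. unpopular movies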

Biography:
Dah Ming Chiu is a professor in the Department of Information Engineering at the Chinese University of Hong Kong (CUHK), where he is currently serving as department chairman. He received his BSc from Imperial College London and his PhD from Harvard University, and worked in industry before joining CUHK in 2002.