MAD-EEG: an EEG dataset for decoding auditory attention to a target instrument in polyphonic music

MAD-EEG contains 20-channel surface electroencephalographic (EEG) signals recorded from 8 subjects while they were listening to polyphonic music and attending to a particular instrument.

The dataset will soon be made publicly available on this webpage.

UE-HRI dataset

(Photo: the Pepper robot)

Atef Ben-Youssef, Chloé Clavel and Slim Essid

UE-HRI is a new dataset collected for the study of user engagement in spontaneous Human-Robot Interactions (HRI).

The UE-HRI dataset consists of recordings of humans interacting with the social robot Pepper. It targets the diverse social signals involved in user engagement, considering a wide range of heterogeneous […]

EMOEEG: a New Multimodal Dataset for Dynamic EEG-based Emotion Recognition with Audiovisual Elicitation

EMOEEG is a multimodal dataset where physiological responses to both visual and audiovisual stimuli were recorded, along with videos of the subjects, with a view to developing affective computing systems, especially automatic emotion recognition systems. The experimental setup involves various physiological sensors, especially electroencephalographic sensors.

This dataset will soon be made publicly available through this […]

QUASI Database – A musical audio signal Database for Source Separation

QUASI, which stands for QUaero Audio SIgnals, is a database dedicated to research on Music Information Retrieval.

This database was gathered within the QUAERO Project, funded by the French agency OSEO.


This database is a joint work between the METISS team at INRIA/IRISA and the AAO team at Telecom-ParisTech.

The METISS research team […]

Onset_Leveau: a database for Onset detection

Onset_Leveau is a small annotated database for onset detection that was used, in particular, for the MIREX 2005 onset detection task.

Details concerning this database are given in the following paper: [pdf] P. Leveau, L. Daudet and G. Richard, "Methodology and tools for the evaluation of automatic onset detection algorithms in music", International Symposium on Music […]

Romeo-HRTF: A Multimicrophone Head Related Transfer Function Database

We propose Romeo-HRTF, a Head-Related Transfer Function (HRTF) database. We extend the usual concept of binaural HRTF to the context of audition by a humanoid robot, where the robot head is equipped with an array of microphones. A first version of the database is proposed, based upon a dummy that […]

MAPS Database – A piano database for multipitch estimation and automatic transcription of music

MAPS, standing for MIDI Aligned Piano Sounds, is a piano sound database dedicated to research on multi-F0 estimation and automatic transcription:

MAPS comprises about 31 GB of CD-quality recordings in .wav format. High-quality audio is obtained by means of virtual piano software and a Yamaha Disklavier. Nine settings of different pianos and […]
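Since the recordings are plain CD-quality .wav files, they can be read with Python's standard-library wave module. The sketch below is illustrative only: the file name is hypothetical (MAPS file naming is not described here), so to stay self-contained it first synthesizes a short 44.1 kHz, 16-bit mono file and then reads it back the way one would read a MAPS recording.

```python
import math
import struct
import wave

# Hypothetical file name -- actual MAPS file names are not specified here.
path = "example_note.wav"
rate = 44100  # CD-quality sample rate

# Synthesize one second of a 440 Hz sine (roughly a piano A4) as stand-in data.
with wave.open(path, "wb") as w:
    w.setnchannels(1)   # mono, for simplicity of the sketch
    w.setsampwidth(2)   # 16-bit samples, as in CD-quality audio
    w.setframerate(rate)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * n / rate)))
        for n in range(rate)
    )
    w.writeframes(frames)

# Read it back: this is the same pattern one would use on a MAPS recording.
with wave.open(path, "rb") as w:
    assert w.getframerate() == 44100  # sanity-check the CD-quality rate
    samples = w.readframes(w.getnframes())

print(len(samples))  # 88200 bytes: 44100 frames x 2 bytes per sample
```

In practice one would typically convert the raw bytes to an array (e.g. with numpy's frombuffer) before running multi-F0 estimation on them.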

ENST-Drums: an extensive audio-visual database for drum signals processing

The ENST-Drums database is a large and varied research database for automatic drum transcription and processing:

Three professional drummers specialized in different music genres were recorded. The total duration of the audio material recorded per drummer is around 75 minutes. Each drummer played his own drum kit. Each sequence used either sticks, rods, brushes or mallets […]