TCTS Lab Staff
 
 

 

Jérôme Urbain

[AVLaughterCycle Database] [Groups] [AVLaughterCycle Database Phonetic annotation] [Examples of acoustic laughter synthesis] [Publications]

Researcher, PhD  

University of Mons (UMONS) 
Engineering Faculty of Mons (FPMs) 

Numediart Research Institute  
Infortech Research Institute  
TCTS Lab  
31, Boulevard Dolez  
B-7000 Mons (Belgium)  

phone: +32 65 37 47 73  
fax: +32 65 37 47 29  
jerome.urbain


Jérôme Urbain holds an Electrical Engineering degree from FPMs (2006). He did his Master's thesis at Lehigh University (USA) in the field of sleep research.

He obtained a PhD degree in 2014 for his thesis on acoustic laughter processing (modeling, characterization, synthesis), in the framework of the CALLAS and ILHAIRE projects. 

Short bio

Jérôme Urbain received the Master's degree in Electrical Engineering from FPMs in 2006. During his studies, he took part in a research project at UPC in Barcelona aiming at finding prominent words in speech. He did his Master's thesis work at Lehigh University (Pennsylvania, USA) in the field of sleep processing, in particular on the discrimination between sleep and wake using actigraphy.
In September 2006 he joined the Circuit Theory and Signal Processing Lab of FPMs, which is now part of UMONS. He first worked on sleep stage classification in the framework of the DREAMS project.
In 2007 he moved to the CALLAS project, whose aim was to develop affective new media, with a focus on arts and entertainment. Within CALLAS, Jérôme was principally involved in the integration of a speech recognition component and started research on acoustic laughter processing (classification, recognition, copy-synthesis), which became the subject of his PhD thesis.
Since 2011 he has been involved in the ILHAIRE project, which deals with various aspects of laughter (recognition, classification, synthesis, perception) and brings together experts in related domains (psychology, embodied conversational agents, body movements, dialog management, data collection, real-time classification, ...).
Jérôme received a PhD degree in Applied Sciences in 2014 for his work on acoustic laughter processing.

AVLaughterCycle Database:
The AVLaughterCycle database was recorded in the framework of the eNTERFACE'09 AVLaughterCycle project. Its objective was to build an audiovisual laughter database, combining audio and accurate facial motion tracking. It is the first laughter database combining these two signals.
To obtain robust facial motion tracking, it was decided to use marker-based techniques, with markers placed on the subject's face. As a consequence, subjects knew they were being recorded. An induction method was therefore used to elicit spontaneous laughter: a funny video lasting around 10 minutes. The subjects' reactions while watching the video were recorded. At the end of the video, each subject was instructed to perform one acted laugh.
The database contains about 1000 laughs of variable shapes (length, acoustic content, etc.). It also contains other sounds (speech, breath, ...), but in smaller quantities.
For the moment, audio recordings as well as the first round of annotation for each file can be downloaded from this page. Video recordings can be obtained upon request. To get a DVD copy, please contact me.
The data can only be used for research purposes. Data may not be redistributed nor publicly broadcast.
The database can be useful for a broad range of research topics: audio/video laughter recognition, modeling or synthesis; the synchronization between the two modalities; analyses of the differences between spontaneous and acted laughter (even though the number of acted laughs is much lower than the number of spontaneous ones); studies of the succession of laughs during the video; measuring differences between individuals; etc.
More information can be obtained from the AVLaughterCycle Database README.
The database can be downloaded from this link.
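As an illustration, laughter segment annotations of this kind can be loaded and summarized with a few lines of Python. The tab-separated layout assumed below (start time, end time, label per line) is hypothetical; the actual file format is the one described in the README.

```python
# Hypothetical loader for segment annotations in a simple tab-separated
# format: start_time <TAB> end_time <TAB> label, one segment per line.
# The real AVLaughterCycle annotation layout is documented in the README.

def load_annotations(path):
    """Return a list of (start, end, label) tuples, times in seconds."""
    segments = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            start, end, label = line.split("\t")
            segments.append((float(start), float(end), label))
    return segments

def total_duration(segments, label="laughter"):
    """Sum the duration of all segments carrying the given label."""
    return sum(end - start for start, end, lab in segments if lab == label)
```

Such a loader makes it easy to compute, for example, the total amount of laughter per subject before any acoustic analysis.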

^ Top ^   

AVLaughterCycle Database Phonetic annotation:

Added on April 19, 2011.
You can now access the detailed phonetic annotation of the AVLaughterCycle database.
We suggest you first read this README file.
The files can be downloaded with the following links:
Examples of non-IPA labels (hum, cackle, groan, snore, vocal fry and grunt).
Database phonetic annotations.
Segmented laughs.

^ Top ^   

Examples of acoustic laughter synthesis:

Added on February 28, 2014.
Here are some links to synthesized laughs:

Examples of laughs synthesized with Hidden Markov Models, from phonetic transcriptions, with various parameters (F0, rhythm, number of syllables). Note that the animations have been synthesized by our partners in ILHAIRE (Telecom-ParisTech for the female character, La Cantoche for the male character).

Examples of laughs synthesized with Hidden Markov Models, both from phonetic transcriptions and from arousal curves (in the latter case, only the arousal curve is used as input).
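The generation step behind such systems can be sketched with a toy example: a Markov chain over laughter "phones" is sampled to produce a phonetic transcription, which a trained synthesizer would then turn into audio. The phone set, transition probabilities and structure below are invented for illustration only; they are not the actual ILHAIRE models, which were trained on annotated laughter data.

```python
# Toy illustration of the Markov-chain idea behind HMM-based laughter
# synthesis: sample a sequence of laughter "phones" that could serve as
# a phonetic transcription. All probabilities here are made up.
import random

PHONES = ["h", "a", "pause", "inhale"]

# TRANSITIONS[state] lists (next_state, probability) pairs.
TRANSITIONS = {
    "start":  [("h", 0.8), ("inhale", 0.2)],
    "h":      [("a", 1.0)],                              # "h" opens a syllable
    "a":      [("h", 0.6), ("pause", 0.3), ("inhale", 0.1)],
    "pause":  [("h", 0.5), ("inhale", 0.5)],
    "inhale": [("h", 0.7), ("end", 0.3)],
}

def sample_transcription(max_phones=20, seed=None):
    """Sample a laughter phone sequence from the toy Markov chain."""
    rng = random.Random(seed)
    state, sequence = "start", []
    while len(sequence) < max_phones:
        next_states, probs = zip(*TRANSITIONS[state])
        state = rng.choices(next_states, weights=probs)[0]
        if state == "end":
            break
        sequence.append(state)
    return sequence
```

In a full system, each sampled phone would additionally carry duration and acoustic parameters (F0, spectral envelope) generated by the trained HMMs before vocoding.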

^ Top ^   

Groups:

^ Top ^   

Publications:

PhD Dissertation

2014
J. URBAIN, 2014, "Acoustic Laughter Processing", PhD thesis supervised by Prof. T. Dutoit, May 2014.

Regular Papers in Journals

2014
J. URBAIN, H. CAKMAK, A. CHARLIER, M. DENTI, T. DUTOIT, S. DUPONT, 2014, "Arousal-driven Synthesis of Laughter", IEEE Journal of Selected Topics in Signal Processing, volume 8, issue 2, pp. 273-284, doi:10.1109/JSTSP.2014.2309435.
2010
J. URBAIN, R. NIEWIADOMSKI, E. BEVACQUA, T. DUTOIT, A. MOINET, C. PELACHAUD, B. PICART, J. TILMANNE, J. WAGNER, 2010, "AVLaughterCycle - Enabling a virtual agent to join in laughing with a conversational partner using a similarity-driven audiovisual laughter animation", Journal on Multimodal User Interfaces (JMUI), Volume 4, Number 1, pp. 47-58, DOI: 10.1007/s12193-010-0053-1.
2009
J. TILMANNE, J. URBAIN, M.V. KOTHARE, A. VANDE WOUWER, S.V. KOTHARE, 2009, "Algorithms for sleep-wake identification using actigraphy: a comparative study and new results", Journal of Sleep Research, Volume 18, pp. 85-98.

Papers in Conference Proceedings

2014
H. CAKMAK, J. URBAIN, J. TILMANNE, T. DUTOIT, 2014, "Evaluation of HMM-based visual laughter synthesis", Proceedings of the IEEE International Conference on Audio Speech and Signal Processing (ICASSP 2014), Florence, Italy, May 4-9.
H. CAKMAK, J. URBAIN, T. DUTOIT, 2014, "The AV-LASYN Database : A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis", Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland, May 26-31.
2013
J. URBAIN, H. CAKMAK, T. DUTOIT, 2013, "Automatic Phonetic Transcription of Laughter and its Application to Laughter Synthesis", Proceedings of the fifth biannual Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII2013), pp. 153-158, Geneva, Switzerland, 2-5 September [Best Student Paper Award].
J. URBAIN, H. CAKMAK, T. DUTOIT, 2013, "Evaluation of HMM-Based laughter synthesis", Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), Vancouver, Canada, May 26-31.
M. MANCINI, L. ACH, E. BANTEGNIE, T. BAUR, N. BERTHOUZE, D. DATTA, Y. DING, S. DUPONT, H. GRIFFIN, F. LINGENFELSER, R. NIEWIADOMSKI, C. PELACHAUD, O. PIETQUIN, B. PIOT, J. URBAIN, G. VOLPE, J. WAGNER, 2013, "Laugh When You're Winning", Proceedings of the 9th International Summer Workshop on Multimodal Interfaces - eNTERFACE'13, in Innovative and Creative Developments in Multimodal Interaction Systems - IFIP Advances in Information and Communication Technology (IFIP AICT), Volume 425, pp. 50-79, Lisbon, Portugal, July 15 - August 9, doi:10.1007/978-3-642-55143-7_3.
R. NIEWIADOMSKI, J. HOFMANN, J. URBAIN, T. PLATT, J. WAGNER, B. PIOT, H. CAKMAK, S. PAMMI, T. BAUR, S. DUPONT, M. GEIST, F. LINGENFELSER, G. MCKEOWN, O. PIETQUIN, W. RUCH, 2013, "Laugh-aware virtual agent and its impact on user amusement", Proceedings of 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS2013), Saint Paul, Minnesota, USA, May 6 - 10.
J. URBAIN, R. NIEWIADOMSKI, M. MANCINI, H. GRIFFIN, H. CAKMAK, L. ACH, G. VOLPE, 2013, "Multimodal Analysis of laughter for an Interactive System", Proceedings of the 5th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 2013), Mons, Belgium, July 3-5.
T. DRUGMAN, J. URBAIN, N. BAUWENS, R. CHESSINI BOSE, C. VALDERRAMA, P. LEBECQUE, T. DUTOIT, 2013, "Objective Study of Sensor Relevance for Automatic Cough Detection", IEEE Transactions on Information Technology in BioMedicine, vol. 17, issue 3, pp. 609-707, doi:10.1109/JBHI.2013.2239303.
2012
T. DRUGMAN, J. URBAIN, N. BAUWENS, R. CHESSINI BOSE, A.-S. AUBRIOT, P. LEBECQUE, T. DUTOIT, 2012, "Audio and Contact Microphones for Cough Detection", Proceedings of Interspeech 2012, Portland, Oregon, USA, September 9-13.
J. URBAIN, H. CAKMAK, T. DUTOIT, 2012, "Development of HMM-based acoustic laughter synthesis", Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, Dublin, Ireland, October 26-27.
T. PLATT, J. HOFMANN, W. RUCH, R. NIEWIADOMSKI, J. URBAIN, 2012, "Experimental standards in research on AI and humor when considering psychology", AAAI Technical Report FS-12-02 Artificial Intelligence of Humor, Washington DC.
R. NIEWIADOMSKI, J. URBAIN, C. PELACHAUD, T. DUTOIT, 2012, "Finding out the audio and visual features that influence the perception of laughter intensity and differ in inhalation and exhalation phases", Proceedings from 4th International Workshop on Corpora for Research on EMOTION SENTIMENT & SOCIAL SIGNALS, Satellite of LREC 2012, Istanbul, Turkey, 2012.
J. URBAIN, R. NIEWIADOMSKI, J. HOFMANN, E. BANTEGNIE, T. BAUR, N. BERTHOUZE, H. CAKMAK, R.T. CRUZ, S. DUPONT, M. GEIST, H. GRIFFIN, F. LINGENFELSER, M. MANCINI, M. MIRANDA, G. MCKEOWN, S. PAMMI, O. PIETQUIN, B. PIOT, T. PLATT, W. RUCH, A. SHARMA, G. VOLPE, J. WAGNER, 2012, "Laugh Machine", Proceedings of the 8th International Summer Workshop on Multimodal Interfaces - eNTERFACE'12, pp. 13-34, July, Metz, France.
J. URBAIN, T. DUTOIT, 2012, "Measuring instantaneous laughter intensity from acoustic features", Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, Dublin, Ireland, October 26-27.
2011
J. URBAIN, T. DUTOIT, 2011, "A Phonetic Analysis of Natural Laughter, for Use in Automatic Laughter Processing Systems", Proceedings of Fourth bi-annual International Conference of the HUMAINE Association on Affective Computing and Intelligent Interaction (ACII2011), Memphis, Tennessee, USA, 2011.
T. DRUGMAN, J. URBAIN, T. DUTOIT, 2011, "Assessment of Audio Features for Automatic Cough Detection", 19th European Signal Processing Conference (Eusipco11), August 29th - September 2nd, Barcelona, Spain.
S. DUPONT, C. FRISSON, J. LEROY, A. MOINET, T. RAVET, X. SIEBERT, J. URBAIN, 2011, "Loopjam", NEM 2011, Torino, Italy.
2010
J. URBAIN, E. BEVACQUA, T. DUTOIT, A. MOINET, R. NIEWIADOMSKI, C. PELACHAUD, B. PICART, J. TILMANNE, J. WAGNER, 2010, "La base de données AVLaughterCycle", Actes des 28emes Journées d'Etude sur la Parole (JEP 2010), pages 61-64, 25-28 mai, 2010, Mons, Belgique.
J. URBAIN, E. BEVACQUA, T. DUTOIT, A. MOINET, R. NIEWIADOMSKI, C. PELACHAUD, B. PICART, J. TILMANNE, J. WAGNER, 2010, "The AVLaughterCycle Database", Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10), pp. 2996-3001, European Language Resources Association (ELRA), May 19-21, Valletta, Malta.
2009
S. DUPONT, T. DUBUISSON, J. URBAIN, R. SEBBE, N. D'ALESSANDRO, C. FRISSON, 2009, "AudioCycle : Browsing Musical Loop Libraries", in Proc. of IEEE Content Based Multimedia Indexing Conference (CBMI09), Chania, Greece, June 2009.
J. URBAIN, T. DUBUISSON, S. DUPONT, C. FRISSON, R. SEBBE, N. D'ALESSANDRO, 2009, "Audiocycle: a similarity-based visualization of musical libraries", Proceedings of ICME 2009, pp. 1847-1848.
J. URBAIN, E. BEVACQUA, T. DUTOIT, A. MOINET, R. NIEWIADOMSKI, C. PELACHAUD, B. PICART, J. TILMANNE, J. WAGNER, 2009, "AVLaughterCycle: An audiovisual laughing machine", Proceedings of the 5th International Summer Workshop on Multimodal Interfaces - eNTERFACE'09, pp. 79-87, July 13th - August 7th, Genoa, Italy.
S. AL MOUBAYEB, M. BAKLOUTI, M. CHETOUANI, T. DUTOIT, A. MAHDHAOUI, J-C MARTIN, S. ONDAS, C. PELACHAUD, M YILMAZ, J. URBAIN, 2009, "Generating Robot/Agent Backchannels During a Storytelling Experiment", Proceedings of 2009 IEEE International Conference on Robotics and Automation, May 12 - 17, 2009, Kobe, Japan.
S.W. GILROY, M. CAVAZZA, M. NIRANEN, E. ANDRE, T. VOGT, J. URBAIN, M. BENAYOUN, H. SEICHTER, M. BILLINGHURST, 2009, "PAD-based multimodal affective fusion", In 2009 International Conference on Affective Computing and Intelligent Interaction, Amsterdam, The Netherlands, September 10-12, IEEE.
J. URBAIN, S. DUPONT, R. NIEWIADOMSKI, T. DUTOIT, C. PELACHAUD, 2009, "Towards a virtual agent using similarity-based laughter production", in Proc. of Interdisciplinary Workshop on Laughter and other Interactional Vocalisations in Speech, Berlin, February 27-28, 2009.
2008
S.W. GILROY, M. CAVAZZA, R. CHAIGNON, S.M. MAKELA, M. NIRANEN, E. ANDRE, T. VOGT, J. URBAIN, H. SEICHTER, M. BILLINGHURST, M. BENAYOUN, 2008, "An affective model of user experience for interactive art", In Proceedings of the 2008 international Conference on Advances in Computer Entertainment Technology (ACE'08), December 3-5, 2008, Yokohama, Japan. vol. 352. ACM, New York, NY, 107-110.
S.W. GILROY, M. CAVAZZA, R. CHAIGNON, S.M. MAKELA, M. NIRANEN, E. ANDRE, T. VOGT, J. URBAIN, M. BILLINGHURST, H. SEICHTER, M. BENAYOUN, 2008, "E-tree: emotionally driven augmented reality art", In Proc. ACM Multimedia, pages 945-948, Vancouver, BC, Canada.
S. AL MOUBAYEB, M. BAKLOUTI, M. CHETOUANI, T. DUTOIT, A. MAHDHAOUI, J-C MARTIN, S. ONDAS, C. PELACHAUD, J. URBAIN, M YILMAZ, 2008, "Multimodal Feedback from Robots and Agents in a Storytelling Experiment", Proc. eNTERFACE08, Paris, pp. 43-55.
2007
J. TILMANNE, J. URBAIN, M.V. KOTHARE, A. VANDE WOUWER, S.V. KOTHARE, 2007, "Actigraphy as a way of distinguishing sleep from wake", Proceedings of the 26th Benelux Meeting on Systems and Control, March 13-15, 2007, Lommel, Belgium, p 61.
F. CHARLES, S. LEMERCIER, T. VOGT, N. BEE, M. MANCINI, J. URBAIN, M. PRICE, E. ANDRE, C. PELACHAUD, M. CAVAZZA, 2007, "Affective Interactive Narrative in the CALLAS Project", ICVS2007, Saint-Malo, France, 5-7 Dec 2007.
J. TILMANNE, J. URBAIN, M.V. KOTHARE, A. VANDE WOUWER, S.V. KOTHARE, 2007, "Algorithms for sleep-wake identification using actigraphy: a comparative study and new results", Proceedings of the 21st annual meeting of the associated professional sleep societies, June 9-14, 2007, Minneapolis, USA.

Technical Reports

2011
S. DUPONT, C. FRISSON, J. URBAIN, S. MAHMOUDI, X. SIEBERT, 2011, "MediaBlender: Interactive Multimedia Segmentation", QPSR of the numediart research program, volume 4, n° 1, pp. 1-6, March 2011.
2009
J. URBAIN, E. BEVACQUA, T. DUTOIT, A. MOINET, R. NIEWIADOMSKI, C. PELACHAUD, B. PICART, J. TILMANNE, J. WAGNER, 2009, "AVLaughterCycle: An audiovisual laughing machine", QPSR of the numediart research program, volume 2, n° 3, pp. 97-104, September 2009.
S. DUPONT, T. DUBUISSON, J.A. MILLS III, A. MOINET, X. SIEBERT, D. TARDIEU, J. URBAIN, 2009, "LaughterCycle", QPSR of the numediart research program, volume 2, n° 2, pp. 23-32, June 2009.
2008
S. DUPONT, N. D'ALESSANDRO, T. DUBUISSON, C. FRISSON, R. SEBBE, J. URBAIN, 2008, "AudioCycle : Browsing Musical Loops Libraries", QPSR of the numediart research program, volume 1, n° 4, December 2008.
S. AL MOUBAYEB, M. BAKLOUTI, M. CHETOUANI, T. DUTOIT, A. MAHDHAOUI, J-C MARTIN, S. ONDAS, C. PELACHAUD, J. URBAIN, M. YILMAZ, 2008, "Project #3.4: Multimodal Feedback from Robots and Agents in a Storytelling Experiment", QPSR of the numediart research program, volume 1, n° 3, September 2008.

^ Top ^