Joint AMI/PASCAL/IM2/M4 Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Martigny 2004
AMI (Augmented Multiparty Interaction, http://www.amiproject.org) is a newly launched (January 2004) European Integrated Project (IP), funded under the Sixth Framework Programme (FP6) as part of its IST programme. AMI targets computer-enhanced multimodal interaction in the context of meetings. The project aims to substantially advance the state of the art in important underpinning technologies such as human-human communication modelling, speech recognition, computer vision, and multimedia indexing and retrieval. It will also produce tools for off-line and on-line browsing of multimodal meeting data, including meeting structure analysis and summarisation functions. In addition, the project makes recorded and annotated multimodal meeting data widely available to the European research community, thereby contributing to the research infrastructure in the field.
PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning, http://www.pascal-network.org) is a newly launched (December 2003) European Network of Excellence (NoE), also funded under the IST programme. The NoE brings together experts from basic research areas such as statistics, optimisation and computational learning, and from a number of application areas, with the objective of integrating research agendas and improving the state of the art in all the fields concerned.
IM2 (Interactive Multimodal Information Management, http://www.im2.ch) is a Swiss National Center of Competence in Research (NCCR) aiming at the advancement of research, and the development of prototypes, in the field of man-machine interaction. IM2 is particularly concerned with technologies coordinating natural input modes (such as speech, image, pen, touch, hand gestures, head and/or body movements, and even physiological sensors) with multimedia system outputs, such as speech, sounds, images, 3D graphics and animation. Among other applications, IM2 is also targeting research and development in the context of smart meeting rooms.
M4 (Multi-Modal Meeting Manager, http://www.m4project.org) is an EU IST project launched in March 2002 concerned with the construction of a demonstration system to enable structuring, browsing and querying of an archive of automatically analysed meetings. The archived meetings will have taken place in a room equipped with multimodal sensors.
Given the multiple links between AMI, PASCAL, IM2 and M4, it was decided to organize a joint workshop to bring together researchers from the different communities around the common theme of advanced machine learning algorithms for processing and structuring multimodal human interaction in meetings.
Lectures
An Integrated framework for the management of video collection
Tandem Connectionist Feature Extraction for Conversational Speech Recognition
Accessing Multimodal Meeting Data: Systems, Problems and Possibilities
An Efficient Online Algorithm for Hierarchical Phoneme Classification
Automatic pedestrian tracking using discrete choice models and image correlation...
Towards Computer Understanding of Human Interactions
Using Static Documents as Structured and Thematic Interfaces to Multimedia Meeti...
Mountains, Exploration, Education, Rich Media and Design
A Programming Model for Next Generation Multimodal Applications
EU research initiatives in multimodal interaction
Artificial Companions
The NITE XML Toolkit meets the ICSI Meeting Corpus: import, annotation, and brow...
Confidence Measures in Speech Recognition
A Mixed-lingual Phonological Component in Polyglot TTS Synthesis
S-SEER: A Multimodal Office Activity Recognition System with Selective Perceptio...
Recognition of Isolated Complex Mono- and Bi-Manual 3D Hand Gestures using Discr...
Mixture of SVMs for Face Class Modeling
On the Adequacy of Baseform Pronunciations and Pronunciation Variants
Zakim - A multimodal software system for large-scale teleconferencing
Meeting Modelling
Browsing Recorded Meetings With Ferret
Immersive Conferencing Directions at FX Palo Alto Laboratory
