Toward Adaptive Information Fusion in Multimodal Systems
Techniques for information fusion are at the heart of multimodal system design. In this talk, I'll summarize recent work on predictive modeling of users' multimodal integration patterns, including evidence that (1) there are large individual differences in users' dominant speech-and-pen multimodal integration patterns, (2) these patterns can be identified almost immediately and remain highly consistent for individual users over time, (3) they are highly resistant to change, even when users are given strong selective reinforcement or explicit instructions to switch patterns, and (4) these distinct patterns appear to derive from enduring differences among users in cognitive style. I'll also discuss findings on the systematic entrenchment of users' dominant multimodal integration pattern under load, both as task difficulty increases and during error handling. I'll conclude by highlighting work we are now pursuing that combines predictive user modeling with machine learning techniques to accelerate, generalize, and improve the reliability of information fusion during multimodal system processing. Finally, I'll discuss the implications of this research for the design of adaptive multimodal systems with substantially improved performance characteristics.
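To make the idea of identifying a user's dominant integration pattern concrete, here is a minimal sketch in Python. It assumes the common distinction between "simultaneous" integrators (whose speech temporally overlaps their pen input) and "sequential" integrators (whose pen input precedes speech with a lag), and classifies a user from the fraction of overlapping commands. The class name, field names, and the majority-threshold rule are illustrative assumptions for this sketch, not the actual model described in the talk.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MultimodalCommand:
    """Timestamps (in seconds) for one combined pen + speech command."""
    pen_start: float
    pen_end: float
    speech_start: float
    speech_end: float

    def overlaps(self) -> bool:
        # True if the speech interval and pen interval intersect in time.
        return (self.speech_start < self.pen_end
                and self.pen_start < self.speech_end)


def dominant_pattern(commands: List[MultimodalCommand],
                     threshold: float = 0.6) -> str:
    """Classify a user's dominant integration pattern from a few commands.

    The 0.6 majority threshold is an illustrative assumption; the point is
    that a handful of early observations can already yield a stable label.
    """
    simultaneous = sum(c.overlaps() for c in commands)
    frac = simultaneous / len(commands)
    if frac >= threshold:
        return "simultaneous"
    if frac <= 1 - threshold:
        return "sequential"
    return "undetermined"
```

A fusion engine could use such a label to adapt its temporal thresholds per user, e.g. waiting longer for speech after pen input from a known sequential integrator before closing the fusion window.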