See, Hear, Smell, Touch, Taste: Multi-Modal Interaction in Virtual Environments
In this talk, I present an overview of technology and techniques used to stimulate the human senses in virtual reality applications. The visual and audio channels have received by far the most attention from researchers, to the point that we can now produce synthetic visual and audio experiences that are (arguably) indistinguishable from reality. Effective stimulation of the smell and taste senses has been more elusive, partly because of the intrusive nature of the stimulation, and partly because of a general lack of understanding of the composition of the stimuli. Touch has received a significant amount of attention from researchers, with the main impediments to widespread use being device weight and power requirements, as well as the encumbrance present in general solutions. I present work we have been doing to provide wearable, untethered, full-body haptic feedback systems, as well as a software framework for integrating visual, audio, and haptic rendering into a single system.
Robert W. Lindeman is an Assistant Professor in the Department of Computer Science at The George Washington University. He received a B.A. in Computer Science, cum laude, from Brandeis University in 1987, an M.S. in Systems Management from the University of Southern California in 1992, and a Doctor of Science in Computer Science from The George Washington University in 1999. Dr. Lindeman's research interests include Computer Graphics, Virtual Reality, Human-Computer Interaction, Wearable Computers, and Pervasive Computing. He has received support for his work from DARPA, NSF, ONR, NSA, and America Online. He is a member of ACM, IEEE, and UPE.