Please note this is a half day tutorial starting at 1:00 PM.
Registration for this tutorial only is at 12:00 PM.
Humans experience the world with all their senses, including vision, touch, and hearing. Therefore, computer animation systems should also synthesize such correlated multisensory cues.
This tutorial will describe the state of the art and current research directions in multisensory simulation. The focus is on techniques suitable for interactive animation, such as in games and VR. A critical challenge is to achieve as much realism as possible while meeting real-time constraints, so that the system responds to interaction with low latency. We will describe the fundamentals of interactive simulation of 3D objects, including techniques for dynamics simulation, contact interaction with haptic interfaces, and simulation of interaction sounds. We will also describe the new Rutgers Haptic, Auditory, and Visual Environment (HAVEN) for multisensory modeling and rendering.
Target audience and prerequisites
Graduate students, researchers, developers of animation software and hardware, and game developers. The audience is assumed to be familiar with computer graphics at the undergraduate level and to have some comfort with linear systems, numerical methods, and matrix algebra. Some exposure to physical modeling is useful.
Course length: 1/2 day
Detailed course outline
The tutorial will cover the following topics:
I. Introduction to Multisensory Simulation
   - Interaction devices: visual, haptic, and auditory displays
   - Human perception
   - Simulation of multiple physical models, including deformation models, contact sound models, and rigid body contact with friction
   - Synchronization of the visual, haptic, and auditory channels

II. Simulation Techniques
   Geometric techniques
   - multiresolution models, wavelets, subdivision
   Collision detection
   - broad-phase and narrow-phase techniques
   Contact dynamics simulation
   - dynamics of a single rigid body
   - friction, surface roughness
   - smooth contact, rolling, sliding
   - multibody simulation of articulated figures
   - rendering forces for haptic interfaces
   Sound simulation
   - the sound generation pipeline
   - contact response models
   - environment acoustics
   - spatialization, HRTFs

III. Simulation Systems
   - Efficient algorithms and implementation issues
   - Precomputation and caching techniques
   - Real-time issues and latency
   - Live software demonstrations
   - Available simulation software and applications in medicine, engineering, and entertainment
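To give a flavor of the contact response models covered in the sound simulation portion of the outline, here is a minimal sketch of modal sound synthesis, in which an impact sound is modeled as a sum of exponentially damped sinusoids. The mode frequencies, dampings, and gains below are illustrative placeholders, not measured values, and the function name is hypothetical rather than taken from any particular simulation system.

```python
import math

def modal_impact(modes, impulse, sr=44100, dur=0.5):
    """Synthesize an impact sound as a sum of damped sinusoids.

    modes:   list of (frequency_hz, damping_per_s, gain) tuples
    impulse: strength of the strike; scales all modes
    Returns a list of audio samples at sample rate sr.
    """
    n = int(sr * dur)
    out = [0.0] * n
    for f, d, g in modes:
        w = 2.0 * math.pi * f  # angular frequency of this mode
        for i in range(n):
            t = i / sr
            # Each mode rings at its own frequency and decays exponentially.
            out[i] += impulse * g * math.exp(-d * t) * math.sin(w * t)
    return out

# Illustrative modes for a small struck object (made-up values).
bell_modes = [(523.0, 6.0, 0.6), (1410.0, 9.0, 0.3), (2650.0, 14.0, 0.1)]
samples = modal_impact(bell_modes, impulse=1.0)
```

In a real-time system the per-mode parameters would typically be precomputed (e.g., from measurements or a finite element model) and the synthesis run incrementally per audio buffer rather than for a fixed duration, so that the response starts the moment a contact event is detected.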
Dinesh K. Pai is a Professor in the Department of Computer Science at Rutgers, The State University of New Jersey. He moved to Rutgers in 2002 from the University of British Columbia, where he was a Professor and a fellow of the BC Advanced Systems Institute. He received his Ph.D. from Cornell University, Ithaca, NY. His research interests span the areas of graphics, robotics, and multisensory human-computer interaction. One current research focus is reality-based modeling, i.e., building multisensory computational models of the physical world from measurements. This includes a recent thrust in reconstruction from medical ultrasound images. Another focus is fast simulation with integrated sound, haptics, and graphics, especially simulation of contact. See http://www.cs.rutgers.edu/~dpa for more details.