[ Professor ] Satoshi Shioiri
Visual Cognition and Systems
[ Professor ] Shuichi Sakamoto
Auditory and Multisensory Information Systems
To create computational models of the processes by which the human brain integrates multiple sensory inputs from the outside world, we investigate the visual and auditory functions of the human brain, with the aim of implementing these functions in hardware under biologically plausible settings. Our approaches include psychophysics, brain-wave measurements, and computer simulations.
Visual Cognition and Systems (Prof. Shioiri)
By modeling the processes of human vision based on findings about the strategies the visual system uses, we aim to propose appropriate methods for evaluating image quality, efficient ways of presenting images, and the evaluation of visual environments in general. We also investigate dynamic selection processes in vision, with and without attention, to realize systems that predict human perception and action.
- Measurement of the spatial and temporal characteristics of visual attention
- Modeling of the control systems of eye movements and visual attention
- Investigation of early, middle, and late vision in 3D perception
Advanced Acoustic Information Systems (Assoc. Prof. Sakamoto)
We study the mechanisms of human multimodal processing, including hearing. In particular, we focus on speech perception as an audio-visual process, the judgment of auditory space during motion, and the impression of a sense of reality in multimodal content. Such knowledge is crucial for developing advanced communication and information systems, and on this basis we are developing future auditory information systems.
- Mechanisms of multisensory information processing, including hearing
- Development of high-definition 3D sound-space acquisition systems
- Auditory information systems based on multisensory information processing