A long-range, interdisciplinary research programme is planned to deal with audio signals of every kind: multilingual human speech, music and other sounds, noise from man-made objects such as automobiles and machines, animal cries, and atmospheric sounds such as thunder and rain. Unlike conventional speech recognition frameworks, this work will model the effect of selective attention. The ultimate goal is to make sense of any sound that reaches the microphone, removing traditional constraints such as monolingual human speech, a restricted vocabulary, and input free of coughs and laughter.
Current knowledge of visual and auditory perception, the effect of attention on visual and auditory neurons, and the rich cortical feedback pathways will be put to effective use in approaching this complex research problem.
We are proud to collaborate with Dr. T V Ananthapadmabha, an accomplished researcher and entrepreneur. He has worked in world-renowned labs such as the Royal Institute of Technology, Stockholm; Carnegie-Mellon University, Pittsburgh; AT&T Bell Labs, Murray Hill, NJ; and MIT, Cambridge.