Enhances speech comprehension and audiovisual perception, leading to improved wearable technology and user experience in hearing aids, language learning, gaming, VR, sports performance, and more.

About

When people speak in a noisy room, we can understand them better by looking at their lips. Likewise, in sport, gaming, driving, and interacting with machines and software, visual events often correspond with sounds, such as the sound of a ball bouncing, or the click and flash of an indicator light. Integrating our senses effectively may benefit our performance and reaction time. However, our published research shows that audiovisual integration is naturally suboptimal in the majority of individuals. Our results also demonstrate that speech-reading and perception of multisensory correspondences can be substantially improved by precisely adjusting, for each individual, some very specific properties of the auditory and visual stimuli. The necessary adjustments should be quite simple to implement with existing technology, but a wide range of applications follows from knowing when, and by how much, to apply them. Our research provides this know-how, along with a systematic method for profiling and correcting for individual differences in multisensory perception (one possible profiling approach is sketched after the list below).

We envisage numerous applications:

  • Hearing aids / speech-reading aids: Our research shows that optimising audiovisual integration can improve comprehension of speech in background noise, which may be particularly beneficial in cases of mild to moderate hearing impairment. Current technological investment has yielded only incremental advances in sound quality and reduced latency, and our research suggests the latter may not even be so important. By taking vision into account, our discovery opens a hitherto unanticipated dimension for substantially improving hearing-aid technology.
  • Reading skills: Our continuing research indicates that reading ability in normally-hearing adults correlates with individual differences in multisensory perception. This may be because, when learning to read, integrating visual speech cues helps to identify unfamiliar speech sounds, which may then be associated more readily with their written forms. Optimisation of perception may therefore benefit normally-hearing individuals in language acquisition, whether in childhood or later in life.
  • Language learning: Optimising audiovisual integration may benefit learning a new language, whether early or later in life, by allowing unfamiliar speech sounds to be disambiguated more effectively by lip movements. After individual profiling, the necessary audiovisual adjustments could be programmed into personal speech-reading aids, or into popular multimedia language-training software.
  • Gaming, VR, multimedia, and telecommunication: Individual adjustments could easily be embedded in personal devices, videophone and multimedia playback software, and internet browsers, to improve user experience, performance, and immersion. Optimised perception may also enhance the impact of online advertising. Our research suggests that greater improvements in the experience of audiovisual media may be obtained via our methods than by continuing to invest in technology for synchronising video and audio signals.
  • Safety-critical and defence applications: Simple individualised adjustments to user interfaces, control panels, and augmented-reality displays could potentially quicken reactions and appropriate behaviour towards important events. Feedback in other modalities, such as haptic and tactile feedback, could be individually adjusted using methods similar to the ones we have developed, with further potential benefits.
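By way of illustration, the minimal sketch below shows one standard psychophysical way such profiling could work: fitting a Gaussian to an observer's simultaneity-judgment responses across audiovisual asynchronies, and taking the peak as their point of subjective simultaneity (PSS). The asynchrony values, response data, and Gaussian model here are illustrative assumptions, not necessarily the exact procedure from our published studies.

```python
# Minimal sketch of profiling one observer's audiovisual timing from a
# simultaneity-judgment task. A Gaussian is fitted to the proportion of
# "simultaneous" responses across stimulus onset asynchronies (SOAs);
# its peak location estimates the point of subjective simultaneity (PSS)
# and its width the temporal binding window. Data values are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

# SOAs tested, in ms (negative = audio leads video).
soas = np.array([-240, -160, -80, 0, 80, 160, 240], dtype=float)

# Hypothetical proportion of "simultaneous" responses at each SOA.
p_simultaneous = np.array([0.05, 0.20, 0.65, 0.90, 0.80, 0.40, 0.10])

def gaussian(soa, peak, mu, sigma):
    """Bell-shaped simultaneity curve: mu is the PSS, sigma the binding window."""
    return peak * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
peak, pss_ms, window_ms = params

# A positive PSS means video must lead audio for this observer to perceive
# simultaneity, so playback could delay the audio by ~pss_ms to compensate.
print(f"Estimated PSS: {pss_ms:.0f} ms; binding window ~{window_ms:.0f} ms")
```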
We have expertise in designing and running precise behavioural and physiological tests, and in analysing the results, as evidenced by our track record of research publication. We are qualified to act as expert consultants for the testing and optimisation of any products or innovations exploiting our discoveries. Our published research has provided proof of concept that simple individualised adjustments of auditory and visual signals can produce consistent benefits in audiovisual speech integration and comprehension. We have also shown benefits for perception of simple audiovisual animations (e.g. bouncing balls), relevant to sports and gaming performance.

Future investment would initially be used to develop a working prototype that implements the adjustments via a personal device, and to test its benefits systematically in target user groups. We have a patent application currently pending. We seek to license our intellectual property, and/or engage in knowledge-transfer partnerships via consultancy and active applied research, to develop commercial applications of the kinds outlined above.
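To make the prototype idea concrete, the toy sketch below shows how a profiled per-user timing offset could be applied at playback by shifting the audio stream relative to the video. The function name, sample rate, and buffer handling are illustrative assumptions; a real implementation would more likely adjust the player's audio/video clock.

```python
# Toy sketch of applying a per-user audiovisual timing correction:
# shift the audio buffer by the profiled offset so that, for this user,
# sound and vision are perceptually aligned. Output length is preserved.

import numpy as np

def apply_av_offset(audio: np.ndarray, offset_ms: float,
                    sample_rate: int = 48000) -> np.ndarray:
    """Delay (positive offset) or advance (negative offset) audio vs. video."""
    shift = int(round(offset_ms * sample_rate / 1000))
    if shift > 0:
        # Delay audio: prepend silence, drop the tail to keep length fixed.
        return np.concatenate([np.zeros(shift, dtype=audio.dtype), audio])[:len(audio)]
    if shift < 0:
        # Advance audio: drop the head, pad the tail with silence.
        return np.concatenate([audio[-shift:], np.zeros(-shift, dtype=audio.dtype)])
    return audio

# Example: a user profiled with a +40 ms PSS gets their audio delayed by 40 ms.
audio_track = np.random.default_rng(0).standard_normal(48000).astype(np.float32)
corrected = apply_av_offset(audio_track, offset_ms=40.0)
```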
