Researchers at TU Dublin have developed a machine learning (AI) assisted hardware/software ecosystem that runs locally, or via a wireless or physically tethered connection, on an Augmented/Virtual Reality Head Mounted Device (HMD) worn by a neurosurgeon or trainee neurosurgeon, with networking capability to facilitate cloud computing and database retrieval. The ecosystem is intended for neurosurgical applications and for training and education in the neurosurgery space.
Both brute-force algorithms and artificial intelligence are used to inform surgical routes and predict surgical outcomes, with a potential link to a database of information from previous surgeries.
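The summary does not specify how route planning is implemented; one plausible building block is an exhaustive lowest-cost search over a voxelised risk map derived from a segmented scan. The sketch below uses Dijkstra's algorithm on a small 2D grid; the grid, per-voxel costs, and entry/target points are purely illustrative assumptions, not details of the TU Dublin system.

```python
import heapq

def plan_route(risk, start, target):
    """Exhaustively search a voxel grid for the lowest cumulative-risk
    path from an entry point to a target (Dijkstra's algorithm).

    risk: 2D list of per-voxel risk costs (illustrative stand-in for
    a segmented scan); start/target: (row, col) entry and target sites.
    Returns the path as a list of cells plus its total risk cost.
    """
    rows, cols = len(risk), len(risk[0])
    dist = {start: risk[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from target to start to recover the route.
    path, node = [target], target
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]

# Tiny example: a high-risk middle column forces a detour.
risk = [
    [1, 9, 1],
    [1, 9, 1],
    [1, 1, 1],
]
path, cost = plan_route(risk, (0, 0), (0, 2))  # detours around the 9s
```

In a real system the cost map would encode proximity to vessels and eloquent tissue in 3D; the search itself generalises directly by adding the third grid axis to the neighbour loop.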
Self-contained processing capability within one Head Mounted Device system.
No requirement for external controls: the system is voice and/or gesture activated.
Image target recognition for accurate alignment of medical scans to the patient or a patient analogue.
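The summary does not say which registration method is used; a common approach for aligning scan data to recognised targets is a rigid least-squares fit between corresponding landmark points (the Kabsch algorithm). The sketch below assumes fiducial points in scan space and the same points as detected on the patient; all coordinates are illustrative.

```python
import numpy as np

def register_landmarks(scan_pts, patient_pts):
    """Rigid (rotation + translation) alignment of scan fiducials to
    patient landmarks via the Kabsch algorithm.

    scan_pts, patient_pts: (N, 3) arrays of corresponding points,
    e.g. fiducials in MRI space and the same fiducials as detected
    by the HMD's cameras. Returns R (3x3) and t (3,) such that
    R @ scan_point + t approximates the matching patient point.
    """
    scan_pts = np.asarray(scan_pts, float)
    patient_pts = np.asarray(patient_pts, float)
    cs, cp = scan_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (scan_pts - cs).T @ (patient_pts - cp)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cs
    return R, t

# Example: patient landmarks are the scan fiducials rotated 90 degrees
# about z and shifted; registration should recover that transform.
scan = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
patient = scan @ Rz.T + np.array([10.0, 5.0, 0.0])
R, t = register_landmarks(scan, patient)
```

Once R and t are known, every voxel or mesh vertex of the scan can be mapped into the headset's view of the patient with the same transform.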
Medical scan information can therefore be overlaid in direct spatial correlation with its real-world counterpart, reducing the need for neurosurgeons to refer to separate screens to view scan information. Networked environments also allow remote viewers to supervise and advise on the operation in real time, marking and annotating areas of interest.
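No wire format for the remote-annotation feature is described; a minimal sketch is an annotation record anchored in the registered patient coordinate frame, serialised for broadcast to all connected headsets and viewers. Every field name below is a hypothetical illustration, not a published protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    """One remote-viewer annotation, anchored in the registered patient
    coordinate frame so every headset renders it in the same place.
    Field names are illustrative assumptions."""
    author: str
    label: str
    position_mm: tuple  # (x, y, z) in patient space, millimetres
    colour: str = "#ffcc00"

def encode(a: Annotation) -> str:
    """Serialise an annotation for broadcast over the network."""
    return json.dumps(asdict(a))

def decode(msg: str) -> Annotation:
    """Rebuild an annotation on the receiving headset or viewer."""
    d = json.loads(msg)
    d["position_mm"] = tuple(d["position_mm"])  # JSON arrays -> tuple
    return Annotation(**d)

note = Annotation("remote_supervisor", "avoid vessel", (12.5, -3.0, 41.2))
wire = encode(note)
```

Because positions are expressed in patient space rather than screen space, the same annotation stays attached to the anatomy regardless of where each viewer's camera is.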
Training future neurosurgeons using either patient-analogue dummies (manikins) or fully virtual/augmented overlay training, with a simulated patient head and tumour.
Allowing patients to view representations of their own MRI scans, and enabling doctors to explain upcoming procedures visually in greater detail than previously possible.