Often, we only end up in hospital when something is already wrong with us. But what if we could prevent these unpleasant trips by receiving preventative treatment through earlier diagnosis? As the saying goes, 'the aim of medicine is to prevent disease and prolong life; the ideal of medicine is to eliminate the need of a physician.'
Until recently, this has not been possible, but with the next wave of technological advances in AI, healthcare organisations are beginning to implement deep learning and machine learning methods to prevent illnesses, improve analysis, cure diseases, and predict future health issues.
Implementing deep learning requires vast amounts of data, and whilst medical professionals have been collecting statistics for many years, the majority of these records have only been digitised since 2013, and 'the amount of global healthcare data has been increasing 48 percent year-on-year'.
As well as using AI as a tool to help foresee potential diseases in patients, it’s also being used to improve accuracy in surgery and treatments of patients requiring urgent care.
At the Deep Learning in Healthcare Summit in Boston earlier this year we heard about the most cutting-edge advances in medical research and explored AI in drug development and clinical trials, medical imaging, e-health data, and trends and opportunities in the future of healthcare.
We heard from Muyinatu Bell, Assistant Professor at Johns Hopkins University, who is currently working with the PULSE Lab on the implementation of machine learning to improve photoacoustic-guided surgery. Muyinatu designs medical imaging systems that link light, sound, and robotics to produce clearer pictures, and has been named by MIT Technology Review as an 'inventor' who is 'building the stuff of the future'.
What is photoacoustic-guided surgery?
A major challenge in guiding surgeries is visualising point-like targets, such as the circular, cross-sectional views of cylindrical needles, catheters, and brachytherapy seeds. These targets are often masked by highly echogenic structures (which create reflections and appear bright on ultrasound images), and those reflections can cause the point-like targets to be misidentified.
How can AI help overcome this?
In her presentation, Muyinatu spoke about her work on photoacoustic imaging, which combines light and sound to visualise anatomical structures in the human body. She is using machine learning to identify reflection artifacts and remove them from the images, and so far the research shows strong promise to complete this task without traditional signal processing. Coming from a background in ultrasound imaging, Muyinatu was first introduced to machine learning when she was recruited to organise a challenge to find the best algorithm for tracking ultrasound liver data; one entry used ML to complete the task, which highlighted some promising opportunities. The project left her thinking:
‘What if I combined machine learning and beamforming and put those two together?’
The principal challenge in both ultrasound and photoacoustic imaging lies in the inability to see through reflections and behind bone and other tissue, and the PULSE Lab she is currently working with is using robotic systems and ML to develop innovative new imaging systems.
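To make the idea of learning to separate true point targets from reflection artifacts more concrete, here is a deliberately simplified toy sketch. It is not the PULSE Lab's method: the simulated 'channel data' patches, the single hand-crafted feature, and the threshold classifier are all invented for illustration, standing in for the real photoacoustic data and learned models. The sketch only shows the general pattern: true point sources tend to be compact and bright, reflection artifacts more diffuse and weaker, and a classifier can learn to tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(is_artifact, size=16):
    """Simulate a toy 2-D image patch (hypothetical stand-in for channel data).

    A true point target is modelled as a tight, bright spot; a reflection
    artifact as a weaker, more diffuse smear. Real photoacoustic data is
    far richer -- this is purely illustrative.
    """
    patch = rng.normal(0.0, 0.1, (size, size))        # background noise
    y, x = np.mgrid[0:size, 0:size]
    cy, cx = size // 2, size // 2
    spread = 4.0 if is_artifact else 1.0              # artifacts are more diffuse
    amp = 0.5 if is_artifact else 1.0                 # and weaker
    patch += amp * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * spread ** 2))
    return patch

def concentration(patch):
    """One-number feature: fraction of total energy in the 4 brightest pixels."""
    flat = np.abs(patch).ravel()
    return np.sort(flat)[-4:].sum() / flat.sum()

# Build a small labelled set: 100 true targets, 100 reflection artifacts.
labels = np.array([False, True] * 100)                # True = artifact
feats = np.array([concentration(make_patch(a)) for a in labels])

# Fit the simplest possible classifier: a threshold midway between class means.
thresh = (feats[labels].mean() + feats[~labels].mean()) / 2
pred = feats < thresh                                  # low concentration => artifact
accuracy = (pred == labels).mean()
print(f"toy classifier accuracy: {accuracy:.2f}")
```

On this synthetic data the two classes separate cleanly, which is the point of the exercise; in practice the artifacts overlap real targets in far subtler ways, which is why learned models rather than a single hand-tuned feature are being explored.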
After her presentation at the Deep Learning in Healthcare Summit in Boston, Muyinatu was interviewed by science reporter Monique Boruillette, who asked some of the questions we had on the topic. You can watch the full interview here.
Whilst AI may be threatening certain professions in healthcare, such as image analysts and technicians, doctors, surgeons, psychologists, and many others remain paramount in providing the best care possible. The rise of AI will not only improve the care that patients receive, but also offers countless opportunities for employment in medical and AI research.
Interested in learning more about the progressions of healthcare with AI?