
AI and VR – two little acronyms that will change the healthcare world this year

As we have seen over the last couple of years, AI is rapidly transforming our lives, one app at a time. When it comes to healthcare and patient outcomes, a wave of innovations has already delivered real impact. Today we look at the top five trends brought about by AI and its older sibling, VR, and how they will improve patient outcomes across the world.


By Martin Sandhu, CEO at nuom, a consultancy focused on improving digital health outcomes through human-centric design.


1- Generative AI in Healthcare and Medical Diagnosis

We predict that generative AI (GenAI) will be particularly impactful in the immediate future, thanks to its ability to create synthetic data that can be used to train medical AI algorithms without compromising patient privacy (or where there simply isn't enough real-world data). 
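To make the idea concrete, here is a minimal Python sketch of how a simple generative model can be fitted to real patient records and then sampled to produce synthetic ones. The feature columns are entirely made up, and production systems use far more sophisticated generators (GANs, diffusion models, LLMs), but the workflow is the same.

```python
# A minimal sketch of synthetic training data: fit a simple statistical model
# to real patient features, then sample new, artificial records from it.
# The column meanings below are purely hypothetical examples.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a de-identified dataset: rows = patients,
# columns = [age, systolic_bp, tumour_marker_level]
real_data = rng.normal(loc=[62.0, 135.0, 4.2],
                       scale=[12.0, 18.0, 1.1],
                       size=(500, 3))

# "Train" the generator: estimate the mean and covariance of the real data.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample synthetic patients that follow the same statistical profile
# but correspond to no real individual.
synthetic_data = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic_data[:3].round(1))
```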

Large language models (LLMs) such as the one behind ChatGPT, Google's new equivalent Gemini, and Google's medically focused Med-PaLM are poised to transform healthcare, and could soon become essential tools for diagnosing and treating patients.
LLMs are expected to produce clinical notes, fill in forms, and assist physicians with diagnoses and treatment plans, freeing up time for clinicians to get on with the jobs that really matter.
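As a rough illustration of the clinical-notes use case, the sketch below prompts a small, publicly available language model to draft a note. The model name (gpt2) is only a stand-in for the clinically tuned, governed models a real deployment would use, and any output would need clinician review.

```python
# A minimal sketch of prompting an LLM to draft a clinical note.
# gpt2 is a small public stand-in model, not a clinical system.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Draft a brief clinical note.\n"
    "Patient: 58-year-old male.\n"
    "Presenting complaint: intermittent chest pain on exertion.\n"
    "Note:"
)

# Generate a short continuation of the prompt; a clinician would review it.
draft = generator(prompt, max_new_tokens=80, do_sample=False)[0]["generated_text"]
print(draft)
```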

One such use builds on the already exceptional results AI has delivered in diagnostics, especially in radiology and pathology. In the UK, AI-driven platforms are being used for early cancer diagnosis, showing the power of AI in preventive care.

The NHS has partnered with Google's DeepMind to improve the speed and accuracy of disease detection, and has announced a pilot scheme for a blood test that uses machine learning to detect more than 50 types of cancer (including ovarian, pancreatic, oesophageal, bowel, and lung cancers) before clinical signs or symptoms emerge. The test, created by the Californian company GRAIL, was found to accurately detect cancer and predict its location by examining cell-free DNA in the bloodstream. The pilot could be expanded to one million participants in the UK between 2024 and 2025.

In the USA, AI is increasingly used in diagnostic procedures, with FDA approvals for AI-based diagnostic tools in radiology and pathology. Companies such as Zebra Medical Vision are at the forefront, providing AI solutions for detecting a range of medical conditions.

AI, including LLMs, can now analyse vast numbers of medical imaging reports (e.g., mammograms) to predict cancer progression, assist in devising targeted treatment strategies, and advance precision medicine. In a recent study from Lund University in Sweden, AI used in screening for breast cancer (the most prevalent cancer globally) was found to be as effective as two radiologists, which is likely to help with the shortage of radiologists in many countries.
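As a hedged sketch of that screening workflow on the text side, the example below trains a simple classifier on a handful of invented report snippets. Real systems learn from the images themselves with deep networks and vastly larger curated datasets, but the train-then-triage pattern is the same.

```python
# Illustrative only: classify (made-up) radiology report snippets as
# benign vs suspicious with a simple text pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "no focal mass, breast tissue within normal limits",
    "scattered benign-appearing calcifications, no change from prior",
    "irregular spiculated mass in upper outer quadrant",
    "new clustered microcalcifications, biopsy recommended",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious (hypothetical labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

new_report = ["well-circumscribed mass with irregular margins noted"]
print(model.predict_proba(new_report))  # [probability benign, probability suspicious]
```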

2- AI in Drug Discovery

AI is now also becoming pivotal to drug discovery and development. In early 2023, the generative AI drug discovery firm Absci shared that it had designed and validated de novo therapeutic antibodies with zero-shot generative AI. Zero-shot learning refers to a model that can make accurate decisions in a new domain (here, antibody design) without any task-specific training examples. This could halve the time it takes to get new drug candidates to the clinic.
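For a flavour of what "zero-shot" means in practice, the sketch below uses an off-the-shelf zero-shot classifier (facebook/bart-large-mnli, a real public model) to label a sentence against categories it was never explicitly trained on. This is a far simpler domain than antibody design, but the principle of deciding without task-specific examples is the same.

```python
# Zero-shot classification: the model assigns labels it has never been
# explicitly trained on. The candidate labels here are arbitrary examples.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "Patient reports persistent cough and shortness of breath.",
    candidate_labels=["respiratory", "cardiac", "dermatological"],
)
print(result["labels"][0], round(result["scores"][0], 2))
```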

Another example is Sanofi, a company that has been working on a large-scale implementation of an AI-driven app known as Plai, developed with Aily Labs. The use of AI has enabled Sanofi to accelerate research processes, improve clinical trial design, optimise manufacturing and drive more efficiency in its supply chain.

And just recently, Pharos iBio developed an anticancer drug using its AI platform, Chemiverse. The platform can simulate and analyse protein-ligand interactions, which are pivotal for understanding a drug's potential safety and efficacy.
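Chemiverse itself is proprietary, so the sketch below only illustrates one generic early step in computational screening: filtering candidate molecules by Lipinski's rule-of-five drug-likeness criteria with RDKit. The example molecules are well-known drugs, not Pharos iBio compounds.

```python
# A generic drug-likeness screen (not Chemiverse's method): compute simple
# molecular descriptors with RDKit and apply Lipinski's rule of five.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

candidates = {
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
}

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    passes = (
        Descriptors.MolWt(mol) <= 500          # molecular weight
        and Descriptors.MolLogP(mol) <= 5      # lipophilicity
        and Lipinski.NumHDonors(mol) <= 5      # hydrogen-bond donors
        and Lipinski.NumHAcceptors(mol) <= 10  # hydrogen-bond acceptors
    )
    print(f"{name}: passes rule-of-five = {passes}")
```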

AstraZeneca, through its rare disease division Alexion, announced it will use Verge Genomics' CONVERGE platform (which uses machine learning to analyse human tissue data) to discover novel drug targets for rare neurodegenerative and neuromuscular diseases. In the UK, the pharmaceutical sector is increasingly using AI in drug discovery, with companies like Exscientia leading the way. In the USA, AI-driven drug discovery is also booming, with companies like Atomwise carrying out innovative work in the field.

3- Digital Twins

Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. They have been widely used for more than a decade in engineering and manufacturing, with great results, especially when it comes to maintenance. In a similar vein, within medicine, they combine vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs, and eventually, potentially, their entire body.
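In code, the core pattern is a model that holds an estimate of the patient's state and corrects it whenever a new measurement arrives. The sketch below uses a deliberately toy heart-rate "twin" with a simple blending update; clinical twins rely on detailed physiological models, but the update loop looks much the same.

```python
# A toy digital twin: a virtual state estimate continually corrected by
# measurements streaming in from the real-world counterpart.
class HeartRateTwin:
    def __init__(self, initial_bpm: float, blend: float = 0.2):
        self.estimate = initial_bpm  # the twin's current belief
        self.blend = blend           # how strongly new data corrects the model

    def update(self, measured_bpm: float) -> float:
        # Blend the model's prediction with the latest real-world reading.
        self.estimate += self.blend * (measured_bpm - self.estimate)
        return self.estimate

twin = HeartRateTwin(initial_bpm=72.0)
for reading in [75, 78, 90, 88, 84]:   # simulated sensor stream
    print(round(twin.update(reading), 1))
```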

“What a digital twin is doing is using your data inside a model that represents how your physiology and pathology are working. It is not making decisions about you based on a population that might be completely unrepresentative…” says Professor Peter Coveney, the Director of the Centre for Computational Science at University College London and co-author of Virtual You.

The current state of the art in digital twins can be found in cardiology. Companies are already using patient-specific heart models to help design medical devices, while the Barcelona-based start-up ELEM BioTech offers companies the ability to test drugs and devices on simulated models of human hearts.

“Often surgeons will use an approach that works on average, but making patient-specific predictions and ensuring that they predict longer-term outcomes is really challenging,” says Dr Caroline Roney of Queen Mary University of London. “I think there are many applications in cardiovascular disease where we will see this sort of approach coming through, such as deciding what type of valve to use, or where to insert it during heart valve replacement.”

Cancer patients are also expected to benefit. Artificial intelligence experts at the drug company GSK are working with cancer researchers at King’s College London to build digital replicas of patients’ tumours by using images and genetic and molecular data, as well as growing patients’ cancer cells in 3D and testing how they respond to drugs. By applying machine learning to this data, scientists can predict how individual patients are likely to respond to different drugs, combinations of drugs, and dosing regimens.
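A hedged sketch of that final modelling step: given per-patient tumour features, learn to predict drug response. The features and response values below are randomly generated stand-ins, not GSK or King's College data.

```python
# Illustrative drug-response model on synthetic data: features could stand
# in for gene-expression or molecular measurements per patient.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                  # simulated tumour features
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(scale=0.5, size=200)   # simulated response

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

print("R^2 on held-out patients:", round(model.score(X_test, y_test), 2))
```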

4- Virtual Reality and Augmented Reality

VR is increasingly being used for therapeutic purposes, particularly for pain management, rehabilitation, and mental health. In the US and beyond, VR is used to treat phobias, ease anxiety, and decrease pain. When it comes to pain management, VR allows patients to immerse themselves in calming and distracting experiences, which has been transformative for many.

VR is being used to treat a host of conditions, including burn pain (through the use of a VR-enabled RAPAEL Smart Glove system). VR is also being used for cancer patients, offering settings that improve wellbeing and stress levels. It has also been trialled during labour as an alternative to pain medication, both at Cedars-Sinai Hospital in the US and in a study in Turkey, with both trials reporting higher satisfaction and lower pain among participants.

AppliedVR, a Los Angeles-based company, is helping people manage chronic pain, a condition a recent survey found affects over 50 million adults in the US. Its EaseVRx programme uses cognitive behavioural therapy to ease chronic pain in adults, with users reporting a 50% pain reduction in a recent study.

In the UK, a study by Sheffield Hallam University explored how VR can help children with upper limb injuries, such as fractures, that limit their motion. The study found that gaming environments which required children to use their arms (e.g. archery and climbing simulations) were less painful and more enjoyable for them than standard movement exercises for rehabilitation.

Augmented Reality (AR), the younger sibling of VR, is increasingly being used for medical education and training as well as surgery itself. In the UK, devices such as Microsoft's HoloLens 2 are being used for surgical training.

In the US, AR is widely used to enhance the learning experience for medical students and professionals alike. Many other countries, such as Germany and Japan, are also adopting AR for advanced medical training and simulation.

Within the operating room, surgeons are using AR via smart glasses to access the information they need in real time, hands-free, without having to look away from the procedure to check a computer screen.

Proximie is using smart glasses to record operations, storing the footage in a database and using AI to compile the relevant information. The system then makes this material, along with key learnings from past surgeries, available the next time the same operation is performed.

Another key benefit is that it allows collaboration between doctors in different locations. For example, a specialist in New York can assist a doctor in South Africa who is carrying out open-heart surgery for the first time.

The new Apple Vision Pro, a mixed-reality headset, could give surgeons “superpowers”, according to Dr Rafael Grossmann, a surgeon with a background in robotic surgery and the first to live-stream a surgery using Google Glass:

"Within the operating room, you are gathering data in mixed reality that is helping you in real-time, in a synchronous fashion, to do the procedure. That allows you to not have to turn your head where you can actually bring the computer. So, that's what we call spatial computing".

The surgeon also hopes that Apple's new headset will improve on existing technology by moving data, notes, and displays from other medical devices to the virtual display.  

5- Robot-Assisted Surgery

Adoption of robotic systems for precision surgery is increasing rapidly across the globe. The most prolific device, the da Vinci Surgical System by California-based Intuitive Surgical, is used for 1.5 million operations each year worldwide. Surgeons who would otherwise stoop over patients to perform operations now stand at a console, using two joysticks to control four robotic arms, which removes awkward angles and much of the physical strain of surgery.

Combining AI and other novel technologies, engineers are developing advanced robotics to herald a new era for surgery, with the most important current step being their use in highly intricate keyhole procedures. Operations that might once have required open surgery can now be performed less invasively, allowing patients to recover more quickly.
Engineers hope that robotic systems combined with AI might eventually surpass the skills of human surgeons, producing more consistent results with fewer errors.

Last year, engineers at Johns Hopkins University in the US carried out one of the most delicate procedures in the practice of surgery, on four pigs, using their Smart Tissue Autonomous Robot (Star). According to the engineers, it performed better than a human surgeon would have. “We can automate one of the most intricate and delicate tasks in surgery,” said Axel Krieger, an assistant professor of mechanical engineering, and the project’s director.

The Star’s procedure was not the first time a robot had operated with a degree of autonomy, but it was the first time such a task had been performed autonomously using keyhole surgery, making it a world first.

Surgical robotics presents a good opportunity for engineers to introduce autonomy because of the vast volume of data that devices can collect. An intelligent system, once developed, can use this data to teach itself. In theory, it could become better with each operation that it performs as it gathers more and more information.
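One simple way to picture that self-improvement is incremental (online) learning, where a model is updated batch by batch as data from each new operation arrives, rather than being retrained from scratch. The sketch below uses synthetic "telemetry" purely for illustration; the labels and features are hypothetical.

```python
# Incremental learning sketch: update a model with each new batch of
# (synthetic) surgical telemetry instead of retraining from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()
classes = np.array([0, 1])  # e.g. 0 = acceptable suture tension, 1 = too tight (hypothetical)

for operation in range(5):                       # each "operation" yields new data
    X_batch = rng.normal(size=(50, 4))           # simulated instrument telemetry
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # learn without full retraining

print(model.predict(rng.normal(size=(3, 4))))
```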

Current robots use cameras to provide surgeons with a 3D image, which they can view through a headset or console. Newer devices are enhancing the information with AR visuals. The latest da Vinci robots, for example, offer a secondary “ultrasound” view. With AI, the robot may even be able to identify and highlight important information that the surgeon might have missed (as is already happening in radiology).

About Author: Martin Sandhu

Martin Sandhu is the CEO and founder of nuom, a digital health transformation consultancy delivering service design to organisations in the healthcare industry. He aims to increase patient engagement and improve outcomes across healthcare services by creating human-centric digital health solutions that are faster to develop and more effective for every user.
