
Study Reveals ChatGPT's Impressive Accuracy in Transforming Clinical Practice

AI

26 August 2023

In a new study, investigators from Mass General Brigham found that the artificial intelligence chatbot ChatGPT may play a transformative role in clinical decision-making. With an overall accuracy of roughly 72%, the AI system not only identified potential diagnoses but also reached final diagnoses and made care management decisions.

This research offers a glimpse into a future in which advanced AI such as ChatGPT is seamlessly integrated into clinical evaluations, enhancing both speed and precision. ChatGPT was tested on 36 standardized clinical vignettes from the Merck Sharpe & Dohme (MSD) Clinical Manual.

The process involved presenting the artificial intelligence with patient information such as age, gender, symptoms, and case urgency to see how well it could handle a range of tasks, from generating potential diagnoses to reaching final diagnoses and care management decisions. Dr. Marc Succi, the study's corresponding author and associate chair of innovation at Mass General Brigham, likened ChatGPT's performance to that of someone at the start of their medical career, such as a newly graduated doctor.

While it displayed a commendable 77% accuracy in making final diagnoses, it struggled with the initial differential diagnosis phase, where it scored only 60%.

"Differential diagnosis is the core of medicine," Dr. Succi emphasized. "It's during these early stages of patient interaction, with minimal information presented, that physicians truly showcase their expertise, generating a list of possible ailments." The study also underscored ChatGPT's potential in assisting throughout the entire clinical encounter. 

As researchers fed the system patient data, ChatGPT generated potential diagnoses and, as more information was provided, made critical management decisions and eventually arrived at final diagnoses. It achieved 68% accuracy in determining appropriate clinical management actions, such as prescribing the right medications after diagnosis. Despite the promising findings, experts believe it is crucial to proceed with caution: before AI tools like ChatGPT can be fully integrated into everyday clinical practice, rigorous benchmark research and precise regulatory guidance are imperative.

Adam Landman, MD, chief information officer at Mass General Brigham, shared his vision for the integration of such AI tools. "We're keenly exploring LLM solutions, especially those assisting in clinical documentation and patient communication. It's crucial to understand their accuracy and reliability deeply," he said. The rapid evolution of artificial intelligence in healthcare paves the way for reshaping patient care and support. With rigorous studies such as this, it becomes increasingly possible to gauge where AI can fit in, complementing the invaluable human touch in medicine.

Abstract of the research

Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study

Background: Large language model (LLM)–based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated.

Objective: This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.

Methods: We inputted all 36 published clinical vignettes from the Merck Sharpe & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured by the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the contributing factors toward ChatGPT's performance on clinical tasks.

Results: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis, with an accuracy of 76.9% (95% CI 67.8%-86.1%), and the lowest performance in generating an initial differential diagnosis, with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; P<.001) and clinical management (β=–7.4%; P=.02) question types.

Conclusions: ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as it gains more clinical information at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
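For readers curious how figures like "71.7% (95% CI 69.3%-74.1%)" are typically derived, the sketch below shows a proportion-correct accuracy with a normal-approximation (Wald) confidence interval in Python. This is an illustration only: the abstract does not state the study's exact scoring data or interval method, and the counts used here are hypothetical placeholders, not the study's raw data.

    # Minimal sketch, assuming accuracy = correct/total with a Wald 95% CI.
    # Counts below are hypothetical; they are not the study's raw data.
    import math

    def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
        """Proportion correct with a normal-approximation (Wald) 95% CI."""
        p = correct / total                      # point estimate of accuracy
        se = math.sqrt(p * (1 - p) / total)      # standard error of a proportion
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    # Example: hypothetical counts for one task type.
    acc, lo, hi = accuracy_with_ci(correct=230, total=300)
    print(f"accuracy={acc:.1%} (95% CI {lo:.1%}-{hi:.1%})")

The study additionally fit a linear regression across question types to estimate the β coefficients reported above; that model's specification is not detailed in the abstract, so it is not reproduced here.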
