Healthcare AI is Not Hype: Understanding and Using It Accurately, Effectively, and Prudently Is Harder Than It Seems

In our previous message, we discussed LLMs, SLMs, and ILMs. Here’s a detailed explanation of each:

Large Language Models (LLMs):
LLMs are machine learning models trained on extensive text datasets to understand and generate human language. They utilize transformer architectures (such as GPT) to handle tasks like text generation, summarization, and translation.

Key Features:

  • Trained on massive datasets, often with billions of parameters.
  • Capable of performing a wide range of natural language processing (NLP) tasks.
  • Examples include GPT-4, BERT, and PaLM.
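
As an illustration only, here is a minimal sketch of asking a hosted LLM to summarize a note through the OpenAI Python client. The prompt and note are invented placeholders, and this is not a pattern recommended for real clinical data.

```python
# Minimal sketch: calling a hosted LLM (GPT-4) for a summarization task.
# Assumes the "openai" package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize the note below in two plain-language sentences."},
        {"role": "user", "content": "Patient reports intermittent chest tightness after exercise ..."},
    ],
)
print(response.choices[0].message.content)
```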

Small Language Models (SLMs):
SLMs are more compact models with fewer parameters compared to LLMs. They are designed for efficiency and are often optimized for use on edge devices with limited computational resources.

Key Features:

  • Smaller, faster, and less resource-intensive than LLMs.
  • Can be fine-tuned for specific tasks but may not offer the same level of generalization as LLMs.
  • Typically used in environments with constrained computation and storage, such as mobile devices or embedded systems.
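
For contrast, here is a minimal sketch of running a small, openly available model entirely on local hardware with the Hugging Face transformers library; distilgpt2 is used purely as a stand-in for an SLM, not as a recommendation.

```python
# Minimal sketch: a small model (~82M parameters) that fits on modest hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

print(f"Parameters: {model.num_parameters():,}")  # tens of millions, not billions

inputs = tokenizer("Average resting heart rate this week was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```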

Intermediate Language Models (ILMs):
ILMs are medium-sized models that balance performance and computational efficiency, sitting between LLMs and SLMs in terms of size and complexity.

Key Features:

  • More scalable than SLMs but less computationally demanding than LLMs.
  • Useful in applications requiring high performance with some resource limitations.
  • Provide a good compromise for specific use cases that do not need the full capabilities of LLMs.

Summary:

  • LLMs are powerful and large but resource-heavy.
  • SLMs are compact and efficient, suited for smaller tasks or devices.
  • ILMs strike a balance between the two, offering moderate size and performance.

In my discussions with investors, providers, and academics, I generally emphasize that ILMs are often better suited for healthcare applications than LLMs, partly because ILMs allow tighter control over the data fed into an AI platform. Currently, many medical platforms (excluding radiology and pathology) report diagnostic accuracy ranging from 10% to 80% when using existing LLMs, and larger LLMs sometimes exhibit lower accuracy. Although the reasons for this wide variation are not fully understood, ILMs appear to be more effective in healthcare contexts.

Why ILMs Might Be Better for Healthcare:

  • Accuracy and Efficiency: ILMs offer a balance between accuracy and efficiency, which is crucial in healthcare where accuracy is critical but computational resources and data availability might be limited.
  • Specialization: ILMs can be more easily fine-tuned for specific healthcare tasks while maintaining computational efficiency for real-time or embedded applications.
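
To make the fine-tuning point above concrete, here is a hypothetical sketch of adapting a mid-sized, general-purpose model to one narrow task (flagging notes that mention a medication change). The model name, labels, and the two example notes are placeholders invented for illustration, not real data or a production recipe.

```python
# Hypothetical sketch: fine-tuning a mid-sized model for one specific task.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # stand-in for a mid-sized model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Tiny invented dataset: note text plus a label (1 = medication change mentioned).
data = Dataset.from_dict({
    "text": ["Increased lisinopril to 20 mg daily.", "No changes to the current regimen."],
    "label": [1, 0],
})
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./finetune-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```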

We believe the most effective use of AI in healthcare involves aggregating consumer health records, analyzing extensive data that healthcare providers might not have the time to review, and suggesting wellness improvements to consumers. This approach empowers individuals with data to discuss with their doctors and provides valuable insights for pharmaceutical research and personalized healthcare.
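
As a purely illustrative sketch of that aggregation idea, the snippet below pools readings from several hypothetical sources into one timeline and flags values outside a reference range for a person to discuss with their doctor. Every field name, source, and range here is invented.

```python
# Illustrative sketch: merge records from hypothetical sources, flag outliers.
from datetime import date

records = [
    {"source": "clinic_ehr",   "date": date(2024, 3, 1),  "metric": "ldl_mg_dl",  "value": 165},
    {"source": "wearable_app", "date": date(2024, 3, 15), "metric": "resting_hr", "value": 58},
    {"source": "lab_portal",   "date": date(2024, 4, 2),  "metric": "a1c_pct",    "value": 6.1},
]

# Hypothetical reference ranges (low, high) that a reviewer might configure.
reference_ranges = {"ldl_mg_dl": (0, 130), "resting_hr": (50, 90), "a1c_pct": (4.0, 5.7)}

timeline = sorted(records, key=lambda r: r["date"])  # one unified history
for r in timeline:
    low, high = reference_ranges[r["metric"]]
    if not low <= r["value"] <= high:
        print(f"{r['date']}: {r['metric']}={r['value']} (from {r['source']}) is outside "
              f"{low}-{high}; worth raising with a clinician.")
```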

AI is a powerful tool in healthcare, but it is essential to remember that it can sometimes generate inaccurate information. Sam Altman, co-founder and CEO of OpenAI, has acknowledged that even though GPT has made rapid progress, its inner workings are not fully understood.

Speaking with The Atlantic CEO Nicholas Thompson at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, Altman said that OpenAI does not completely understand how GPT works even as it releases new versions of the product, and he reflected on AI safety and how the technology could benefit humanity at large.

Because ILMs are applied in more controlled settings with known inputs, we will have better visibility into how the AI operates in these limited contexts. This reminds me of my math teacher’s advice to “show your work.” In healthcare, AI must show its work to be trusted with diagnostics and treatments. Whether AI will fully achieve this remains to be seen.

I am confident that AI can significantly enhance the analysis of data-intensive Electronic Health Records (EHRs) and provide valuable insights using mortality, morbidity, and actuarial data as well as current peer-reviewed research. AI has the potential to improve our understanding of health at both individual and societal levels, suggest lifestyle changes, and advance medical science. AI in healthcare is not hype; it is a promising tool at our disposal, just not one to expect miracles from.

-Noel J. Guillama, Chairman