How will Artificial Intelligence (AI) Impact Healthcare? (Part 9)

In this week’s blog, we will take a break from discussing the beneficial uses of Artificial Intelligence (AI) to address the concerns and vulnerabilities that AI could bring. Our view, specific to healthcare, is that if AI is not carefully and ethically implemented, it has real drawbacks. We are aware of many of them, because we focus our Company on how to use AI to save lives, extend life, and enhance life.

Though we are 100% convinced that AI will change everything, there is also so much hyperbole and rhetoric surrounding AI ‘stories’ that credibility is stretched to the limit. In some cases, the claims would be comparable to Nike® advertising a shoe that will allow people to fly! We see at least three general areas of vulnerability we want to address in this blog, which will hopefully ground us and set the stage for the more detailed issues in the next blog.

PROBLEM #1 – MATURITY

Although the basic concepts embodied in AI have been around for at least 15 years, the advanced algorithms needed for machine learning and data analytics, as well as the advanced computers and programming languages that can quickly execute complex AI programs, have only recently come into the mainstream. So, in addition to the supporting infrastructure being new, the AI industry is still in its developmental, or infancy, stage. Fortunately, the knowledge base of operational and developmental experience is building quickly, and we are beginning to see ‘Lessons-Learned’ and assemble a “Body of Knowledge.”

We have all read that some researchers have used AI to improve or enhance their reports, only to discover later that the AI made up some of the information. We have also seen law firms fined for citing fake case references.[1]

Of course, there are similar technologies that offer some degree of guidance, but nothing has the potential for as much reward, and as much risk, as AI. The steep learning curve is still being climbed and will be for the next couple of years, even with the development of self-programming AI. Generative AI, popularized by OpenAI®’s ChatGPT® and a plethora of knockoffs and spinoffs, is still only the first rung on the ladder of developing truly useful AI applications and programs.

So, what does this say for those of us currently developing AI-healthcare platforms?

It says that we have to chart a very cautious course and that we cannot over-plan or over-test the algorithms. We must vet the outcomes against a wider variety of variables and ‘benchmark’ the results each step of the way. AI is an expensive proposition, and adding these safeguards will certainly add cost to development; however, it will also provide the single, consistent answers needed to build trust and confidence in the use of the platform.
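
To make “benchmark the results each step of the way” concrete, here is a minimal sketch, in Python, of how a development team might score each new model build against a clinician-validated reference set and refuse to proceed if the score regresses. The case IDs, labels, and function names are invented for illustration, not taken from any real platform.

```python
# Hypothetical illustration: benchmark a model's outputs against a
# clinician-validated reference set at each development step, so a
# regression is caught before the next step begins.

def benchmark(predictions: dict[str, str], reference: dict[str, str]) -> float:
    """Return the fraction of cases where the model agrees with the
    clinician-validated reference answer."""
    matched = sum(1 for case_id, answer in reference.items()
                  if predictions.get(case_id) == answer)
    return matched / len(reference)

# Reference answers vetted by clinicians (toy data).
reference = {"case-001": "benign", "case-002": "malignant", "case-003": "benign"}

# Outputs from two successive development steps of the model.
step_1 = {"case-001": "benign", "case-002": "malignant", "case-003": "malignant"}
step_2 = {"case-001": "benign", "case-002": "malignant", "case-003": "benign"}

previous_score = benchmark(step_1, reference)
current_score = benchmark(step_2, reference)
print(f"step 1: {previous_score:.0%}, step 2: {current_score:.0%}")

# Fail loudly if a change made the model worse than the benchmark it had
# already achieved -- the 'benchmark each step of the way' discipline.
assert current_score >= previous_score, "Regression against the benchmark set"
```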

The reality is that as we observe the landscape in which some early-stage AI-healthcare applications have sprung up, we find that caregivers complain AI models are unreliable and inconsistent, and therefore of limited value. Even evaluating them for accuracy and susceptibility to bias is still an unsettled science. How much value is there in an application that is not trusted? How would you, as a passenger, respond to a pilot flying you to a destination who says, ‘we have some new instruments; I’m not really comfortable with them yet, and in the past they have not taken us to the right airport’?

PROBLEM #2 – LACK OF HEALTHCARE EXPERIENCE

The issues noted in the preceding paragraphs are a direct result not only of AI itself being in its infancy but also of individuals, corporations, and agencies trying to apply the technology to an industry in which they have little to no experience. Boeing builds good airplanes and John Deere builds good tractors, but we would laugh at Boeing building a farm tractor or John Deere a passenger jet; yet that, in essence, is what we have seen for more than a decade in healthcare. There are many examples of solutions from other industries being “bolted on” to healthcare processes with little, if any, understanding of the process or of what the technology would do in the particular healthcare application – which is much different from a logistics or supply chain model, for instance.

As we’ve said many times – too often we see nothing but ‘solutions looking for problems.’ The example we’ve used is the electronic medical record (EMR) – conceptually a great idea to use the computer to manage a patient’s medical record instead of investing in huge stacks to hold paper-based files. Unfortunately, it appears the designers misjudged both a) the degree of resistance to change in the industry and b) the actual process flow between a physician and patient. The result today is an industry that is struggling: outside the large legacy systems of Cerner/Oracle, McKesson, and Epic, we find physicians abandoning the EMR to revert to the paper file system, and we have a “lesson-learned” – IF we choose to see it.

Without maturity and experience in both healthcare and technology, the rapid, meteoric rise in the development of AI-healthcare applications will meet the same fate, only many times worse!

PROBLEM #3 – EXACERBATING EXISTING SHORTCOMINGS

Those of us in healthcare are aware of the shortcomings, and we generally acknowledge that some serious systemic changes need to be made. In general, the U.S. healthcare system relies upon decades-old processes developed to ensure repeatable and consistent results in the delivery of care to patients.

The term “practice of medicine” itself indicates that delivering care, making diagnoses, and managing the course of a patient’s health is an ‘inexact’ science. Now, introduce into this subjective human process a computer application that provides detailed and supposedly “objective” analysis. Accuracy, consistency, and reliability come into question: whether the AI analysis supports or contradicts a physician’s diagnosis, which is to be trusted? It is clear why caregivers complain when certain early AI models prove unreliable and inconsistent, and therefore of limited value. More lessons learned and more additions to our knowledge base.
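
One way to keep the “which is to be trusted?” question from being answered silently by software is a simple procedural rule: treat disagreement itself as a signal. The Python sketch below, with invented names and diagnoses, illustrates one such hypothetical policy – escalating any AI-physician disagreement to a second human review rather than automatically preferring either side. This is an assumption about how a safeguard could work, not a description of any existing system.

```python
# Hypothetical safeguard sketch: when the AI analysis and the physician's
# diagnosis disagree, neither is trusted automatically -- the case is
# escalated for human review. Names and fields are invented.

from dataclasses import dataclass

@dataclass
class Assessment:
    case_id: str
    physician_diagnosis: str
    ai_diagnosis: str

def route(assessment: Assessment) -> str:
    """Agreement lets the case proceed; any disagreement is flagged for
    a second clinician's review rather than silently preferring either."""
    if assessment.ai_diagnosis == assessment.physician_diagnosis:
        return "proceed"
    return "escalate-for-review"

print(route(Assessment("case-007", "pneumonia", "pneumonia")))   # proceed
print(route(Assessment("case-008", "pneumonia", "bronchitis")))  # escalate-for-review
```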

Certainly, many of these issues are typical when adopting and adapting any new technology; however, the potentially far-reaching nature of AI demands a much higher degree of scrutiny and care in architecture, development, testing, and implementation. There are no standards to guide this technology or to ensure that the uniqueness of different healthcare systems is evaluated sufficiently.

Bias must be identified and filtered out, yet it is present to some degree in nearly every system and every person; this becomes even more significant in an AI-healthcare application, where the bias cannot be seen but is embedded in an algorithm.
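
Because bias buried in an algorithm cannot be seen by inspection, it has to be surfaced by measurement. Below is a minimal sketch, in Python with invented subgroup names and numbers, of one common audit approach: compare the model’s error rate across patient subgroups and flag large disparities for investigation.

```python
# Hypothetical illustration of surfacing bias hidden in an algorithm:
# compare the model's error rate across patient subgroups. Subgroup
# names and counts are invented for this sketch.

def error_rate(errors: int, total: int) -> float:
    return errors / total

# (errors, total cases) per subgroup from a hypothetical validation run.
results = {
    "group_a": (4, 200),
    "group_b": (19, 180),
}

rates = {group: error_rate(*counts) for group, counts in results.items()}
worst, best = max(rates.values()), min(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.1%} error rate")

# A large gap between subgroups is a red flag that bias is buried in the
# algorithm or its training data, even though no single line of code
# 'looks' biased.
if worst - best > 0.05:  # 5-point threshold, chosen arbitrarily here
    print("WARNING: subgroup disparity exceeds threshold -- audit for bias")
```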

The challenges in implementing any new technology can be daunting, and the challenges in implementing an AI-healthcare platform are perhaps even more so. We need AI: we cannot move our healthcare system to the next plateau and serve the nation’s growing healthcare needs without it. But we must step cautiously and smartly, and be willing to accept some setbacks along the way. Our motto, “Technology-Infused Healthcare™,” is even more appropriate with AI, as evidenced by the discussion in this and the preceding blogs. We must carefully and thoughtfully embed AI applications – the “bolt-on” approach will not work for many healthcare applications, as has been proven many, many times.

In our next blog, we will discuss the specific types of bias that can affect an AI application, along with the issues of accuracy, consistency, reliability, privacy and security, as well as costs and ethics. Further, we will continue to bring into focus the vulnerabilities that AI can create or exacerbate, and discuss specific areas where we need to be cautious when implementing this new, untested, and still-developing technology called AI.

– Carl L. Larsen, President & Chief Operating Officer of OXIO Health, Inc.

[1] https://www.forbes.com/sites/brianbushard/2023/01/10/fake-scientific-abstracts-written-by-chatgpt-fooled-scientists-study-finds/

https://www.msn.com/en-us/money/other/a-law-firm-was-fined-5000-after-one-of-its-lawyers-used-chatgpt-to-write-a-court-brief-riddled-with-fake-case-references/ar-AA1cW2XZ

https://builtin.com/artificial-intelligence/ai-fake-science