AI Tools and the Delivery and Documentation of Care: Too Good To Be True (for Now)?

Jun 25, 2023 at 09:47 pm by Staff



By Anjali Dooley, Esq., MBA


For an industry that is chronically understaffed and in which providers bear enormous responsibility, artificial intelligence (AI) can sound like a godsend. Tools that streamline, focus, and improve the work, and workflows, of physicians and other clinical practitioners could alleviate burnout and administrative burden. Yet as a lawyer who advises physicians and nurses every day, I, like many healthcare professionals, view AI with suspicion, seeing it as something of a wolf in sheep's clothing that may give rise to as-yet-unknown data privacy, diagnostic, medical, and industry risks that outweigh its potential benefits.

AI vs. ML: Is there a difference?

Before exploring the current and potential benefits of using AI in any medical practice, it is important to understand the difference between AI and ML (machine learning), which are often used interchangeably, but differ in scope of usability and applications. According to the Fu Foundation School of Engineering and Applied Science at Columbia University:

“[A]rtificial intelligence refers to the general ability of computers to emulate human thought and perform tasks in real-world environments, while machine learning refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience and data.”

Put more directly, ML is the process through which a computer system "develops" its intelligence, while AI simulates human cognition and reasoning. Although the two are highly interconnected, for most users AI is the interface through which they work and communicate with these sophisticated tools.

The case for AI in medical practice

For healthcare providers, AI can take on a range of clinical and administrative tasks.

Taken together, these capabilities give AI the potential to improve patient access to healthcare, create a better patient experience, increase diagnostic accuracy, identify more effective treatments, improve outcomes, and reduce the costs of care, performing much as humans do, but with greater speed and consistency.

The case against AI in medical practice

While many of the potential uses and advantages of AI in healthcare appear obvious, some urge caution. They point to a number of shortcomings in current AI tools that may lead to both one-off and systemic errors, errors that could, at best, cause confusion and clerical mistakes and, at worst, harm patient health and outcomes, exposing the clinicians who use the tools to malpractice liability.

For example, the vast amounts of personal medical data required to create effective algorithms make such databases a tempting target for cybercriminals, hackers, and other bad actors willing to attack vulnerabilities at every step of the data pipeline. Current AI tools often struggle to recognize and decipher the speech of patients with accents. ChatGPT, Bard, and other large-language-model and generative AI tools not only get facts wrong, they sometimes invent them out of whole cloth (in what are known as “hallucinations”).

Further, AI is only as bias-free as its developers, its training data, and its users. It is also highly sensitive to how it is prompted: slight differences in the words, or the order of words, used to form a query can produce very different answers. (In some respects, AI is built not to be accurate but to be convincing.) Complicating things further, given the extraordinary volume of data and lightning-fast speed of machine learning, even the most experienced AI researchers do not always know what is happening inside the "black box."

Even when AI becomes more reliable, and it will, that reliability brings an added risk: users may grow too trusting and less vigilant, and clinicians may come to rely on AI to make decisions rather than on their own education, training, and experience.

There are many other real-world scenarios in which AI could create challenges for physicians, staff, and patients.

To use or not use AI?

As with any new technology, or treatment, for that matter, the benefits of AI in healthcare must be weighed against the potential risks, and AI tools must be regulated, just as the clinicians who use them are. Medical licensing boards will need to substantially update how they oversee clinicians' use of AI, adapting licensing procedures and establishing penalties for misuse and lack of oversight. Early adopters of healthcare AI tools suggest a balanced approach, in which new platforms and processes are paired with a range of checks, including, for example, requiring a trained professional to review diagnostic decisions and treatment recommendations made by AI.

Similarly, where real-time language-translation tools are used, human translators may still be required to confirm the accuracy of what is being communicated between the patient and the physicians and nurses.

From the micro- to the macro-level, a number of possible solutions to these risks can be identified, ranging from safeguards within individual practices to industry-wide standards.

Without question, AI is here to stay, in healthcare and everywhere else. How it works — and how well it works — is up to us. 

Anjali Dooley is special counsel on the Healthcare Industry Team at Jones Walker LLP. She can be reached at adooley@joneswalker.com.
