AI Tools and the Delivery and Documentation of Care: Too Good To Be True (for Now)?

Jun 25, 2023 at 09:47 pm by Staff

By Anjali Dooley, Esq., MBA


For an industry that is chronically understaffed and in which providers bear enormous responsibility, artificial intelligence (AI) can sound like a godsend. Tools that streamline, focus, and improve the work — and workflows — of physicians and other clinical practitioners could alleviate burnout and administrative burden. Yet as a lawyer who advises physicians and nurses every day, I, like a significant number of healthcare professionals, view AI with suspicion: a wolf in sheep's clothing whose as-yet-unknown data privacy, diagnostic, medical, and industry risks could outweigh its potential benefits.

AI vs. ML: Is there a difference?

Before exploring the current and potential benefits of using AI in any medical practice, it is important to understand the difference between AI and machine learning (ML). The two terms are often used interchangeably but differ in scope and application. According to the Fu Foundation School of Engineering and Applied Science at Columbia University:

“[A]rtificial intelligence refers to the general ability of computers to emulate human thought and perform tasks in real-world environments, while machine learning refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience and data.”

Put more directly, ML is the process through which a computer system "develops" its intelligence, while AI simulates human cognition and reasoning. Although the two are highly interconnected, for most users AI is the interface through which they work and communicate with these sophisticated tools.
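To make the distinction concrete, consider a minimal machine-learning sketch in Python. It is purely illustrative — the toy dataset and the simple linear model are hypothetical, not drawn from any clinical system — but it shows what Columbia's definition means by a system that "improves itself through experience and data": the program repeatedly adjusts its parameters to shrink its prediction error.

    # A minimal, purely illustrative "machine learning" loop: the model
    # improves through experience (data) by adjusting its parameters to
    # reduce its prediction error. The numbers below are hypothetical.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, observed output)

    w, b = 0.0, 0.0        # parameters of a simple linear model, initially untrained
    learning_rate = 0.01

    for epoch in range(1000):            # repeated exposure to the same experience
        for x, y in data:
            prediction = w * x + b       # the model's current guess
            error = prediction - y
            w -= learning_rate * error * x   # nudge parameters to reduce the error
            b -= learning_rate * error

    print(f"learned pattern: y = {w:.2f}x + {b:.2f}")

Nothing in this loop reasons like a person; the program simply identifies a pattern in data and refines it through repetition. That narrow pattern-finding is machine learning; simulating broader human cognition is what the Columbia definition reserves for AI.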

The case for AI in medical practice

For healthcare providers, AI can, among other things:

  • Automate and streamline the scheduling of patient appointments, procedures, and follow-up care
  • Record, document, and analyze patient interactions, whether in person or via telehealth
  • Analyze data from electronic health records (EHR) and recorded interactions to provide clinical decision support
  • Read X-rays, ultrasounds, MRIs, CT scans, and other diagnostic images
  • Help emergency and urgent-care staff in rural areas identify and treat certain types of medical events without the on-site presence of specialist physicians
  • Identify factors that can shorten hospital stays and prevent readmissions
  • Conduct high-volume data analyses that support personalized and precision medicine, including the treatment of cancer, diabetes, and other life-threatening and chronic conditions
  • Manage human and other resources cost-effectively and identify revenue-maximizing practices
  • Enable public health agencies to quickly identify and respond to illness outbreaks and other emerging medical issues in communities

Taken together, these capabilities give AI the potential to improve patient access to healthcare, create a better patient experience, increase diagnostic accuracy, identify more effective treatments, improve outcomes, and reduce the costs of healthcare, doing much of what humans already do, only faster and more consistently.

The case against AI in medical practice

While many of the potential uses and advantages of AI in healthcare appear obvious, some urge caution. They point to a number of shortcomings in current AI tools that may produce both one-off and systemic errors, errors that could, at best, cause confusion and clerical mistakes and, at worst, harm patient health and outcomes and expose the clinicians using the tools to malpractice claims.

For example, the vast amounts of personal medical data required to create effective algorithms make such databases a tempting target for cybercriminals, hackers, and other bad actors willing to attack vulnerabilities at every step of the data pipeline. Current AI tools often struggle to recognize and decipher the speech of patients with accents. ChatGPT, Bard, and other large-language-model and generative AI tools not only get facts wrong, they sometimes invent them out of whole cloth (in what are known as “hallucinations”).

Further, AI is only as bias-free as its developers and users; for example, slight differences in the words used to form a query, or in their sequence, can produce very different answers. (In some respects, AI is not built to be accurate; it is built to be convincing.) Complicating things further, given the extraordinary amount of data and the lightning-fast speed of machine learning, even the most experienced AI researchers don't always know what is happening inside the "black box."

Even when AI becomes more reliable — which it will — such reliability brings an added risk: users may grow too trusting and less vigilant, and clinicians may rely on AI to make decisions rather than on their own education, training, and experience.

Other real-world scenarios in which AI could create challenges for physicians, staff, and patients include the following:

  • The most glaring risks are patient injury and death. Today these are largely individual errors; however, as the use of AI gains traction, mistakes could scale into mass events that affect hundreds, if not thousands, of patients.
  • Vulnerable populations could be identified and targeted by more sophisticated healthcare-related scams that use AI to impersonate real clinicians "identifying" clinical problems.
  • Certain medical professions, such as radiology, could experience tectonic shifts, as much of their work becomes automated. Medical students may select other specialties, reducing the number of well-trained, highly skilled professionals needed to identify and correct possible errors and drive the expansion of medical knowledge.

To use or not to use AI?

As with any new technology — or treatment, for that matter — the benefits of AI in healthcare must be weighed against the potential risks, and AI tools must be regulated, just as the clinicians using them are. Medical licensing boards will need to update their rules to address how clinicians use AI, including licensing procedures and penalties for misuse and lack of oversight. Early adopters of healthcare AI tools suggest a balanced approach, in which new platforms and processes are paired with a range of checks, including, for example, requiring a trained professional to review diagnostic decisions and treatment recommendations made by AI.

Similarly, where real-time language-translation tools are used, human translators may still be required to confirm the accuracy of what is being communicated between patients and their physicians and nurses.

From the micro- to the macro-level, a number of possible solutions to the above-mentioned risks can be identified:

  • Expand patient and provider education. The more informed individuals are about the benefits and risks of AI-supported healthcare tools, the better positioned they are to make appropriate medical decisions. Further, if AI can take over more mundane tasks, such as pre-authorizations, this may allow patients and physicians to spend more time together during exams and visits and discuss options more thoroughly.
  • Adopt meaningful oversight and regulation. As OpenAI CEO Sam Altman told the Senate Judiciary Committee on May 16, 2023, "Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls." Such oversight is especially critical in healthcare. Federal and state government agencies, along with private and for-profit hospitals and health systems, can establish quality oversight panels to govern the use of AI in their organizations, and some already are doing so.
  • Improve the size, scale, diversity, and reliability of healthcare data. High-quality, comprehensive datasets can go a long way toward addressing many of the issues described above.

Without question, AI is here to stay, in healthcare and everywhere else. How it works — and how well it works — is up to us. 

Anjali Dooley is special counsel on the Healthcare Industry Team at Jones Walker LLP. She can be reached at adooley@joneswalker.com.