Artificial Intelligence Can’t Replace Your Doctor
Artificial intelligence (AI) has recently been touted as a revolutionary advance in clinical medicine that might save our failing healthcare system. A Google search for “artificial intelligence in medicine or healthcare” displays 99 pages of citations.
It’s true that, despite technical, security, financial, and regulatory issues of implementation, AI in medical practice can efficiently gather, organize, and statistically analyze quantitative data from multiple sources, e.g., stationary monitors, wearables, smartphones, and even cardiac pacemakers. Furthermore, AI can list diagnostic as well as therapeutic options and proffer published clinical guidelines, advisories, and algorithms.
AI can be especially helpful in public health. Applying AI to the early pandemic data in 2021 could have prevented Washington’s ill-conceived, disastrous response plan to COVID.
However, in the clinical practice of medicine, AI cannot substitute for human care providers (i.e., nurses, doctors, or therapists).
A recent article promoting AI claimed,
The implementation of augmented medicine [AI] is long-awaited by patients because it allows for a greater autonomy and a more personalized treatment, however, it is met with resistance from physicians which were not prepared for such an evolution of clinical practice.
In fact, both patients and physicians are extremely leery of AI in medicine.
Most people are afraid of change, any change. New technologies induce anxiety, particularly when they are related to our bodies. But there is another, stronger reason for public resistance to AI: the need for human connection.
Image: Medical provider and patient by freepik.
In-person communication, physical closeness, and direct physical contact are important components of the healing process. The phrase “healing hands” is not merely wishful thinking. Any experienced (read: older) clinical physician will confirm that the laying on of hands can be therapeutic and that direct human-to-human connection is always helpful.
AI can’t provide that to patients, no matter how well programmed.
Providers have a more compelling reason for eschewing AI: the limits of medical knowledge and the nature of patients.
Medical science is not like the hard sciences, viz., math, physics, and chemistry. Two plus three always equals five. The speed of light is 299,792,458 meters per second, no more, no less. The boiling point of copper is 2595° centigrade. It doesn’t matter who does the addition, what color the light is, or where the copper was mined.
In contrast to the knowledge base in hard sciences, medicine has no “always” answers. The precise mechanism of most diseases is not yet known. Doctors know that, in diabetics, the insulin-producing cells inside the pancreas malfunction, but they don’t know why those cells fail. Without knowing the root cause, no one treatment can work for all diabetics.
Patients are not commodities like numbers on a page, copper metal, or electric current. Each patient is unique, with a specific history, allergies, and specific responses to medications. Both ibuprofen and naproxen are NSAIDs (non-steroidal anti-inflammatory drugs). The former made my wife nauseated; the latter worked well. The reason different patients respond so differently to similar medications is unknown.
Gentamicin is a highly effective antibiotic against infection by Escherichia coli or Klebsiella pneumoniae. An AI, following a government-approved, evidence-based clinical algorithm, could reasonably prescribe the drug. But gentamicin can damage kidney function. If the patient has underlying renal disease, giving gentamicin could cause kidney failure.
The doctor must know his or her specific patient and must recognize the limits of medical knowledge. There is no treatment that has zero side effects. Nothing in medicine is risk-free—not aspirin, antibiotics, or appendectomy.
The good doctor uses judgment and intuition, advantages that only humans have. The good doctor knows her or his patient in a way no computer can. The good doctor takes all aspects of that one patient into account before recommending treatment: physical condition, psychology (such as risk acceptance or avoidance), family situation, and financial status.
Doctors sometimes have intuitions. These gut feelings are reasoning below the conscious level, connecting seemingly unrelated facts. Experienced physicians and surgeons have learned to listen to that little voice whispering, “do this, don’t do that; wait; operate right this minute.” While such intuitions are not infallible, they sometimes can save a patient’s life.
When I was a teenager, I was driving to dinner with my middle uncle, an obstetrician, when he said, “Mrs. Jones is in the hospital, but I have a feeling I need to see her now. Do you mind if we stop off to check on her?” Of course, I said, “Sure.”
As we drove into the parking lot, his beeper went off. Entering through the ER doors, we heard the PA system blare, “Dr. Ratzan, STAT! Room 412. Dr. Ratzan, STAT! Room 412.”
We ran up three flights of stairs to room 412, where the umbilical cord of Mrs. Jones’ baby had come out first. Her contractions were stopping blood flow to the baby. My uncle splashed antiseptic on her belly, put on gloves, and right there in her room, wearing suit and tie, he did an emergency Caesarean section to get the baby out before there was brain damage.
A doctor’s “gut” saved that baby. AIs do not have gut feelings. They have neither intuition nor judgment. And most assuredly, they lack compassion.
While there is a place for AI as an informational aide to clinical care providers, AI cannot, should not, and must not supplant the human touch and brain.
Deane Waldman, M.D., MBA is Professor Emeritus of Pediatrics, Pathology, and Decision Science; former Director of the Center for Healthcare Policy at Texas Public Policy Foundation; and author of the multi-award-winning book Curing the Cancer in U.S. Healthcare: StatesCare and Market-Based Medicine.