
SERIES: Myths in AI - Will clinicians be replaced in health care decision-making?

Estimated read time: 6 minutes

by Tanuj Gupta, MD

Published on 7/13/2020

Artificial intelligence (AI) and machine learning (ML) are hot topics in health care that usher in great hope for the advancement of our industry. While they have the potential to transform patient care, quality and outcomes, there are also concerns about the negative impact this technology could have on human interaction, as well as the burden it could place on clinicians and health systems.

Through this blog series, Tanuj Gupta, MD, vice president, Cerner Intelligence, addresses some of the most common myths he’s encountered during his conversations with health care leaders across the globe. His goal is to give you the facts so you can make informed decisions about how your organization maximizes AI and ML (read part 1 of the series here).

In this Q&A, Dr. Gupta discusses the possibility of AI and ML replacing human clinicians, and why these technologies have the potential to help doctors improve care delivery and patient outcomes.

Q: Let’s cut to the chase. Is there potential for AI and ML to take health care decision-making away from clinicians?

A: The short answer is no. AI and ML will not replace clinician judgment. Providers will always have to be involved in the decision-making process because we hold them accountable for patient care and outcomes. We already have successful guardrails in other areas of health care, and similar ones will likely evolve for AI and ML. One parallel is verbal orders: if a doctor gives a nurse a verbal order for a medication, the nurse repeats it back before entering it in the chart, and the doctor must sign off on it. If that medication ends up causing harm to the patient, the doctor can’t say the nurse is at fault.

Additionally, any standing protocol orders that a hospital wants to institute must be approved by a committee of physicians who then have a regular review period to ensure the protocols are still safe and effective. That way, if the nurse executes a protocol order and there’s a patient safety issue, that medical committee is responsible and accountable — not the nurse.

The same will be true for AI and ML algorithms. There won’t be an algorithm that arbitrarily runs on a tool or machine and treats a patient without doctor oversight. If we throw a bunch of algorithms into the electronic health record (EHR) that say, “treat the patient this way” or “diagnose him with this,” we’ll have to hold the clinician accountable for the outcomes, and possibly the algorithm maker as well if algorithms become regulated by the U.S. Food and Drug Administration. I can’t imagine a situation where that would change.


Q: How can the industry help providers become more comfortable with using AI and ML?

A: Providers want to have the ability to override algorithms they disagree with from their medical perspective. There must be rules within these algorithms that allow a clinician to say, “I’m making an exception, and here's why.” There are all kinds of reasons why an algorithm’s recommendation might need to be cancelled by a human, like database issues or electrical surges that could cause glitches. Manual override and human control are almost always necessary. It’s the same in other industries. For instance, there’s always the option for a crew member to override the autopilot system on an airplane if it fails. I don’t think we’ll get to a point where we trust an algorithm more than the physician’s decision.
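
To make that concrete, here is a minimal sketch of what a clinician override path could look like in software. Every name here is hypothetical rather than any real Cerner or EHR API; the point is simply that a recommendation is never auto-executed, and an override always records the clinician’s reason:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlgorithmRecommendation:
    """Hypothetical decision-support suggestion a clinician must act on."""
    patient_id: str
    suggestion: str          # e.g., "Consider ordering a troponin test"
    model_version: str       # recorded so the algorithm maker stays auditable
    status: str = "pending"  # pending | accepted | overridden
    audit_log: list = field(default_factory=list)

    def accept(self, clinician_id: str) -> None:
        self.status = "accepted"
        self._log(clinician_id, "accepted")

    def override(self, clinician_id: str, reason: str) -> None:
        # "I'm making an exception, and here's why" -- a reason is mandatory.
        if not reason:
            raise ValueError("An override must include a documented reason")
        self.status = "overridden"
        self._log(clinician_id, f"overridden: {reason}")

    def _log(self, clinician_id: str, action: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc), clinician_id, action))

rec = AlgorithmRecommendation("pt-123", "Suggest medication X", "model-v2.1")
rec.override("dr-smith", "Patient has a documented allergy to medication X")
```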

Q: How does user experience design factor into the relationship between clinicians and AI and ML?

A: We must be careful in how we design the look and feel of these algorithms so that they don’t unintentionally pressure a clinician to act a certain way. For example, if an alert flashes red, a doctor might feel compelled to follow the algorithm’s recommendation. The same could happen if an algorithm says, “order this test now” versus “we suggest this test for the following reasons.” There are certain design elements that must be considered to preserve the clinician’s authority to make the ultimate call for the patient, as the sketch below illustrates.
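
As a toy illustration (purely hypothetical fields, not a real EHR alert configuration), the same recommendation can be presented with very different degrees of pressure:

```python
# Two presentations of the same recommendation, differing only in design.
directive_alert = {
    "text": "Order this test now",
    "color": "red",          # urgent styling pressures the clinician to comply
    "dismissible": False,
}

suggestive_alert = {
    "text": "We suggest this test for the following reasons: ...",
    "color": "neutral",      # advisory styling preserves clinician authority
    "dismissible": True,
    "show_rationale": True,  # reasons displayed alongside the suggestion
}
```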

Q: What does the current health care landscape look like for AI and ML?

A: Health systems are using machine learning now mostly for population health predictions tied to social determinants of health. For example, a health system might order an Uber for a patient who doesn’t have transportation so they don’t miss an appointment, or write a prescription for a food pharmacy because a family is experiencing a nutritional deficit. Operational and financial predictions are also occurring around things like emergency department and inpatient volumes as well as cash flow.

Currently, AI and ML aren’t being leveraged as heavily to aid diagnosis or treatment because of the risks involved and the lack of regulation. In the clinical setting, integrating AI and ML comes down to a question of patient safety.

Q: As we continue to fight COVID-19, there’s concern about the health care system getting overwhelmed. Could AI and ML help alleviate some of the burden of fighting a pandemic?

A: Neither the industry nor the world was ready for COVID-19. However, this won’t be the last pandemic we see, and I believe we’ll be ready for the next one.

One way we can prepare for the next pandemic is to fast-track algorithms that help us triage patients into the appropriate risk buckets. In the future, what if we could take a month or two of data and develop an early warning system? If you have that algorithm running alongside clinicians, then you’ve effectively increased the capacity of your health care system. There’s a lot of potential if we as a health care system empower digital therapeutics and diagnostics to help us respond better and faster during pandemics.
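
As a minimal sketch of that idea, the snippet below trains a toy logistic regression on synthetic data standing in for a month or two of labeled encounters, then maps each patient’s predicted probability into a risk bucket for clinician review. The features, thresholds and model choice are illustrative assumptions, not a production early warning system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for a month or two of labeled encounter data;
# real features might be vitals, labs, age and comorbidity counts.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(size=500) > 1).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def triage_bucket(patient_features: np.ndarray) -> str:
    """Map a predicted probability to a coarse risk bucket for clinician review."""
    p = model.predict_proba(patient_features.reshape(1, -1))[0, 1]
    if p >= 0.7:
        return "high risk"
    if p >= 0.3:
        return "medium risk"
    return "low risk"

# The algorithm runs *alongside* clinicians: it flags patients for review,
# it never triggers treatment on its own.
print(triage_bucket(rng.normal(size=4)))
```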

Q: How can AI and ML empower clinicians to provide better care beyond pandemics?

A: Clinicians can use, and are using, AI and ML to improve care, and maybe make health care even more human than it is today. If you think about a machine learning algorithm that predicts your risk of a disease in advance, it’s essentially performing a similar function to a blood-based lab test. If I think I have strep throat and go in for a test where they swab my throat for cells, that test is predicting the likelihood of me having strep throat. It’s the same with the machine learning algorithm, except it’s using data instead of blood or cells to predict something.
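
Framed as code, the parallel is direct: both a conventional test and a “data-based lab test” take a specimen in and return a likelihood. The toy model and coefficients below are purely illustrative assumptions, not a clinical predictor:

```python
class ToyRiskModel:
    """Stand-in for any trained disease-risk predictor."""
    def predict(self, age: float, systolic_bp: float) -> float:
        # Illustrative coefficients only; a real model would be trained on data.
        return min(1.0, 0.01 * age + 0.002 * systolic_bp)

def data_based_lab_test(patient_record: dict, model: ToyRiskModel) -> float:
    """Like a throat swab predicting strep, this predicts disease
    likelihood from structured data instead of a physical specimen."""
    return model.predict(patient_record["age"], patient_record["systolic_bp"])

risk = data_based_lab_test({"age": 62, "systolic_bp": 145}, ToyRiskModel())
print(f"Predicted disease risk: {risk:.0%}")  # 91% with these toy numbers
```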

One of the demonstration projects Cerner did with Amazon Web Services was to see if we could predict heart failure 15 months in advance, which means the patient is probably showing minimal to no symptoms, or symptoms that health care providers wouldn’t normally detect. It’s advantageous for a clinician to be able to use data to predict health problems far in advance without having to run a potentially invasive lab test on a patient. It allows clinicians to intervene sooner, be more proactive about diseases and move toward value-based care. Data-based lab tests and digital therapeutics could be a brand-new field for medicine. Doctors will no longer prescribe just a drug or device; they’ll also prescribe an algorithm that adjusts your drug or device usage based on other factors we see in the data.

AI and ML could also allow physicians to enhance the quality of time spent with patients. Data-based lab tests offer much faster results than blood-based tests. If a doctor can test for things right away and give results during the appointment, that 15-minute conversation between a patient and their provider could be much more meaningful and efficient.

Bottom line, I think we as a health care industry should embrace AI and ML technology. It won’t replace us; it will just become a new and effective toolset to use with our patients. And using this technology responsibly means always staying on top of any potential patient safety risks.

There is another risk we haven’t talked about yet, particularly with ML algorithms, and that’s the risk of discrimination or bias. Will AI/ML perpetuate racial bias or disparities in health care, and if so, how do we protect against this? In our next blog in this series, we will look at this question more deeply.