ChatGPT — Everything you want in a doctor?

Join-the-dots surgery?

Doctors have been using robots in surgical procedures for years, relying on their often greater precision. Advanced technology has its place in the practice of medicine — but how would the patient on the operating table feel if he noticed the surgeon asking ChatGPT how best to proceed?

John* is the father of premature triplets, two of whom died in a UK hospital after a nurse allegedly injected air into their bloodstreams. He has described how staff in the NICU were looking up procedures online as they tried desperately to save the babies’ lives.

There was a doctor at what looked like a makeshift desk using a screen to look up how to perform the chest drain and where the incisions and tubes should go. It looked as though they were following a tutorial and not as if they really knew what they were doing.

Then he saw a nurse also apparently unable to do what was needed without computer guidance:

She was youngish. She had a PC screen in front of her, and as soon as I saw this come up on the screen, I panicked. I was confused as to why she was Googling this.
The procedure was a lung drain. On the screen, there was an image of a person with an arrow pointing to where the incision should be. It was a medical diagram...
I was angry at this point. I can remember other staff coming over to the computer to look at it. They all had a word with each other and did the procedure. This was worrying me, because it's an everyday procedure, one that hospital staff must do day in, day out.


Check-the-symptoms diagnosis?

This was hopefully just an extreme example of “incorporating” technology into hospitals. But computers in general, and AI in particular, are already making great inroads into hospitals and GP surgeries, even though there are no formal guidelines regarding when it is appropriate, and when it might be dangerous, to rely on AI assistance when treating patients.

A recent survey of over 1,000 GPs in the UK found that more than 1 in 4 admitted using AI tools to suggest what is called a differential diagnosis: the set of possible conditions a doctor weighs when a patient presents with symptoms that could indicate a number of different diseases. The doctors surveyed also said they used AI tools to suggest treatment options.

More than 1 in 4 GPs also admitted using AI tools to generate necessary documentation, probably a more innocuous way to incorporate technology: it saves valuable time and is unlikely to endanger patients. However, the AI tools that doctors are using to diagnose and treat, such as ChatGPT, Bing AI, and Google Bard, are not specifically designed for use in a medical setting. If doctors are using such tools on a regular basis, this suggests that the AI tools are providing information they don’t have at their fingertips — which in turn suggests they may lack the breadth of knowledge to double-check what the AI tells them.

Now you see it, now you don't cancer?

While professional guidance for using AI in medical settings does not yet exist, more specific uses of AI have been officially endorsed in some parts of the world. The UK has recently begun using an AI tool called “C the Signs” to help doctors detect cancer in patients.

C the Signs uses information drawn from medical records, test results, and drug prescriptions, as well as family history, to assist in identifying cancer cases. According to an analysis performed earlier this year, around 15 percent of GP practices are already using C the Signs, and it is claimed that the rate of cancer detection has risen from 59 percent to 66 percent as a result.

Detecting cancers that doctors miss could be an advantage if those cancers can be effectively treated. However, determining that a person has cancer when in fact he does not is a very real concern with AI tools, as researchers have already noted on multiple occasions.

Such false detections are sometimes referred to as “hallucinations,” since the AI “sees” what is not actually there. The team of researchers that investigated the use of C the Signs stressed that doctors need to be better educated in how to use AI effectively and responsibly:

The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarizing information but also the risks in terms of hallucinations [perception of non-existent patterns or objects], algorithmic biases, and the potential to compromise patient privacy.

The computer with a great bedside manner?

Meanwhile, a study published in JAMA (the Journal of the American Medical Association) offers an intriguing reason why AI might be welcomed by some patients. Apparently, bots such as ChatGPT and Bard are more empathetic than flesh-and-blood doctors and have more time for their patients’ queries.

The study is based on a sample of 195 randomly chosen digital exchanges between patients and doctors or bots. It found that the replies generated by bots averaged 211 words in length, as opposed to just 52 words for physicians. It also found that the quality of the responses provided by bots was superior (as assessed by human evaluators) to that provided by doctors, and that bot responses were far more empathetic on average.

On average, physicians’ responses were “acceptable” whereas the average for bots was “good.”

27 percent of doctors’ responses were rated “less than acceptable,” as opposed to just 3 percent for bots.

Only 22 percent of doctors’ responses were rated “good” or “very good,” while bots were rated either “good” or “very good” in 79 percent of cases.


When it came to empathy, doctors’ responses were on average “slightly empathetic” whereas bots averaged “empathetic.”

81 percent of doctors’ responses were rated “less than slightly empathetic,” as opposed to just 15 percent for bots.

And just 5 percent of doctors’ responses were rated “empathetic” or “very empathetic,” versus 45 percent of responses provided by bots.

What does it really take to heal?

The study’s authors concluded that more research is needed into the effect of AI in medical settings:

While this cross-sectional study has demonstrated promising results in the use of AI assistants for patient questions, it is crucial to note that further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings. Despite the limitations of this study and the frequent overhyping of new technologies, studying the addition of AI assistants to patient messaging workflows holds promise with the potential to improve both clinician and patient outcomes.

The study’s authors did not comment on the fact that it takes a doctor a lot longer to write 52 words than it takes a bot to write 211. Similarly, they made no comment on whether a computer can truly be “empathetic,” given that empathy implies genuine feeling, which computers lack. Given that patients generally value empathy in their doctors, and many believe that it contributes to healing, it might be interesting to research the different outcomes resulting from doctor empathy as opposed to AI “empathy.”

* not his real name