
Generative AI in Healthcare: Balancing AI and Humanism

Kevin Toh


Imagine sitting in the doctor’s office, anxiously waiting for the results of your health checkup. As the doctor goes through your medical history, you notice something unsettling: the doctor is using ChatGPT, a generative AI tool, to summarize your medical history from the National Electronic Health Record system. As a patient, how would you react at that moment? Personally, I would be sceptical of the doctor’s abilities. After all, shouldn’t doctors possess the expertise to understand and interpret our medical conditions?

Imagine yourself being medically diagnosed by AI (Photo: CapeStart)

In a recent Forbes article [1], Dr Lance B. Eliot, an established AI expert, highlights both the potential benefits and the shortcomings of integrating generative AI (Gen-AI) into the summarization of medical notes. In his commentary, he advocates for a cautious approach to Gen-AI adoption in healthcare through prompt engineering. Expanding on his discussion, I invite you to consider how the use of Gen-AI in healthcare affects human agency, the ability of humans to act on their own will. While Gen-AI simplifies the tedious task of medical note summarization, it risks disregarding the complexities of medical decision making. Moreover, the relationship between doctors and patients is built on trust and empathy. When Gen-AI takes centre stage, these humanistic aspects may fade away, potentially hurting doctor-patient relationships.

The Swift Digital Scribe

Enter Gen-AI, a swift digital scribe. As a computer science student, I have saved an immense amount of time using ChatGPT to generate concise summaries, and I envision similar gains in the healthcare sector. This is especially so as physicians spend up to two additional hours per patient documenting in and navigating the Electronic Health Record system [2]. In a study published in Mayo Clinic Proceedings: Digital Health, surgeons used a large language model to generate accurate clinical notes in 5 seconds, a task that would otherwise have taken 7 minutes to write [3]. The efficiency of Gen-AI offers a beacon of hope, freeing doctors from administrative burden and allowing them to redirect their efforts towards patient care. However, amid our admiration for Gen-AI’s speed, we must also exercise caution and discernment. Gen-AI, like any technological innovation, is not flawless. It is susceptible to training biases, a significant concern that warrants careful consideration.

The Dangers of Omitting Essential Information

While Gen-AI excels at creating concise summaries, its reliance on pre-existing data may result in the inadvertent omission of crucial details [4]. This is due to machine bias, as Gen-AI models learn from human-generated datasets [5]. In the context of healthcare, such oversights can have serious consequences, potentially leading to misdiagnosis and inappropriate treatment decisions. In fact, in a study conducted by Cohen Children’s Medical Center [6], ChatGPT failed to accurately diagnose 83% of paediatric cases. This demonstrates that current Gen-AI technologies lack the sophistication and contextual understanding of a patient’s condition required for precise medical diagnosis.
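To make the scribe workflow and the omission concern concrete, here is a minimal, purely illustrative sketch of how a note-summarization prompt might be constrained through prompt engineering. The OpenAI Python client, the model choice, the prompt wording, and the de-identified note below are all assumptions for the sake of illustration, not the setup used in the studies cited above.

```python
# Illustrative sketch only: a hypothetical, de-identified note and prompt,
# not the workflow from the cited studies. Assumes the openai package is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

note = """
67 y/o male, post-op day 2 after laparoscopic cholecystectomy.
Allergies: penicillin. Meds: metformin 500 mg BID, lisinopril 10 mg OD.
Afebrile, mild incisional pain, tolerating diet.
"""

response = client.chat.completions.create(
    model="gpt-4o",   # hypothetical model choice for illustration
    temperature=0,    # reduce variability in the generated summary
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the clinical note for a discharge summary. "
                "Never omit medications, allergies, or abnormal findings. "
                "If information is missing or ambiguous, mark it as "
                "'[REQUIRES CLINICIAN REVIEW]' instead of guessing."
            ),
        },
        {"role": "user", "content": note},
    ],
)

print(response.choices[0].message.content)
```

Even with such instructions, the output is only a draft; a clinician would still need to verify it against the source record before it informs any decision, for exactly the omission and bias reasons discussed above.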

Complexities of Medical Decision Making

Gen-AI tools such as ChatGPT lack the ability to understand the complexities of medical conditions. Medical decisions often rely on a multifaceted understanding of the patient and involve synthesizing information from multiple sources, including lab test results, imaging studies, the patient’s history, symptoms, living environment and clinical judgement [7]. A key limitation of Gen-AI is its inability to directly observe and assess the environmental factors affecting a patient’s health. More importantly, medical decision making should be ethical, involving the patient in the decision making process [8]. However, the complexity of medical decisions can leave some patients feeling overwhelmed, so they continue to seek guidance from their doctors. To preserve the humanistic aspects of healthcare, particularly the trust between patients and doctors, doctors should therefore not replace their expertise and judgement entirely with Gen-AI.

Doctor-Patient Relationship

Traditionally, patients and doctors have held close, enduring relationships built on trust and a deep understanding of the patient’s needs. These bonds were characterized by a strong sense of personal connection, as doctors had the time to listen to their patients’ concerns and provide not only medical treatment but also emotional support [9]. However, employing Gen-AI to summarise a patient’s medical notes introduces a shift in this dynamic. The lack of human interaction may leave patients feeling detached rather than respected and understood, as if their healthcare needs are being processed by a faceless algorithm rather than addressed by a compassionate human being [10]. This reliance on technology has the potential to erode the relationship between patients and their healthcare providers, giving rise to distrust and dissatisfaction. Patients may worry whether their healthcare needs are truly understood and met, leading to a breakdown in communication and collaboration.

Conclusion

In conclusion, while Gen-AI in healthcare holds promise for improving productivity, healthcare workers should be aware of its limitations and approach it cautiously. We should not overlook the humanistic aspects of healthcare, which include the intricacies of medical decision making as well as the trust and empathy shown in doctor-patient relationships. As we navigate the future of healthcare, let us prioritize human agency and ethical considerations to ensure that patient care remains paramount.

References

[1] L. Eliot, “Doctors Relying On Generative AI To Summarize Medical Notes Might Unknowingly Be Taking Big Risks,” Forbes, Feb. 05, 2024. https://www.forbes.com/sites/lanceeliot/2024/02/05/doctors-relying-on-generative-ai-to-summarize-medical-notes-might-unknowingly-be-taking-big-risks/?sh=4692f46046ed

[2] J. Budd, “Burnout Related to Electronic Health Record Use in Primary Care,” Journal of Primary Care & Community Health, vol. 14, Apr. 2023, doi: https://doi.org/10.1177/21501319231166921.

[3] A. Abdelhady and C. R. Davis, “Plastic Surgery and Artificial Intelligence: How ChatGPT Improved Operation Note Accuracy, Time, and Education,” Mayo Clinic Proceedings: Digital Health, vol. 1, no. 3, pp. 299–308, Sep. 2023, doi: https://doi.org/10.1016/j.mcpdig.2023.06.002.

[4] A. Lal, “Generative AI,” Medical Economics, vol. 101, Mar. 2024. Accessed: Mar. 28, 2024. [Online]. Available: https://www.medicaleconomics.com/view/generative-ai

[5] K. Knapton, “Council Post: Navigating The Biases In LLM Generative AI: A Guide To Responsible Implementation,” Forbes, Sep. 06, 2023. https://www.forbes.com/sites/forbestechcouncil/2023/09/06/navigating-the-biases-in-llm-generative-ai-a-guide-to-responsible-implementation/?sh=12a3ff165cd2 (accessed Mar. 28, 2024).

[6] C. Dibenedetto, “ChatGPT fails at diagnosing child medical cases. It’s wrong 83 percent of the time,” Mashable SEA, Jan. 04, 2024. https://sea.mashable.com/tech/30257/chatgpt-fails-at-diagnosing-child-medical-cases-its-wrong-83-percent-of-the-time

[7] R. Sutton and D. Pincock, “An overview of clinical decision support systems: benefits, risks, and strategies for success,” NPJ Digital Medicine, vol. 3, no. 1, pp. 1–10, Feb. 2020, doi: https://doi.org/10.1038/s41746-020-0221-y.

[8] Ipsen US, “Shared decision-making: A true patient-centric approach to care,” Feb. 29, 2024. https://www.ipsen.com/us/improving-lives/shared-decision-making-a-true-patient-centric-approach-to-care/

[9] R. Pearl, “How generative AI will change the doctor-patient relationship,” LinkedIn, Oct. 23, 2023. https://www.linkedin.com/pulse/how-generative-ai-change-doctor-patient-relationship-pearl-m-d-/?trackingId=A5C3Wo6ZT6aO1IA%2BTJiwLg%3D%3D (accessed Mar. 28, 2024).

[10] A. Kerasidou, “Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare,” Bulletin of the World Health Organization, vol. 98, no. 4, pp. 245–250, Jan. 2020, doi: https://doi.org/10.2471/blt.19.237198.