RE: Debunking Palliative Care Myths: Assessing the Performance of Artificial Intelligence Chatbots (ChatGPT vs. Google Gemini)
*Corresponding author: Hinpetch Daungsupawong, Private Academic Consultant, Lak 52 Phonhong, Vientiane 1000, Laos. hinpetchdaung@gmail.com
Received:
Accepted:
How to cite this article: Daungsupawong H, Wiwanitkit V. RE: Debunking Palliative Care Myths: Assessing the Performance of Artificial Intelligence Chatbots (ChatGPT vs. Google Gemini). Indian J Palliat Care. doi: 10.25259/IJPC_286_2024
Dear Editor,
We would like to comment on ‘Debunking Palliative Care Myths: Assessing the Performance of Artificial Intelligence Chatbots (ChatGPT vs. Google Gemini)’.[1] This study of the usefulness of artificial intelligence (AI) chatbots in refuting palliative care myths raises several concerns regarding its content and methodological approach. While the study produced useful insights, the selection of the 30 statements intended to reflect prevalent beliefs is crucial to its overall validity. A comprehensive, systematic review or survey of the current literature on palliative care myths could strengthen the selection process by ensuring that these statements accurately reflect the beliefs held by patients and caregivers.[2] Involvement in palliative care research is hindered both by reluctance amongst professionals and by myths suggesting that patients and caregivers are uninterested in participating.[2] Addressing these barriers and strengthening the evidence base underline the need for a comprehensive study of palliative care myths that accurately reflects the beliefs of patients and caregivers regarding their involvement. Furthermore, a more diverse dataset, incorporating insights from patients and from experts across many fields, would allow a better understanding of misconceptions and lead to more tailored chatbot responses.
The evaluation measures used – sensitivity, positive predictive value, accuracy and precision – are standard for assessing classification algorithms. However, the 3.3% true negative rate reported for ChatGPT warrants further investigation. This low figure suggests that ChatGPT has difficulty identifying false or misleading claims, raising concerns about its ability to detect subtle nuances in complicated healthcare topics. Furthermore, the study lacked a strong qualitative analysis of the chatbot’s responses. Measuring chatbot effectiveness is critical, but examining how these misconceptions are addressed, such as the clarity and empathy of the language employed, may provide a more complete picture of AI chatbots’ educational potential in delicate contexts such as palliative care.
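For clarity, and assuming the original study applied the conventional confusion-matrix definitions, these metrics can be written as: sensitivity = TP/(TP + FN), positive predictive value (precision) = TP/(TP + FP), accuracy = (TP + TN)/(TP + FP + TN + FN) and true negative rate (specificity) = TN/(TN + FP), where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively. On that assumption, a true negative rate of 3.3% would mean that only roughly 1 in every 30 genuinely false or misleading statements was correctly recognised as such, which is the basis of our concern.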
Questions also arise about the study’s limitations, for example, how the assessment procedure differs from other forms of user engagement such as real-time conversations or follow-up enquiries. Rather than evaluating only the initial correctness of a chatbot’s responses, integrating patient or caregiver feedback may help determine the impact on user experience and education. Furthermore, the study did not address potential biases in AI training data that could impair a chatbot’s ability to comprehend and communicate information about culturally sensitive aspects of palliative care. As with commercial algorithms that have been shown to exhibit racial bias, AI chatbots may rely on proxies such as healthcare costs rather than direct measures of patient need, undermining support for diverse populations.[3]
Future research could build on this study by investigating longitudinal assessments of user interaction with AI chatbots in real-world contexts. Examining how these tools affect patients’ knowledge, decision-making and emotional well-being over time would provide useful information for practical applications. Furthermore, an iterative design process, in which the chatbot’s algorithm learns from user interactions and adapts its material accordingly, may improve educational outcomes. The novelty of using artificial intelligence to combat misconceptions about palliative care lies not only in the technology itself but also in its potential to create customised, user-focused educational resources that meet the specific needs of patients and caregivers, thereby bridging gaps in communication about palliative care and, ultimately, improving access and understanding in an underserved area of healthcare.
Ethical approval
Institutional Review Board approval is not required.
Declaration of patient consent
Patient’s consent is not required as there are no patients in this study.
Conflicts of interest
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.
Financial support and sponsorship: Nil.
References
1. Debunking Palliative Care Myths: Assessing the Performance of Artificial Intelligence Chatbots (ChatGPT vs. Google Gemini). Indian J Palliat Care. 2024;30:284-7.
2. Patient and Carer Involvement in Palliative Care Research: An Integrative Qualitative Evidence Synthesis Review. Palliat Med. 2019;33:969-84.
3. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science. 2019;366:447-53.