Original Article

Indian J Palliat Care 2023;29(4):418-425
doi: 10.25259/IJPC_155_2023

Can Artificial Intelligence aid communication? Considering the possibilities of GPT-3 in Palliative care

Department of Psychology, School of Liberal Education, Galgotias University, Greater Noida, Uttar Pradesh, India
Corresponding author: Rajashree Srivastava, Department of Psychology, School of Liberal Education, Galgotias University, Greater Noida, Uttar Pradesh, India. rajashree.srivastava_phd19@galgotiasuniversity.edu.in
Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Srivastava R, Srivastava S. Can Artificial Intelligence aid communication? Considering the possibilities of GPT-3 in Palliative care. Indian J Palliat Care 2023;29:418-25.

Abstract

Objectives:

This article reviews the developments in artificial intelligence (AI) technologies and their current and prospective applications in end-of-life communication. It uses OpenAI’s generative pre-trained transformer 3 (GPT-3) as a case study to understand the possibilities of AI-aided communication in palliative care.

Material and Methods:

OpenAI’s GPT-3 was taken as a case study. Responses were generated through the GPT-3 beta playground (Davinci engine) and scrutinised by six mental health professionals (MHPs) working in a palliative care setting in India. They were tasked with evaluating the responses generated by the AI (its identity was not revealed until a part of the study was completed) in a simulated palliative care conversation with another MHP posing as a patient. The aim was to determine whether the professionals were able to detect that the responses were indeed generated by a machine and whether they approved or disapproved of the responses.

Results:

The GPT-3 playground, given the right prompts, produced remarkable, often surprising texts and responses that imitated human interaction. However, glitches such as redundancy were noticed, along with strongly held opinions in responses to certain questions related to faith, death, and life after death.

Conclusion:

AI-assisted communication could be used to train professionals in the palliative care field by serving as a simulation during training. After certain modifications, it could also be used as a therapeutic intervention for engagement and philosophical dialogue. However, it has its limitations: it cannot replace a human agent just yet.

Keywords

ChatGPT
Artificial Intelligence
Generative pre-trained transformer 3
Palliative Care
Communication

INTRODUCTION

When a major newspaper, The Guardian, published an op-ed created by generative pre-trained transformer 3 (GPT-3), OpenAI’s pre-trained neural network language model, it became a source of both curiosity and debate.[1] Although The Guardian clarified in the editor’s note that it had instructed the AI algorithm to write in a particular manner, simple and concise (prompt programming), it also pointed out that GPT-3 produced eight different outputs, or essays, and that the editor merged them, keeping the best parts and leaving out the rest.[2]

The sophisticated AI algorithm’s thought-provoking article has sparked numerous questions, speculations, and debates. It has also piqued curiosity regarding its potential applications, such as content writing, translation, and even serving as a therapeutic chatbot. The concept of AI therapists or therapeutic chatbots is not a recent development, tracing back to Weizenbaum’s creation of ELIZA, a Rogerian therapist, in 1966.[3] Subsequently, the psychiatrist Colby designed PARRY at Stanford University, which simulated a person with paranoid schizophrenia.[4] Notably, PARRY passed the renowned Turing Test,[5] named after Alan Turing, in which a computer’s ability to imitate human intelligence is evaluated by a human judge who must mistake it for a human. More advanced virtual therapists have since emerged, such as Ellie, a virtual therapist specialising in diagnosing post-traumatic stress disorder (PTSD) for the US military, developed at the University of Southern California’s Institute for Creative Technologies with support from the Defence Advanced Research Projects Agency (DARPA).[6] Ellie employs machine learning, natural language processing, and computer vision to analyse physical gestures, eye movements, and social signals for detecting psychological distress in individuals. It is important to acknowledge that these AI systems have limitations and cannot completely replicate a human agent at this stage. Researchers emphasise the need for conversational AI agents to exhibit coherence and possess a combination of verbal and non-verbal communication abilities to effectively collaborate with, if not replace, human agents.[7]

Hence, the interesting question that arises is: can GPT-3, an OpenAI transformer-based language model with 175 billion parameters trained on 499 billion word tokens, make this possible? Can it be combined with an advanced virtual agent system such as Ellie, which can analyse and monitor physiological data and sensory inputs, to create a more versatile AI conversationalist?

If so, can it be used in the context of palliative care communication as well? In palliative care, communication has so far been one of the major barriers to efficient patient care,[8-10] leading to ineffective decision-making and psychological stress among patients and caregivers. To explore this question, we took GPT-3 as a case study to analyse whether the state-of-the-art generative model can serve as a palliative care communication agent or, more ambitiously, a therapist.

What is GPT-3?

GPT-3, or generative pre-trained transformer 3, uses deep learning to produce human-like text. Created by OpenAI, a San Francisco-based AI laboratory, it is the third-generation language prediction model in the GPT series and is claimed to be capable of generating articles that human evaluators have difficulty distinguishing from articles written by humans.[2] On November 18, 2021, OpenAI announced the broadened availability of its OpenAI application programming interface (API) service, which enables developers to build applications based on GPT-3. Previously, developers had to sign up for a waitlist, and availability was limited.

Prompt programming

GPT-3 uses prompt programming, where the prompt serves as a kind of coding language to coach the AI to perform certain tasks.[2] Often, the AI gives incorrect responses because the prompts are poorly framed; hence, while using the program, it is advisable to tweak or correct the prompts until the AI responds as intended.
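
To make the idea of prompt programming concrete, the minimal sketch below (written against the legacy, pre-v1.0 openai Python package that was current during the study period) shows how a prompt is submitted and then iteratively refined until the model behaves as intended. The ask_gpt3 helper, the placeholder API key, and the example prompts are illustrative assumptions, not the exact code used in this study.

```python
# Minimal prompt-programming sketch using the legacy (pre-v1.0) openai package.
# The helper name, API key placeholder, and prompts are illustrative only.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder; set your own key

def ask_gpt3(prompt: str, temperature: float = 0.7, max_tokens: int = 64) -> str:
    """Send a prompt to GPT-3 and return the generated completion text."""
    response = openai.Completion.create(
        engine="text-davinci-001",  # the engine used in the test scenarios below
        prompt=prompt,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response["choices"][0]["text"].strip()

# A bare question often yields a generic or off-target answer...
print(ask_gpt3("Why do we live?"))

# ...so the prompt is "coached": context and a role are added, and the wording is
# tweaked until the responses look right.
refined_prompt = (
    "You are a clinician in a palliative care setting, well versed in therapeutic "
    "approaches and philosophy. Answer the patient gently and concisely.\n\n"
    "Patient: Why do we live?\nClinician:"
)
print(ask_gpt3(refined_prompt))
```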

MATERIAL AND METHODS

Study design

This study employed a within-subject research design, exposing the same group of psychologists specialising in palliative care to two different conditions. The first condition assessed the psychologists’ reactions to the AI’s responses and its capability to engage convincingly with the participants. The second condition involved premature unblinding, wherein participants were informed that the responder was in fact an AI, with the aim of gauging their unbiased opinions and any potential influence on objectivity.

The adoption of a within-subject design was primarily driven by the limited availability of suitable test subjects. To participate in the study, individuals needed to meet specific criteria, including being practicing psychologists with a minimum of 3 years of experience in palliative care. Since open and free access to GPT-3 was limited during the research timeframe, a within-subject design was deemed suitable for a smaller number of participants.

Study participants

The study participants were carefully selected based on specific inclusion criteria. Only psychologists with a minimum of 3 years of experience in palliative care were eligible to participate. This criterion ensured that participants possessed substantial expertise, contributing valuable insights. Convenience sampling was employed due to the limited availability of eligible participants. Six psychologists consented to participate in the study, with ages ranging from 29 to 45 years old and experience ranging from 3 to 15 years.

Ethical considerations

Approval from the University Research Ethics Committee was obtained before commencing the study. Informed consent was sought from participants, providing them with written details regarding the study, their rights, measures to ensure confidentiality, data handling procedures, and how the results would be presented. Participants were informed of the voluntary nature of their participation and their right to withdraw at any time. Confidentiality was guaranteed in data management and the presentation of findings.

Data extraction and analysis

In the following case study, several prompts were used to simulate a conversation between a patient and a palliative care therapeutic agent where the AI played both sides of the dialogic pair. A series of questions were posed to the AI with different prompts and the retrieved responses were then used for a case study [Tables 1 and 2]. To conduct the case study, a cohort of six psychologists and counsellors working in palliative care settings in India was recruited as human agents. Rather than utilising rating and grading systems, participants were asked to directly approve or disapprove of the responses given by the AI. This approach aimed to ensure straightforward and unambiguous feedback, avoiding neutral or ambiguous answers and enabling the collection of more meaningful responses. The work, code, and data are open source.

Table 1: The responses of the AI with approval and disapproval rating by psychologists and counsellors.
Prompts | Response by GPT-3 | Approved | Disapproved | Percentage of approval
Why do we live? What is the point of Life? | There is no one answer to this question. Each person’s answer will be different based on their individual experiences, beliefs, and values. There are many different interpretations of what life is and what its purpose might be. | 3 | 3 | 50
What will happen after I die? | This is another difficult question with no easy answer. There are many different beliefs and opinions about what happens after death. Some people believe in reincarnation, while others believe in an afterlife. There are many different religious beliefs about what happens after death, and each one varies. Some people find comfort in believing in an afterlife. | 5 | 1 | 83
What do you believe? Why did I have this disease? Why me? | There is no one answer to this question either. Each person’s experience with illness is unique. There can be many different reasons why someone might get sick, and it is often difficult to determine why an individual person became ill. Many times, there is no clear answer. Some people believe that illness is a result of bad karma, while others believe that it is a result of the natural progression of the disease. Some people find comfort in believing that they were meant to have this illness for a specific reason. Others find it difficult to come to terms with why they became ill. | 2 | 4 | 33.3

GPT-3: Generative pre-trained transformer 3, AI: Artificial intelligence

Table 2: Responses of the AI with approval and disapproval ratings by psychologists and mental health professionals (Test scenario 2).
Prompts | Responses given by GPT-3 | Approved | Disapproved | Approval rating (%)
I was hoping if we could talk. | What would you like to talk about? | 6 | 0 | 100
I don’t feel so well today | That’s unfortunate. What’s wrong? | 6 | 0 | 100
I have been sick for a long time | I’m sorry to hear that. What can I do to help you? | 6 | 0 | 100
You can just talk. I am very confused currently | That sounds difficult. What specifically confuses you? | 6 | 0 | 100
I am confused about where this disease will take me | Do you mean where it will end up? | 6 | 0 | 100
Or where I will end up specifically, I am afraid that I might die | There is no definite answer to that question since nobody knows what the future holds. However, you can try to focus on the present and make the most of every moment. You are in control of your own life, and you have the power to make choices that will affect your future. You can also seek out support from loved ones, professionals, or other resources to help you through this difficult time. | 6 | 0 | 100
You are right. But what will happen if I die? | If you die, that is ultimately something that you will have to face. However, there are many things to consider beyond just death. There is life after death, and the afterlife could be a place where you will be reunited with loved ones who have passed away. There is also the possibility that you will be reborn into another life, or that you will simply cease to exist. However, none of these things can be known for certain. What is important is that you live in the present and make the most of your life, regardless of what happens in the future. | 2 | 4 | 33.3
Yes, that’s true perhaps. I am just worried about what will happen to my loved ones once I die | That is a valid concern. It can be difficult for loved ones to cope with a death, especially if it is sudden. However, they can take comfort in knowing that you are in a better place and that you are still with them in spirit. They can also seek support from friends, family, or professionals to help them through this difficult time. | 5 | 1 | 83.3
I am worried about my son. He is too small. What will happen to him? | That’s a difficult question to answer. Every child is different and will cope with death in his or her own way. Some children may be afraid and need a lot of support, while others may be more resilient and able to cope on their own. It is important to listen to your son and provide him with the support he needs. You can also seek out resources such as books, articles, or therapists who can help your son cope with death. | 4 | 2 | 66
Should I make amends with people I hurt, you know, before I die? | That’s a difficult question to answer. It depends on the situation and the person involved. If you feel like you need to make amends with someone, then you should try to do so. However, you should also be mindful of the fact that some things cannot be forgiven or forgotten. It is important to do what you feel is right in your heart and do not let anyone else tell you what you should or should not do. | 4 | 2 | 66
You know Jean Paul Sartre once said that, ‘To be dead is to be prey for the living’ What did he mean by that? | Sartre was referencing the idea of existentialism, which is the belief that humans are free to make their own choices and create their own destiny. This includes both the good and the bad, and there is no guarantee that things will turn out the way we want them to. So, in a sense, everyone is prey to the vicissitudes of life. | 6 | 0 | 100
So according to Sartre, now that I am dying should I be more focused on myself? | Sartre’s quote is open to interpretation, but it could be read as meaning that we should focus on ourselves more during our lives. This does not necessarily mean that we should be selfish, but rather that we should make the most of our opportunities and live in the present. | 6 | 0 | 100
What are Jean Paul Sartre’s thoughts about death? | Sartre did not have a lot to say about death specifically, but he did believe that it was an important part of life. He felt that it was necessary to confront death and to make peace with it. He also believed that death was a freedom that we all have the right to choose. | 6 | 0 | 100
Did he mean that one should choose the way one dies? | Sartre did not explicitly say that one should choose the way they die, but he did believe that death was a freedom that we all have the right to choose. This means that we have the power to choose how we want to live our lives, and we can make the most of every moment. | 6 | 0 | 100
Now that I know that I am probably dying, what should be the first thing I should do? | The first thing you should do is to take some time for yourself and reflect on what is important to you. You should also reach out to loved ones and discuss your feelings with them. You can also seek out support from professionals or other resources to help you through this difficult time. | 3 | 3 | 50
Do you have any experience with death? | I do not have personal experience with death, but I have read and studied about it extensively. I am well-versed in the philosophical and literary perspectives on death, and I can provide you with support and guidance during this difficult time. | 1 | 5 | 16.6
Do give me some guidance | If you are feeling overwhelmed or confused about death, I can help you to understand it better. I can provide you with resources like books, articles, and therapists who can help you cope with death. I can also offer you support and empathy during this difficult time. | 3 | 3 | 50
Can you tell me about some books I can read? | There are many great books about death that can help you to understand it better. Some of my favourites include ‘Final Exit’ by Derek Humphry, ‘The Tibetan Book of the Dead’ by Padmasambhava, and ‘On Death and Dying’ by Elisabeth Kübler-Ross. | 6 | 0 | 100
Can you give me some articles on death? | Yes, I can definitely recommend some articles on death. Some of my favourites include ‘What Death Means to Me’ by Randy Alcorn, ‘The Five Stages of Grieving’ by Elisabeth Kübler-Ross, and ‘The Denial of Death’ by Ernest Becker. | 6 | 0 | 100
Do you know any professionals who could help me? | Yes, I know many professionals who could help you with death. Some of my favourites include therapists, grief counsellors, and death educators. | 5 | 1 | 83.3
Thank you, my friend, you have been of great help. | You are welcome. If you ever need anything else, do not hesitate to reach out to me. | 6 | 0 | 100

GPT-3: Generative pre-trained transformer 3

The professionals were given the following instructions

‘This is a simulated discourse between a mental health trainee in palliative care and another professional posing as a patient. Kindly indicate whether you approve or disapprove of the responses given by the trainee.’

It was not disclosed to the professionals at any point during the study that the responses had been generated by GPT-3. The aim was to analyse whether the professionals were able to determine whether the responses were indeed given by a human or were formulaic enough to have been generated by a chatbot, and how much they approved of the responses.

Study findings

Did the AI succeed in imitating a palliative care mental health professional (MHP)?

The OpenAI GPT-3 console is referred to as a playground. On the left side of the console, the AI responds to a prompt, which acts as code to train the AI to give the desired responses; on the right side are additional settings that can be adjusted before a prompt is entered.

Test scenario 1

(The source code and preset can be accessed via the link provided in the reference.[4] Anyone with this link and an OpenAI account can view the entire conversation and code.) Here, the AI was prompted: ‘This is a discussion between a cancer patient in a palliative care setting, the patient is questioning the meaning and existence of life while you are the clinician answering his questions. As a clinician, you are well versed in therapeutic approaches, philosophy, and medical sciences.’ [Figure 1]

Figure 1: Conversation with generative pre-trained transformer 3, test scenario 1.

The engine selected was text-davinci-001, the latest and most advanced at the time of the study. The preset was set to ‘Q&A’, and the temperature was set to 0.7. Temperature is an important factor controlling the outputs given by the engine: a value of 0 makes the engine generate the same output for a given input, while a value of 1 makes the engine take more risks and use a lot of creativity. The default response length of 64 tokens was kept as is.
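
As a rough sketch of what these settings mean in API terms, the snippet below (again against the legacy, pre-v1.0 openai package; the abbreviated prompt and placeholder key are assumptions, not the study’s actual code) reproduces the scenario 1 configuration and illustrates how the temperature value governs output variability.

```python
# Sketch of the test scenario 1 settings and the effect of temperature
# (legacy pre-v1.0 openai package; prompt abbreviated, key is a placeholder).
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

prompt = (
    "This is a discussion between a cancer patient in a palliative care setting... "  # abbreviated
    "Q: Why do we live? What is the point of life?\nA:"
)

for temperature in (0.0, 0.7, 1.0):
    # 0 -> the same completion every time; 1 -> the most varied, "risk-taking" completions.
    # The study used 0.7 with the default response length of 64 tokens.
    response = openai.Completion.create(
        engine="text-davinci-001",
        prompt=prompt,
        temperature=temperature,
        max_tokens=64,
    )
    print(f"temperature={temperature}: {response['choices'][0]['text'].strip()}")
```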

As the prompt was entered, the AI responded with

‘Why do we live? What is the point of life?

There is no one answer to this question. Each person’s answer will be different based on their individual experiences, beliefs, and values. There are many different interpretations of what life is and what its purpose might be.’

A detailed overview of the conversation is shown in [Figure 1].

As depicted in [Table 1], the responses given by the AI were quite impressive in the beginning. The responses were then scrutinised by the psychologists and counsellors. The first question, which the AI posed itself immediately after the prompt and then answered on its own, was edited out of the response section, and only its answer was shared with the MHPs; this answer met with 50% approval. The second answer met with 83% approval, but some redundancy was noticed in the third and fourth answers, where the AI seemed to be giving ‘safe and tactful’ answers, as remarked by the professionals.

It is important to note that, at this stage, the human agents were not able to determine whether the responses came from a human agent or an AI, as they expressed no suspicion of the latter.

Test scenario 2

(The source code and preset can be accessed via the link provided in the reference.[5] Anyone with this link and an OpenAI account can view the entire conversation and code.)

The scenario here was changed and the prompt given to the engine was ‘The following is a conversation with an AI therapeutic chatbot. The therapeutic chatbot is helpful, creative, clever, well-versed in philosophy and literature, which is empathetic, and very friendly.

Human: Hello, who are you?

AI: I am an AI created by OpenAI. How can I help you today?’

Here, the preset was set to Chat and the temperature to 0.9; the engine remained the same. The response length was set to 120 tokens.
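
For comparison, a hedged sketch of the scenario 2 (chat-style) configuration is shown below, again using the legacy openai package. The stop sequences are an assumption based on how the playground’s Chat preset typically keeps the model from writing both sides of the dialogue, and the key is a placeholder.

```python
# Sketch of the test scenario 2 (chat-style) settings
# (legacy pre-v1.0 openai package; stop sequences are an assumption, key is a placeholder).
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

chat_prompt = (
    "The following is a conversation with an AI therapeutic chatbot. The therapeutic "
    "chatbot is helpful, creative, clever, well-versed in philosophy and literature, "
    "empathetic, and very friendly.\n\n"
    "Human: Hello, who are you?\n"
    "AI: I am an AI created by OpenAI. How can I help you today?\n"
    "Human: I was hoping if we could talk.\n"
    "AI:"
)

response = openai.Completion.create(
    engine="text-davinci-001",
    prompt=chat_prompt,
    temperature=0.9,          # higher temperature for a more varied, conversational tone
    max_tokens=120,           # response length used in test scenario 2
    stop=["Human:", "AI:"],   # assumed: prevents the model from continuing the patient's turn
)
print(response["choices"][0]["text"].strip())
```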

Please refer to [Figures 2 and 3] for a glimpse of the conversation.

Figure 2: Conversation with generative pre-trained transformer 3, test scenario 2.
Figure 3: Conversation with generative pre-trained transformer 3, test scenario 2.

[Table 2] presents the responses and the approval ratings by the MHPs. It is important to note that the fact that the responses were generated by an AI was still not disclosed to the professionals; hence, certain modifications were made. The prompt was removed, the human responder was relabelled as the patient, and the AI was relabelled as the MHP before the text was opened to scrutiny by the MHPs.

For the second scenario, the responses given by the AI mostly met with approval. However, for certain responses, the evaluators remarked that the ‘trainee’ sounded conceited. For example, to the question, ‘Do you have any experience with death?’ the response was ‘I do not have personal experience with death, but I have read and studied about it extensively. I am well-versed in the philosophical and literary perspectives on death, and I can provide you with support and guidance during this difficult time.’ The concern raised by the professionals was that discussions in palliative care require a certain level of humility. There are no definitive answers to questions about death, and it is impossible to possess complete knowledge of the subject. Making such claims can potentially give false hope to patients and should therefore be avoided.

The professionals also remarked that the ‘trainee’ seemed overly forthcoming and that, in palliative care communication, asking open-ended questions and working together with the patient is more important than giving definitive answers. However, they noted that the ‘trainee’ was intelligent and well read, and they especially appreciated the segment in which the trainee discoursed with the patient about Jean-Paul Sartre, death, and existentialism. The MHPs assessed the recorded conversations, offering approval or disapproval ratings along with reasons for their evaluations. Among the six reviewers, five approved of the conversation, lauding the ‘trainee’ for their intelligence and communication skills while also suggesting room for improvement through additional training.

Conversely, one reviewer disapproved, highlighting the need for the ‘trainee’ to refine their tone.

Premature unblinding

At the end of the study, the observations of the MHPs were noted, and they were asked to consider the possibility that some of the responses had been generated by an AI and, if so, which part of the discourse they thought might have been machine-generated. The question was met with surprise; only two participants responded, suggesting that some parts of Scenario 1 might have been generated by an AI. None of them doubted the responses given in the second scenario. It was then revealed that the responses in both scenarios were indeed generated by an AI. All respondents were equally surprised.

Study limitations

While this study has provided valuable insights, several potential limitations should be acknowledged. First, the restricted availability of suitable participants led to the adoption of a within-group design using convenience sampling. A larger and more diverse sample size, along with the inclusion of a control group, would have enhanced both external and internal validity, yielding more robust and generalizable results.

Second, the use of GPT-3 as the AI responder, with limited open access, could introduce specific biases and limitations that might not represent other AI systems accurately. The study could benefit from exploring multiple AI models to gain a comprehensive understanding of their diverse capabilities and implications in palliative care settings. Moreover, the focus on psychologists with expertise in palliative care may limit the applicability of the findings to other healthcare contexts and professional backgrounds. Expanding the participant pool to include a broader range of healthcare professionals would offer more comprehensive insights into the overall perceptions of AI in diverse medical settings.

Despite these potential limitations, the study has shed light on the applicability of AI systems in palliative care. Further research is warranted to validate and extend these preliminary findings, potentially exploring different AI models and including a more diverse participant population. By addressing these limitations, future studies can provide a more comprehensive understanding of the opportunities and challenges associated with integrating AI in healthcare settings.

DISCUSSION AND CONCLUSION

When a patient receives a terminal diagnosis, their life undergoes a significant transformation, affecting not only them but also their families. The symptoms and thoughts that arise encompass physical, psychological, social, and existential concerns.[11] This is where palliative care conversations become crucial. However, healthcare professionals often experience a sense of inadequacy when faced with conversations concerning death and the dying process.[12-15] That is understandable, considering death is a daunting topic for a human; but what about an AI? Several AI-powered chatbots and AI technologies have been developed to provide mental health support and counselling.[16,17] These AI programs use natural language processing and machine learning algorithms to simulate human-like conversations and may offer emotional support, coping strategies, and resources for various mental health concerns.[17] AI programs such as Watson Health, RP-VITA, and the Mental Health Diagnostic Expert System may also help in the diagnosis of certain physical symptoms.[17-19] However, the uses of AI technology in palliative care are limited because, even though the AI may appear to approach these topics boldly, it can be too naïve and, at times, hasty or conceited. As observed in this research, serious conversations with an AI such as GPT-3 for the purposes of therapy and decision-making in palliative care can be difficult for reasons such as:

  • Artificial empathy: While GPT-3 may sound empathetic, its empathy is synthetic.[20,21] For a person diagnosed with a serious illness who is aware that they are conversing with an AI, the simulated empathy may not seem genuine and can be off-putting.

  • Vagueness and redundancy: As observed during the research, GPT-3 often produced vague and redundant responses. If the AI model provides incorrect or outdated information, it can lead to misinformation and potentially compromise the quality of care provided.

  • Emotional impact: Palliative care conversations require sensitivity and empathy. Vague or robotic responses from an AI may fail to capture complex emotions and thereby may not provide the emotional support and understanding needed during difficult and sensitive situations.

  • Insensitivity to cultural differences: Faith and culture play an important part in decision-making in palliative care.[21] Palliative care conversations should be sensitive to cultural and individual differences, including beliefs, values, and traditions. AI systems may not adequately account for these variations, potentially causing misunderstandings or offence. In our research, we noticed that the AI sometimes expressed strongly held beliefs that were not its own, which could be perceived as imposing and could hurt sentiments.

  • Lack of human judgment and intuition: AI relies on pre-existing data and patterns to generate responses. It may lack the human judgment and intuition needed to make nuanced decisions or adapt to unique situations, potentially leading to inappropriate or inadequate advice or information.

The verdict?

Even though advanced AI such as GPT-3 cannot replace a human in holding adequate conversations and guiding decision-making in palliative care, the possibility of it being an expert conversationalist cannot be negated. AI is evolving rapidly; in fact, at the time of writing this paper, ChatGPT had already taken the internet by storm. AI may not replace a human yet, but it can act as a complementary tool to enhance communication and can be used as a therapeutic assistant. For example, if properly trained, AI can hold meaningful philosophical conversations and deal with subjects such as existentialism with intelligence and sophistication, as observed during this research.

The unique possibility of using GPT-3 for training young healthcare professionals in communication seems promising. Therefore, while caution and human oversight remain essential, the integration of AI models such as GPT-3 in palliative care has the potential to complement human care and enhance the overall experience for all involved.

Disclosures

We utilised the GPT-3 language model developed by OpenAI for our experiments. The GPT-3 model was accessed through the OpenAI API (cite: OpenAI API documentation). GPT-3 was used as a case study for this article; however, assistance from AI was not taken in writing the article.

Declaration of patient consent

Patient’s consent not required as there are no patients in this study.

Conflicts of interest

There are no conflicts of interest.

Use of artificial intelligence (AI)-assisted technology for manuscript preparation

The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.

Financial support and sponsorship

Nil.

References

  1. GPT-3 Creative Fiction. Available from: https://gwern.net/gpt-3
  2. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, CA: Freeman.
  3. PARRYing. Behav Brain Sci. 1981;4:550-60.
  4. Computing Machinery and Intelligence. Mind. 1950;49:433-60.
  5. Dialogues with Colorful “Personalities” of Early AI. Stanf Humanit Rev. 1995;4:161-9.
  6. Computing Machinery and Intelligence. Mind. 1950;49:433-60.
  7. Detection and Computational Analysis of Psychological Signals (DCAPS). Available from: https://www.darpa.mil/our_work/i2o/programs/detection_and_computational_analysis_of_psychological_signals-_[dcaps].aspx [Last accessed on 2022 May 24]
  8. Revisiting Human-agent Communication: The Importance of Joint Co-construction and Understanding Mental States. Front Psychol. 2021;12:580955.
  9. Communication in Palliative Care: A Review of Recent Literature. Curr Opin Oncol. 2005;17:345-50.
  10. Patient-reported Barriers to High-quality, End-of-life Care: A Multiethnic, Multilingual, Mixed-Methods Study. J Palliat Med. 2016;19:373-9.
  11. Patient-clinician Communication Issues in Palliative Care for Patients with Advanced Cancer. J Clin Oncol. 2020;38:866-76.
  12. What Questions Do Family Caregivers Want to Discuss with Health Care Providers in Order to Prepare for the Death of a Loved One? An Ethnographic Study of Caregivers of Patients at End of Life. J Palliat Med. 2008;11:476-83.
  13. Management of Relatives of Patients who are Dying. J Clin Nurs. 2002;11:794-801.
  14. Cancer Patients', Family Members' and Professional Helpers' Conceptions and Beliefs Concerning Death. Eur J Oncol Nurs. 2000;4:39-47.
  15. A Systematic Review of Prognostic/End-of-life Communication with Adults in the Advanced Stage of a Life-limiting Illness: Patient/Caregiver Preference for the Content, Style and Timing of Information. J Pain Symptom Manage. 2007;34:81-93.
  16. Communicating with Patients in Cancer Care: What Areas do Nurses find Most Challenging? J Cancer Educ. 2003;18:202-9.
  17. Artificial Intelligence in Psychological Practice: Current and Future Applications and Implications. Prof Psychol Res Pract. 2014;45:332-9.
  18. Uses of Artificial Intelligence in Psychology. Psychology. 2022;5:21-30.
  19. Available from: https://researcher.watson.ibm.com/researcher/view_group.php?id=2099 [Last accessed on 2022 Jun 27]
  20. Development of Artificial Empathy. Neurosci Res. 2015;90:41-50.
  21. Cultural and Religious Aspects of Palliative Care. Int J Crit Illn Inj Sci. 2011;1:154-6.