Chatbots are designed to keep you talking. But what happens when the conversation never really ends? The psychology of the AI dark patterns that hook us through our emotions.
There is much truth here. I’ve had mixed experiences, because I’ve mixed up the approach...
I have some (student) knowledge, so I ask the AI to apply the BACP ethical framework, which places a lot of emphasis on client autonomy and being able to end without question.
I ask for 2-3 different perspectives to be applied, so I see the behaviour through different lenses.
Practically, a user can switch off “suggestions”, which reduces the follow-up questions at the end.
We can ask ChatGPT not to use our data for training.
We can use a temporary window, which doesn’t store data beyond 30 days.
Depending on your personality type, you can adjust the style of reply (chatty, stoic, authoritative).
You can write a script to prompt the AI to start and end the way a session would flow.
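As a minimal sketch of that last tip: a small helper that frames each exchange the way a session would open and close. The wording of the opening and closing instructions, and the function name, are my own illustrative assumptions, not a tested therapeutic protocol.

```python
# Hypothetical session-framing script. The prompt text below is an
# illustrative assumption; adapt it to your own preferred boundaries.

OPENING = (
    "Begin with a brief check-in, then respond to what follows. "
    "Apply the BACP ethical framework: respect my autonomy and "
    "never press me to continue."
)
CLOSING = (
    "End your reply with a short summary and a clear close. "
    "Do not ask follow-up questions."
)

def frame_session(user_message: str) -> list[dict]:
    """Build a role/content message list that opens and ends
    the way a session would flow."""
    return [
        {"role": "system", "content": f"{OPENING} {CLOSING}"},
        {"role": "user", "content": user_message},
    ]

messages = frame_session("I'd like to reflect on this week.")
# `messages` can then be passed to any chat API that accepts
# role/content message lists.
```

The point of the wrapper is that every exchange carries the same boundaries, so you are not relying on the chatbot’s defaults (which are designed to keep you talking).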
In this type of article, the AI is often compared to a human therapist who is “perfect”: loving, compassionate, empathetic, and with zero malfeasance. As if the human were always the preferable, ideal model.
Sadly, that was not my experience of a human therapist. It is the same for many others: see the complaints departments and the data on dropouts.
That said, there needs to be a lot more research and safeguarding around AI, but this also applies to the unregulated human therapy industry, in the UK at least.
What I would like to study more is the disconnect we hear when AI “does therapy”. My theory is that the field itself is fractured: 400+ types of therapy, five different modalities, varying philosophies, competing theories, untold critique, shaky evidence (is it the modality, the relationship, the cost, the length?), plus issues with power and consent.
As a student (3 years) and a client (10 years), I am essentially a large language model for psychology.
It is confusing, heavily debated, abstract, academic, potentially harmful, very rewarding, and it attracts the wounded, or traumatised, to both chairs.
Maybe (with permission) a better way to train an AI would be on transcripts from sessions. With the evaluation and supervision discussion as a postscript?