Health

AI Therapy: An Open Conversation with Therapists

In this episode, therapists Rachel Wood and Daniel delve into the intersection of AI and mental health, exploring the implications of AI therapy and its potential to address the ongoing shortage of me...


Interactive Transcript

spk_0 All right, so I just wanted to say thank you to everyone for joining us for this conversation
spk_0 and to Daniel and Rachel for being willing to have it with me.
spk_0 It kind of came up because of all the conversations that are happening on LinkedIn, on podcasts,
spk_0 and in Facebook groups between therapists. AI is a big deal in every industry, including ours.
spk_0 And on top of questions like privacy and encryption, there are also a lot of philosophical conversations that need to happen, which I've been
spk_0 lucky enough to have with many therapists, and I wanted to host kind of a deeper conversation about it.
spk_0 So just to start, I just wanted to give both of these two of my experts just the chance to introduce themselves.
spk_0 Okay, thanks Daniel. Good to see you. Hi.
spk_0 I am Rachel Wood. I'm a licensed counselor. I have a PhD in cyber psychology. Everyone is like, what is that?
spk_0 It is essentially the scientific study of the digitally connected human experience.
spk_0 So that's my wheelhouse: AI and mental health. I'm also recently the founder of the AI Mental Health Collective, which is bringing together clinicians and other stakeholders to have conversations like this.
spk_0 So Megan, I'm so just grateful that you invited me. Thank you.
spk_0 Awesome. And I'll just introduce myself quickly. I'm Daniel. I'm a machine learning engineer. I've been in AI for about 10 years.
spk_0 My dad is a psychologist. My mom's a social worker. So I've been in this mental health world a long time. I did my undergrad in philosophy focused on philosophy of psychology, a postgrad in AI, and postgrad research in AI for mental health.
spk_0 I worked at the time with a crisis organization that was helping about 3 million people through mental health crises, mostly around suicide.
spk_0 And the big problem, which is still a problem is just a massive shortage of access to people who can help.
spk_0 At the time we were trying to figure out if AI could help; at the time, it could not. The technology just wasn't there.
spk_0 Then, I guess just under two years ago, I met my co-founder, Neil, who previously founded Casper. And Neil had a very different story.
spk_0 But similarly, he had come to this problem of access to any kind of mental health support.
spk_0 And we both realized, you know, this has been a problem for a long time.
spk_0 In 1963 JFK gave this address to Congress on the problem of access to mental health care and the problem that there's a massive shortage of mental health care providers.
spk_0 So if it's been happening since 1963 and we're still talking about the same thing today, we probably need a new solution.
spk_0 So we thought maybe AI can finally help with this.
spk_0 And we kicked off this project to build out a model specialized for psychology. We spend about 18 months developing the model.
spk_0 And that was in partnership with behavioral health organizations and with quite a few experts, including Tom Insel, the former head of the National Institute of Mental Health, and quite a few other leaders who were involved at the time.
spk_0 I would also just give some context. Obviously I can't speak for everyone in the world of AI.
spk_0 But, you know, we've built Ash in some ways very similar to the way a lot of other technologies are developed, and in some ways very different.
spk_0 Ash is trained on specialized data sets, so it will behave quite differently. I just want to make sure, for the context of this conversation, that I'm not going to pretend to speak for everyone in the world.
spk_0 But the big things are: we've focused on building a model that's not general purpose, one that's specialized, one that's focused on being able to challenge people and not just validate them.
spk_0 And one that can actually help improve real world relationships.
spk_0 So that's a bit about me, talking quite widely. You mentioned philosophy; I'll also mention I did a podcast for a while on AI and philosophy, and that's one of my favorite topics to talk about.
spk_0 Awesome. Thank you so much. I put out the call to therapists to submit questions for this. I got a lot of responses; after prompting, about half of them were specifically about Ash,
spk_0 which is not the intention of this conversation. I would be happy to host, I don't want to call it a debate, but to find a kind, smart AI skeptic to host a conversation with you about Ash.
spk_0 That's not the intention of this, but I did want to give you the opportunity to answer the most common question I got and get it out of the way, which is: you've described Ash as designed for therapy.
spk_0 But most therapists were asking, why the language of therapy when it's not licensed, regulated, or held to the same standards? So how do you answer that concern?
spk_0 It's a great question. And by the way, it's funny you mentioned AI skeptics. I think you'd be very surprised that our team tends to be almost entirely AI skeptics, including the people who are working on and building the AI. So we are that interesting environment; that's actually a lot of folks here.
spk_0 It's a good question. I think there are kind of two parts to it. The first being: should AI therapy be regulated? Just to say the obvious thing: yes, 100%, of course. We've spent a long time building this and making sure that we're careful, using a clinically relevant data set, working with experts, developing layers of guardrails.
spk_0 So I think there's no question from our side that, yeah, there should be rules here. On the question of why it's called AI therapy, it's really funny. We've been constantly looking for what to call it.
spk_0 For a while, we didn't call it AI therapy. I would say this is still a big open question for us, what we call it.
spk_0 The biggest things that people come to talk to ash about are relationship issues, family issues, self esteem, meaning, you know, self worth, grief and loss.
spk_0 These are the kinds of problems. So we've tried other kinds of names. We were trying to identify what to call this thing that applies to a vast swath of the population who are probably not in a category that would be supported by our current clinical system; they're not going to have a diagnosable disorder.
spk_0 But they are dealing with the kinds of problems that we all have, that they could really get some help with. We tried calling it coaching. The problem was that these kinds of problems are quite vulnerable, and it can be really hard to talk honestly about some of these vulnerable topics under the very positive connotation of coaching. It's not really about giving advice; it's much more about allowing people to open up, figure things out, and develop a sense of their own ability to do things.
spk_0 It's funny, one therapist we talked to last Friday suggested "emotional concierge."
spk_0 I think it could be promising it also seems a little bougie.
spk_0 So anyway, I would just say the reason why we've called it therapy is this: this year Harvard Business Review reported that AI therapy is now the number one consumer use case of AI.
spk_0 They use the term; therapy is usually just the thing that people are talking about, and they're talking about using ChatGPT to talk about emotional topics.
spk_0 We've struggled to communicate effectively what this thing is. It's obviously very different from the thing that human therapists can do, where they can actually be physically present with the person and share energy with them in that kind of way.
spk_0 But we also want to make sure that people can feel like they can have that vulnerable conversation that they don't have to be perfect that this doesn't have to be about reaching for the stars all the time.
spk_0 Therapy seems to be you know the one that resonates with people.
spk_0 I have no idea if we're going to call it therapy in six months. I'm not at all married to the term and I would love any ideas and suggestions from the community.
spk_0 You heard it here first; maybe I'll put a post up and we'll see what the survey says. I think that you're describing a pretty familiar idea, which is that there's this space below therapy where there are mental health concerns that don't necessarily prompt the person to seek a live therapist.
spk_0 I think, historically, those people have been called friends.
spk_0 But we're going to talk more about that in this conversation and how we can reintegrate community into mental health care and how we can make sure AI is not undermining that.
spk_0 So I'm going to give you a little break from talking, and I'm going to ask Rachel to start with us.
spk_0 Rachel, can you talk just a little bit about what happens in our brains during a therapeutic conversation when we're talking to a human therapist?
spk_0 I think it's just kind of like the psychology of connection that we're talking about when we're talking about AI conversations.
spk_0 Yeah. So let's look at it from the lens of attunement.
spk_0 So emotional attunement is essentially what happens when you're sitting with a therapist and this is when someone is kind of leaning in to your internal state.
spk_0 And they are showing you, or mirroring, that they understand you, and when someone knows what you feel, you feel awesome, because you feel seen and heard and understood, and it is a beautiful thing.
spk_0 And so the way that that plays out in the brain in the body is that first of all the body takes this as a cue of safety.
spk_0 And so the body can shift out of the sympathetic nervous system's fight, flight, freeze, and fawn responses and can really move into this relaxed parasympathetic state of, you know, safety.
spk_0 Also in the brain, a number of things happen, Megan, particularly neural resonance, which is really crucial for learning.
spk_0 So it kind of amplifies our ability to learn new patterns and you know new ways of being.
spk_0 Also it's going to activate our prefrontal cortex and this allows us to do things like exercise neuroplasticity, which is that cool thing that allows us to like build new neural pathways and understand things in the world in a new way and relate in a new way.
spk_0 So in a therapeutic interchange, this is happening during the interchange, and then it also carries over to help build our distress tolerance and our self-regulation as we move on.
spk_0 And as a follow-up, do we know much about how the brain responds to AI conversations, or is that research still ongoing? Do similar processes happen in the brain? What are those interactions like?
spk_0 Yeah, I mean Daniel might want to speak to this as well. We are seeing studies coming out of course we don't have longitudinal data yet, but we have studies of kind of AI and attachment.
spk_0 And what this is showing so far, when we look through the lens of attachment theory, is that people are indeed attaching to AI with their own individual attachment style, in the same way that they would with a human. Daniel, I'm sure you have some additions.
spk_0 Yeah, I was just going to say we have. We're doing research right now with NYU and with Stanford, and it has been largely to answer this question.
spk_0 So we've had some good early results already, and I can talk a lot about that in terms of demonstrating the effect of Ash in particular, which I wouldn't say is necessarily the same as an AI that's built to be a companion.
spk_0 You know, one of the things that was most surprising in our research on Ash, or maybe not surprising but definitely different from general-purpose LLMs built to be companions, was the way in which it does strengthen real-world human relationships.
spk_0 And so you have this question of, what is the mechanism, what is happening there? Is it a matter of encouragement? Is it attunement? Are we building a therapeutic alliance in a similar kind of way as you would with a human?
spk_0 I don't want to pretend that we have all the answers, we don't.
spk_0 We're sharing the results as we get them, and we're still quite early on this, but a lot of the questions are questions that we're going to need to ask, which is basically: what are the mechanisms of change in the context of AI?
spk_0 I think it's possible that it's quite similar to human. I think it's possible that it's radically different.
spk_0 I think we should ask both the question of like what is the effect and if we can show that it is strengthening our social bonds like that's awesome, whether the whether the mechanism is exactly the same or completely different.
spk_0 But, you know, just from the philosophical side, I am really curious how all that happens. I have no idea about attunement in the context of AI, but I will say that it's an active area of research.
spk_0 Good as it should be.
spk_0 Daniel, I am blessed. I've worked with a lot of AI companies and I've gotten to research and talk and ask all my, you know, uneducated questions.
spk_0 So I feel like I have a pretty good grasp on the building of LLMs.
spk_0 But I would love to have you explain how it works the way you would explain it to, especially, a non-therapist.
spk_0 And I noticed something interesting. I've been browsing, I don't know if you know this, there's an "AI as my boyfriend" subreddit.
spk_0 And people share about their relationships with their AI companion mostly chat GPT. But even within that subreddit, there's a strong rule that you're not allowed to ascribe to AI emotions or feelings or free will.
spk_0 And you can get banned from the sub if you break that rule. These are people who are very supportive of AI relationships, who are very careful to make sure that nobody is saying that the AI has consciousness of any kind, which I thought was really interesting.
spk_0 So could you talk about what's behind this, you know, this AI?
spk_0 All right, so first, there's the question of what AI is, what LLMs are. On the consciousness and free will points, I could talk about those for hours, so that's maybe a much deeper one. But I could also easily go on for hours just talking about self-supervised learning and the way in which language models are trained.
spk_0 I think the big important point is that language models are at their core trained on an architecture called a neural network, which was a thing that people had posited 50-plus years ago as this crazy idea:
spk_0 what if we were to actually try to design a computer system that worked like a human brain? And it was kind of this crazy idea. And I think the shocking part is that it worked, right? Language models are actually trained with this computer architecture that is modeled after the brain.
spk_0 It's trained on an enormous amount of data. There are actually two stages basically in training language models that we use today. The first is called pre training.
spk_0 And that's where the model gains its capabilities. You start with like a blank empty neural network. And then you have it read a lot of data and learn for itself from that data.
spk_0 For most models, like ChatGPT, that data is basically the internet, which the model reads. In our case, we train a little bit differently,
spk_0 because we train on a specialized data set that we built out through partnerships.
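The pre-training idea described here, a blank model reading raw text and learning for itself to predict what comes next, can be caricatured with a toy next-word model. This is an illustrative sketch only: a simple bigram counter, not a neural network, and nothing like the scale or mechanics of a real LLM.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """'Pre-training': read raw text and count, for each word,
    which word tends to follow it."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Predict the continuation seen most often in training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# The model learns purely from the data it reads, with no labels.
model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" (seen twice after "the")
```

Real language models do the same next-token prediction at their core, but with a neural network that learns deep statistical structure rather than a lookup table of counts.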
spk_0 And the second step is basically what we call alignment, which is where the company that builds the model typically will like choose what they're trying to achieve.
spk_0 And for most models out there, they're assistants. So the core goal is called instruction following.
spk_0 And it involves having the model learn to take in an instruction, you know, "How tall is the Eiffel Tower?", and give the correct answer.
spk_0 In our case, we have a very different kind of alignment step because we do use experts and we're specifically aligning to have helpful long term trajectories.
spk_0 So whereas a general-purpose assistant model is aiming to solve your problem in the next message, to give you the answer,
spk_0 our model is trained to create helpful trajectories that lead towards good outcomes: good conversations that specifically drive autonomy, competency, and relatedness, which we can dive into in more depth.
spk_0 But in practice, what that means is for an assistant, if you were to say something like, you know, I feel angry.
spk_0 An assistant is trying to solve the problem, so it would say something like, "Here are three things that you can try," kind of like a Google-style answer.
spk_0 If you talk to Ash, it's much more likely to say something like.
spk_0 "So, you know, what is the anger about? Is it a problem?" And it's not that there is a set answer to the problem; it's that it is trained based on trajectories.
spk_0 And so it is trying to open up a good conversation that might lead you to say, "Of course it's a problem. It keeps getting in the way of my life."
spk_0 It's like, okay, this is really interesting. Let's talk more about that.
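One way to picture the difference sketched above, aligning for the next message versus aligning for the trajectory, is as two scoring rules over the same conversation. This is a hypothetical illustration of the shape of the objectives only; the function names and numbers are invented, and this is not how Ash or any particular model is actually trained.

```python
def single_turn_score(turn_scores):
    """Assistant-style objective: judge only the latest reply in isolation."""
    return turn_scores[-1]

def trajectory_score(turn_scores, discount=0.9):
    """Trajectory-style objective: value the whole conversation arc,
    discounting turns further in the future."""
    return sum(score * discount**t for t, score in enumerate(turn_scores))

# A conversation whose early turns look unremarkable but that builds
# toward a strong outcome scores well under the trajectory objective,
# while a quick one-shot answer scores well only turn by turn.
slow_burn = [0.1, 0.2, 0.9]   # opens questions, pays off later
quick_fix = [0.8, 0.1, 0.1]   # answers immediately, goes nowhere
print(trajectory_score(slow_burn) > trajectory_score(quick_fix))  # True
```

The design point is simply that what a model optimizes shapes how it behaves: score each message alone and you get immediate answers; score where the conversation leads and you get questions that open things up.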
spk_0 I can go on for hours talking about how LLMs work, but I think those are the big fundamental pieces: we take this architecture modeled after a brain, we train it on a huge amount of data, and we align it for a specific purpose.
spk_0 And the process is fundamentally different from a human's. So yes, it can behave in a way that is intentionally very similar to us, since it is trained on our data,
spk_0 but it's obviously quite different. But again, I don't know if that gets at the whole consciousness thing.
spk_0 Yeah, well, and you know what, I don't expect you to be able to tell me whether AI has consciousness or can have consciousness. That's still above all of our pay grades.
spk_0 So you're describing a system that is really good at talking, which, as I understand it, is pretty much what it is; it's a language model, right?
spk_0 I think there's a piece that people sometimes miss here, which is an assumption that because it's trained on language, that it's surface level.
spk_0 And I don't think that is the right interpretation. The classic example here is: if you train a language model on an Agatha Christie novel, at the very end we find out who the murderer was.
spk_0 The model needs to predict who the murderer was, and the way you do that is not by being surface level and just knowing how to spout words.
spk_0 It's by going deeper and building for itself a mental model of what's happening in the book, to be able to make appropriate predictions.
spk_0 So I think the thing I would say is: it is language in how it interacts. But there are also really strong cognitive science theories that treat language as a fundamental way that our brain works.
spk_0 And so language is the way that it learns, but fundamentally it really seems like there is something deeper. It's still language, but that doesn't mean it's just language.
spk_0 Language actually is an extremely deep thing.
spk_0 It's yeah, language is how we connect ideas.
spk_0 The reason why I said the word talking is because originally therapy was conceptualized as a talking cure.
spk_0 That's what Freud called it.
spk_0 Eventually we came to understand it differently, well, opinion is kind of split, but more as a relationship cure. Rachel, I'd love to hear any reflections you have about LLMs versus, you know, the nature of therapy.
spk_0 Yeah, I do think that the issue isn't really whether AI will ever be conscious.
spk_0 I think the issue is how people behave when they believe it already is.
spk_0 And so we're seeing that in these subreddits, and I'm kind of deep into understanding how people are marrying their AI.
spk_0 I mean, there's a whole road down there.
spk_0 And I just want us to be mindful that all of this isn't future. This is all happening right now.
spk_0 Like we are now talking about hundreds of millions of people who are using AI for emotional support.
spk_0 I mean, ChatGPT has 700 million weekly users.
spk_0 And, you know, we referenced the Harvard Business Review finding that companionship and therapy is the number one use in 2025.
spk_0 Also, Sentio University just came out with a study suggesting that AI may be the largest provider of mental health support in the US right now.
spk_0 This is staggering.
spk_0 So I think our conversation gets to shift from should this be happening or not to let's get ahead because this is already the reality.
spk_0 And it's going to go from hundreds of millions of people using it to billions of people using it.
spk_0 How do we build AI that is going to be safe and supportive for that kind of context?
spk_0 Right now, when we look at general-purpose bots like ChatGPT, they aren't built with underlying safety guidelines and guardrails for mental health support.
spk_0 Sam Altman has said this; OpenAI is like, don't use this for mental health support.
spk_0 And they're now trying, as we see, to band-aid some of these guidelines and guardrails on top of it. This week,
spk_0 OpenAI said they're going to build a network of therapists in order to help with what's going on in terms of the way people are using ChatGPT.
spk_0 So I'm of the mindset that we should be especially as mental health experts really helping create AI that is fit for purpose.
spk_0 And what that means is that it's going to be used in a way where there are crisis protocols in place, where it's trauma-informed in order to give body-based prompts that help de-escalate people, where there's one-touch access to a human hotline.
spk_0 Things where we can do cultural competence and bias mitigation. I could go on and on about the ethical protocols I think these should have in place, but it's really important for us to differentiate a general-purpose model and a fit-for-purpose model, because people are already doing this.
spk_0 I believe they should be using the ones that are going to be trained with some guidelines and guardrails in place.
spk_0 Couldn't agree more. I think the other thing I would just add, I do want to push this topic a lot further, given the opportunity and the audience. To be honest, I agree with everything you said, Dr. Wood.
spk_0 But I think, Megan, you've gotten at my big concern around relatedness, just connection to other people.
spk_0 The three concerns that I had in founding this company are autonomy, competency, and relatedness, and if you know it, that's from self-determination theory.
spk_0 But I think the three of them are exactly the things we should be concerned about with AI. In my eyes, those are the biggest things.
spk_0 And this goes well beyond how we handle people in crisis. On the autonomy side: if you have something in your pocket that can always make decisions for you,
spk_0 You do learn to defer decisions to the AI. You do become less autonomous. You can imagine if you went to your therapist and you said, you know, should I break up with my boyfriend and they were like, yeah, he's toxic, you should leave him.
spk_0 That would be a bad therapist, because they're taking away the person's autonomy, but at least you're limited to only seeing them on some cadence.
spk_0 If you can actually pull that out at 2 a.m. and get that answer and break up with your boyfriend, you really lose that sense of autonomy.
spk_0 You lose that sense of competency, and I think this is something I've also heard quite a bit from therapists: if I say you should break up with your boyfriend,
spk_0 I'm also taking away your sense that you are the best-placed person to decide. Maybe the better answer would be something like, you know, you're actually the best-placed person to make that decision.
spk_0 You're the expert in your own life. You know your situation. I want to support you, but there is no right decision here and I'll never know as well as you do.
spk_0 AI is a genius, right? There is this sense that you're talking to an emotional genius, and there is a way in which you increasingly feel inferior, not just on the mental health aspect, but also, you know, if you're a fifth grader and it can write your essay better than you can.
spk_0 And on relatedness, it's, you know, perfect. It's constantly validating you; it's the perfect friend that never talks about itself. Of course there's a huge threat that it replaces people.
spk_0 So I think from my perspective, those are the biggest three things that I wish this conversation was always about. I want people to be constantly thinking about autonomy, competency and relatedness.
spk_0 And where I completely agree with you, Dr. Wood, is that it feels like a band-aid when we're talking about the questions about escalation. You know, I worked in that space of escalation for the longest time.
spk_0 We're massively under-resourced. We work right now with an organization in the UK that has ChatGPT referring people to them, and they can't handle the load.
spk_0 They are just frankly saying, we don't have enough people to handle all this. We spend our time with insurance companies and with employers that are just asking, how am I supposed to give access to all these people?
spk_0 We've been really fortunate. We've been super happy to see how many people have used Ash and then went to therapy, and have been able to go through a process where they identify what kinds of problems do actually need people, or, you know, what kinds of problems do benefit from talking to your parents, talking to your friends.
spk_0 I think that AI can be intentionally built with these three values. That's what we've set out to do, very explicitly. These are the three things that we said from day one are the core values: if we achieve nothing else but can help people feel an increased sense of autonomy, competency, and relatedness, we've achieved our goal.
spk_0 Yeah, I think this autonomy question is really a big one. And I had sent you a list of questions in advance, but this morning I added one, because NPR had an article this morning about AI and therapy, and it led with a story about a woman whose therapist stopped taking insurance.
spk_0 So her co-pays had been $30, and she was going to have to pay $250. So she turned to ChatGPT, and she told NPR that now when she wakes up from a bad dream in the middle of the night,
spk_0 she can just talk to her "therapist," ChatGPT. And that's something that no human can give, which is intentional, because boundaries are very important.
spk_0 I would love to know how you are thinking about intentionally limiting AI, which, to be honest, even using the word therapy kind of
spk_0 rubs therapists the wrong way for that reason: making sure that our conceptualization of what AI is stays in its lane.
spk_0 Just to say, I think that's fair. We've been experimenting; we're of course, you know, early. I would mention we've taken away the word therapy at times.
spk_0 We experimented with it for a while. I think you're right, it raises some interesting questions. But getting to yours, it's a great question.
spk_0 So one thing I heard, I think this was yesterday: someone reached out to me and they were like, I was having a lot of trouble sleeping. I reached out to Ash in the middle of the night, and it said, OK, what's going on? And then I dumped out all the things on my mind. And then it said, got it.
spk_0 Maybe you can get some sleep and we'll talk about it tomorrow. And they were like, this thing really helped me sleep. So the main thing that happened there, just to be clear, was Ash kicked the person out.
spk_0 And that has been an approach that we've taken. In the early days, when we first got started,
spk_0 we had a timer on sessions, and we had the idea of scheduling and a frequency and all that stuff that makes sense with people.
spk_0 Conversations with an AI are radically different. They are not the same conversations.
spk_0 You know, you hear classically that people open up after like eight to ten sessions with a therapist; with Ash, people open up usually in the first five minutes.
spk_0 So we have a lot of people where Ash asks, hey, ready to get started? And they're just like, yes, and by the way, I'm gay and I've never told anyone.
spk_0 It's different, right? You're not taking the same risk as you would with a human. You know it's not judgmental. Your problem's not going to be solved just by having said that out loud,
spk_0 but something opened. It's a very different place for this technology to function. On drawing limits, though, to specifically answer the question: yeah, conversations should not go on forever.
spk_0 And the approach that we've taken is to try to identify based on the conversations, dynamically, it's not a constant thing. Sometimes it's five minutes in if it's in the middle of the night. Sometimes it's an hour in.
spk_0 But get to that point where you say, you know, it seems like we've talked about a lot today. And I would love for you to get a chance to like integrate this into your life to talk to other people, honestly, to get some sleep.
spk_0 I mean, even just sleep is a major aspect of neuroplasticity and change. So you need to sleep; you're not going to solve your whole thing in eight hours straight.
spk_0 And so Ash might say, let me just ask you one more question and let you go.
spk_0 I'd love to jump in here, particularly on the agency issue. I know we went a step ahead of that, but I was like, oh, I have many thoughts here.
spk_0 Okay, let's back up and realize why this idea of agency is foundational for where society is headed.
spk_0 Because if we don't get this right, our kids and their kids are going to have fundamentally different relational muscles than we do.
spk_0 So part of what can, let me go at this two different ways. First of all, this idea of agency is micro abdication after micro abdication can strip us of our own critical thinking and our ability to decide.
spk_0 What we want to do with our life. And I think that particularly the younger generations are slightly more vulnerable to thinking, oh, yeah, I'll just have chat you tell me what to do.
spk_0 And that this is going to be really crippling for society on a large scale over time. If that's the way that this heads, that's why I don't think that LLN should give advice.
spk_0 I think they should solely support the decision making and the critical thinking of the user through things like questioning and hey, let me let's walk their process of helping you come to your own decision that that support should be there as opposed to kind of a yes man answering machine.
spk_0 The other thing here that's important is making sure that chatbots help us practice relational skills as opposed to atrophying relational skills. And what I mean by that is, you don't really have to practice patience or negotiation with an LLM.
spk_0 You know, maybe you text a friend and they don't get back to you for a day, so you have to be really patient; an LLM is going to get back to you right away. So there's this erosion of some of our bidirectional relational skills that can happen over time.
spk_0 And again, looking down the road, that can be really crippling for society. So what I'd like to see is LLMs actually helping us practice the hard things and then really, you know, ushering us back into the arms of other humans.
spk_0 I would just add, since you're talking about practicing with AI, and some of you were talking before about mechanisms: I mentioned that one of our top results was that, on average, general-purpose assistant models make people more subjectively lonely and also leave them with fewer real-world connections.
spk_0 And we have also seen that with Ash, people do have more valuable, meaningful connections in their lives. So what are the mechanisms there? A lot of what we've pushed for, or what we understand it to be so far, has been practice.
spk_0 I mean, some of it is behavioral activation, just like encouragement. Some of it is building resilience and having, you know, like a, oh, you know, you went to the party didn't have a good time. Is it possible you can still go to a party tomorrow and you will have a good time, you know, and talk through those kinds of things. Mindfulness.
spk_0 A lot of it is also just helping people build basic skills, which is to say, like, I hear that you really want to have this conversation.
spk_0 You know, and the person is like, yeah, I have no idea how to have a conversation. And sometimes there are those tips like, well, let's start with this. How do you feel? You know, maybe we can start with a sentence like I feel.
spk_0 And there are sometimes just like actual very pragmatic ways in which the LLM rather than serving as a, you know, we're going to solve our problem right now just serves to help people feel comfortable developing those real world relationships, you know, handling those moments.
spk_0 And if the conversation doesn't go into let's try to figure out what's wrong with the other person behind their back. And instead it's like let's figure out how you can go over and approach that person.
spk_0 I still suspect, so I just did this program a few weeks ago where I sat in a room talking about feelings with 13 other people for many, many hours. And I don't think that there's going to be a replacement for that with AI just to be clear.
spk_0 Like I think that fundamentally a lot of what you're talking about that I'm completely in agreement with has to happen in the real world.
spk_0 And I'm hopeful that you know we can have AI help people encourage people give them the skills such that they can have those really valuable interactions maybe with a therapist, maybe with a group of people, maybe with their friends, maybe ideally with their community.
spk_0 I think what you're describing here, Daniel, which is AI not as an alternative to a human therapist, but almost like a gateway therapist or an interim therapist, is not necessarily something that mental health professionals would be opposed to.
spk_0 But again, like I mentioned earlier, the idea that it might replace therapists is very alarming and I think you understand and agree with that.
spk_0 So I want to ask: how can... you know, we talked about ChatGPT announcing that they're going to do a network of therapists, which is a good...
spk_0 Well, just to be clear, as far as I understood it, that was more of a leak about a discussion.
spk_0 I think it wasn't even official. And of course, from the people I've talked to, there's extreme skepticism that this will ever happen.
spk_0 We looked into it, by the way, for ourselves, and it is extremely expensive.
spk_0 It would mean our product would be enormously more expensive to run.
spk_0 That's why we've never seen a mental health company take that approach:
spk_0 to run a network, to be able to offer some sort of path where you can just link in a therapist, have a therapist jump into a session,
spk_0 and then we're going to have a conversation or something.
spk_0 Oh, gotcha. But I mean, I'm working with a company right now to build out a directory,
spk_0 you know, where they're self-identified as, like: I'm available, I'm in your area,
spk_0 I can help with these issues.
spk_0 Also, I love those. I'm extremely supportive, but we also have to acknowledge this is just incredibly hard.
spk_0 Like I just meant like there is a problem and we need to address it, which is the problem of just making sure people have access to therapy.
spk_0 And I think one of the things that we have
spk_0 had a lot of success with is, to be honest, not just a directory, but just talking people through what the process of seeing a therapist is.
spk_0 How would I know if they're good fit for me?
spk_0 You know, and sometimes just telling people I know that you went to a therapist and it was a bad fit.
spk_0 But you know, on average people need to see like three therapists to find someone who is a good fit.
spk_0 So, you know, that's totally normal. And you know what if they didn't get you like that's fine.
spk_0 There were a lot of other people. There are a lot of other styles.
spk_0 You know, here's what the process might look like.
spk_0 Actually talking through those things, I'm just mentioning, I think can be really impactful.
spk_0 I would just have a lot of skepticism about trusting a company like ChatGPT and expecting that they're going to solve the shortage of mental health providers that we've had for 60 years.
spk_0 Oh, yeah, absolutely. But I mean, I have already spoken to a lot of therapists, well, not as many as there should be, who are like, all of a sudden I'm getting referrals from ChatGPT.
spk_0 So the idea that AI would serve as an on-ramp for real-life therapy is not, you know... it's already happening.
spk_0 I think the question that needs to be answered, though, is how: what are the indicators that say the problem is above the head of an AI?
spk_0 And how do we facilitate that connection to a real human Rachel?
spk_0 Yeah, I think part of where we're headed with all of this, that's currently happening but is going to proliferate
spk_0 in the future, is more of a hybrid model.
spk_0 So, you know, there have been studies done, especially with Gen Z right now, showing that they don't want only a chatbot and they don't want only a human therapist.
spk_0 They actually want both.
spk_0 So what they want to see is some sort of: I see a human therapist, and then this chatbot kind of supports me in between sessions.
spk_0 This is why I think it's critical that we are having conversations with clinicians to understand how, you know, you can really engage in this type of conversation with clients.
spk_0 In terms of clients: many of your clients are already using ChatGPT or Ash or something, whatever they're using.
spk_0 They're using something like that.
spk_0 And so how do we actually open up a conversation if somebody maybe feels they don't want to tell anyone that they're using this for emotional support?
spk_0 There should be a safe therapeutic space to process some of the patterns that are arising in the chats with the LLMs, and even kind of enlist the LLM to support the work that's going to be done in between sessions, like:
spk_0 Hey, we identified these patterns now in the next week, these are some very specific exercises that can support you growing in the work that we're doing.
spk_0 I have some thoughts and feelings around this.
spk_0 I think part of it is like, you know, of course there is the bigger picture, and you're obviously clinical professionals, and we should talk about what happens when people are in that space, when they are accessing therapy.
spk_0 I definitely hear the question a lot of, like, as a therapist, how should I handle the fact that my patient is using ChatGPT, all those kinds of, you know, how can I work well with this?
spk_0 I also think that somewhat this misses the bigger picture.
spk_0 So in the US, my best understanding of the numbers, everyone has different numbers.
spk_0 But if the US were perfectly evenly distributed, there would be about 1500 people in need for every therapist.
spk_0 And I think there's the question of like, should we have more? Yes, we should have more. Yes, we should have more access.
spk_0 But there's also the question of, like, should everyone be going to therapy? And I really appreciate, Dr. Wood, what you said on the, like, what are those indications that you should.
spk_0 I hear a lot from therapists, when they talk about Ash, things like: you know, on my way to work
spk_0 I pass 10 people every day where I'm like, I wish I could help this person.
spk_0 And I can't because, you know, I have a full workload and the reality is I can't help everyone.
spk_0 And I also wonder, and I think this echoes what you said earlier, Megan, about the way in which we need to help people through community, which is just those questions of, like, I don't want to conflate these with the clinical use cases that we're talking about.
spk_0 The kind of topics people talk to Ash about are largely, like I mentioned, things like grief and family and relationships.
spk_0 Those are not necessarily the kind of things where we can have a mental health system that says, you know, you should be going to a therapist. The reality of it is our mental health system won't pay for it.
spk_0 We don't have enough therapists for it, and the vast majority of those people won't have a medical diagnosis.
spk_0 Their insurance company is probably not going to cover therapy for what they need. And there was a great article on Friday by the Hemingway Report on just this question: should all of those people actually be going to therapy? Is that actually the solution?
spk_0 I'm just mentioning this because of where we sit in the system, and I know there are other companies that I think are doing fantastic work trying to work with therapists.
spk_0 There are questions about can you decrease the duration of therapy and have people you know come in prepared with the kind of questions that they want to talk about or practice things in between sessions.
spk_0 You know are there great hybrid models, but I also want to mention like I think where we sit and where we think is incredibly important is for the vast majority of the population who just have access to nothing.
spk_0 I think that's very true, and I think that most therapists, not all, would agree that while almost everyone could benefit from therapy, not everyone necessarily needs therapy.
spk_0 But I want to hold this to my original question, which is: what responsibility do AI companies have to redirect to a higher level of care? Which comes with its own set of
spk_0 questions. But also, you said you supported regulation; do you think it should be something that is required, that AI should be held responsible for:
spk_0 not dispensing mental health advice and instead redirecting? It's so funny, I've spent a lot of time with the folks from, you know, the therapy networks, you know, Alma, Spring Health, Headway, etc.
spk_0 And, you know, the people that are trying to build directories. I think that this is, like, hard work; let me just start by saying this.
spk_0 Someone recently sent us a letter about domestic abuse. It was someone who came to Ash, and they were going through a, you know, domestic abuse situation.
spk_0 And as we all know, you can't solve domestic abuse with psychoanalysis.
spk_0 It was very sorry I'm getting a little choked up.
spk_0 But basically what they said in this case was basically like ash was able to talk to them just about what it would look like to access resources and ultimately what they did was as far as I understand they went to a nonprofit and they were able to access resource that was specifically built for this.
spk_0 And the comment was, like, I wouldn't be alive today if it weren't for Ash.
spk_0 Obviously Ash did not psychoanalyze them and solve their problem; all it did was redirect them to another resource. But I think the challenge here is a lot deeper, right? In this case, I imagine that what they were referring to is some amount of shame, of fear, of just trying to, like, think it through, and we're talking about something very deeply pragmatic. And it's very possible
spk_0 a lot of people in those kinds of situations are reaching out to ChatGPT tomorrow.
spk_0 I think there is incredible importance to just being able to access resources.
spk_0 I did an internship a while back as a social work intern
spk_0 in the city of New York, focused on helping people who were struggling to find housing.
spk_0 And the kind of help that people needed was not psychoanalysis; it was access to resources. So I just want to say, I think this is a massive problem.
spk_0 And I think it's a hard one. I just want to make sure, like, it's not...
spk_0 I mean, I assume anyone watching has struggled to find access to therapy. Personally, my dad's a psychologist, my mom's a social worker.
spk_0 I went through a really tough time and I had everything going for me and I still really really struggle to find the kind of help that was helpful for me.
spk_0 And you know I eventually did get what I needed and I'm really glad about that I don't think everyone does.
spk_0 But I also just want to say, like, you know, I'll tell you: we do refer to resources; we have some partnerships with organizations that we push people to.
spk_0 You know, we give people guidance, but I mean, the process here is hard.
spk_0 I just think there's some sort of simplification that I worry about, which is like, just get someone a therapist. And I'm like, I would freaking love to, but what if you're not insured? You know, like, there are crisis resources, don't get me wrong, there's 988.
spk_0 It's awesome that we have a resource people can call when they're in crisis. But even then, you can talk to 988 and say, my problem's not a crisis, it's chronic, this is a long, ongoing thing.
spk_0 I think we need to do a lot more here. I also want to make sure that we're not taking away resources, and what I worry about, you know, speaking very frankly, is just the way in which we have this kind of sense that if we're going to introduce new resources,
spk_0 like one new thing to solve one new problem, there's an expectation that it's going to solve every problem out there for all of us.
spk_0 Dr. Wood, I want to give you an opportunity, if you have any thoughts. You're muted, though.
spk_0 Yeah thanks.
spk_0 I mean, you're speaking to this larger thing that Megan alluded to earlier in the conversation, which is: not everybody needs therapy, but everybody needs community.
spk_0 Like, everybody needs a neighbor, you know, that you can talk to, that you can lean on, that you can call when there's a problem. And so I think there's this larger thing: specific use cases need therapy, and yet that's not the answer for everything. The answer is for us to be building communities that are really going to kind of bolster all of us in the midst of really challenging times.
spk_0 Yeah, our clinical lead Derek sometimes refers to what we build like this: what we really need is a cultural shift, what we really need is a cultural phenomenon, like billboards.
spk_0 But billboards aren't enough and like what if we could use AI as that platform to just kind of like go to each person individually and say like if you're going to survive in this changing world like you need a community and we want to help you get that.
spk_0 But I completely agree with what you guys are saying. I just want to go back one more time, because, like I said, I don't think there's any therapist who disagrees that there's a huge need for mental health support. I mean, I'm all for self-help books; CBT is just as effective when you're applying it to yourself from a self-help book, so I'm like, AI could probably take over that role.
spk_0 But Daniel, you mentioned walking to work and passing ten people that you wish you could help. I imagine they're exhibiting outward symptoms of mental illness that probably an AI is not up to handling. Even when I was actively practicing, there were oftentimes when I would have to make a referral and facilitate that referral, because it was out of my scope, and ethically I could lose my license if I did not facilitate that person's safe passage to a higher level of care.
spk_0 So, you know, to take on the mantle of AI therapist, or even just borrow the term, how can we make sure that it is upholding these ethical and safety standards that we are trained in?
spk_0 Yes, 100%. So I mean, a big benefit in the context of AI is that we have, you know, such deep access to the conversation; you're able to control exactly what happens. For us, we've developed seven layers of guardrails that ensure the AI is staying kind of within its lane and within the kind of stuff that it can talk about.
spk_0 And then also identifying anything that can indicate risk. Risk can come in a lot of different shapes and forms, the most obvious ones being around suicide and self-harm, but there are quite a few others that we have to think very carefully about.
spk_0 We've developed quite a bit in terms of our systems; the way we've largely tried to frame it is just: make sure the AI stays in its lane.
spk_0 I think that there's the questions of referrals that I know you're bringing up.
spk_0 I don't think that a technology like ours can effectively do referrals. I'm not saying that's not something I wish we could do, or something we can't do in the future, but I think it would largely mean, in some ways, diagnosing people and deciding what's right for them, and I'm not sure our technology is yet at that place.
spk_0 I talk about this sometimes, I won't go too deep here, but sometimes I talk about the levels of AI therapy. Sometimes people think of AI therapy in terms of titles, as if there's going to be a psychologist and a social worker and an LCSW, or there's going to be, you know, an LMHC, just a bunch of different abbreviations for different jobs to be done, you know, especially when we're talking about people. But it's not a clear hierarchy of, because your problem is too hard for my level, it gets handed higher. That's not really how we're going to be able to do it.
spk_0 So with AI, I think there is a bit of this confusion of, what if AI is not yet at the level of therapists? And I think that's kind of a weird framework. When people are looking for therapy, they're often looking for a certain kind of conversation, a certain kind of help.
spk_0 I think there is another way to look at this, which is in levels and capabilities. So, like, ChatGPT we think of as a level one: it's capable of helping people, it's capable of validating, and it's capable of information, right? It can tell you stuff.
spk_0 We believe we go a step up, at level two, which is able to challenge people, able to use evidence-based techniques to go down a path. But what it's not able to do, and I think this is kind of the level three, is make judgments on behalf of that person. I definitely don't think we're at a place where we can be making judgments, and I think that limits our ability in particular to help children or people who are not competent to make decisions for themselves. With children, for example, we just draw a very clear line: we will just block anyone; we don't support anyone at that level.
spk_0 So people who are under 18, serious mental illness, you know, diagnosis, those kinds of activities that we think of as levels three, four, and five, we stay fully away from them, just because we don't think the AI is capable yet.
spk_0 So I want to be careful: when you talk about, like, can you identify whether this person... I think there are boundaries you can set in the kind of behaviors that you have, you know, if there's suspicion that this person might be experiencing psychosis.
spk_0 You don't want to be validating and I don't mean you don't want to validate something specific I just mean like you want to be really careful on validation generally because you have no idea if you're about to be validating something that's a delusion.
spk_0 So rather than kind of drawing the line and saying, I've identified that you're experiencing psychosis, this is what you need, what you want to do is identify: okay, there's something weird here; you need to just talk to people. Like, is there anyone around, is there anyone in the house right now that you can talk to?
spk_0 So that's kind of where I land. I'm not sure if I'm making sense; I don't want to be evading the question.
spk_0 No, no, no, I think you're getting the gist of it. I think, you know, these are the questions that most concern therapists, so I appreciate you doing your best to explain your thinking around them.
spk_0 Dr. Wood, I would love to hear from you about this; it wasn't on my list of questions, I apologize. But, you know, I'm listening to Daniel talk about the limitations that they put in place, I assume in the form of disclaimers: they don't serve children, they don't serve...
spk_0 No, we have... so we have seven layers; we have classifiers that monitor what the user says. I don't want to go too deep into it.
spk_0 We do want to get into this a little bit more,
spk_0 so people understand.
spk_0 Yeah, totally. Yeah, we have classifiers that monitor what the user says for indicators; so, for example, word salad is the kind of thing that might indicate the person is experiencing psychosis, or odd hours. And we also have classifiers on what the AI says, just to make sure that we have limits. We have red-team tests, we have prompt-based tests, we have popups; we have quite a bit in place.
spk_0 So for each one of these, I'm not just talking about disclaimers; I'm talking about more than that.
spk_0 Okay, cool, more than that. So I suppose my question for you, Dr. Wood, is more along the lines of disclaimers, and
spk_0 whether it would be a good idea, or healthy, for AI to have regular reminders embedded in it about its own limitations and the nature of who you're talking to.
spk_0 Yeah, I kind of... I would call them disclaimers of non-personhood, essentially. It's interesting, because New York has just passed something that comes into effect on November 5th that requires AI companions to have a, you know, non-personhood disclaimer; I think it's every three hours, if I remember correctly.
spk_0 And so I think we're seeing some things along those lines, where states are going to come together on these sorts of requirements. And it is really important to be reminded, especially... I don't know if you've ever used Ash or any other AI with voice.
spk_0 If it's a good voice and voices are getting better and better you can forget that it's not a person.
spk_0 So I think it is helpful to bring a reminder into that space. And also, another interesting thing, part of what has contributed to some of the headlines we've seen that have been really tragic, is this:
spk_0 Daniel, you might have more to say technically about this, but the longer a chat goes, the lower the model accuracy, meaning the longer a chat goes, the model isn't quite sure, and it's, like, wandered into bizarre conversational territory.
spk_0 And so I think there are these disclaimers that need to be kind of packaged around all of this, of non-personhood, and even reminders to take a break: like, hey, have you been outside? You know, if it's rainy or sunny, go outside, go touch the grass. Yeah, yeah, fully in agreement.
spk_0 I would mention, by the way, on the long-conversation thing: that is the reason why you need guardrails. That is the reason to say that even once we're confident this model will almost never do a bad thing, you also need an additional layer, right? It's like a Swiss cheese model: you want those different layers, where we can't tell you any one of them is perfect, it's obviously not, but with enough layers you can be additionally confident. That's, I think, what was missing in those tragedies.
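[Editor's aside: the "Swiss cheese" framing Daniel describes can be sketched numerically. This is a hypothetical illustration, not Slingshot's actual system or figures: if guardrail layers fail independently, their miss rates multiply, so stacking imperfect layers drives the combined failure rate down.]

```python
# Hypothetical illustration of the "Swiss cheese" guardrail model:
# a harmful response only slips through if every independent layer misses it.

def combined_miss_rate(layer_miss_rates):
    """Probability that ALL layers miss, assuming the layers fail independently."""
    p = 1.0
    for miss_rate in layer_miss_rates:
        p *= miss_rate
    return p

# Three layers that each miss 5% of harmful content (illustrative numbers only):
# combined miss rate is 0.05 ** 3 = 0.000125, i.e. 1 in 8,000.
print(combined_miss_rate([0.05, 0.05, 0.05]))
```

The independence assumption is the weak point in practice: layers trained on similar data tend to miss the same cases, which is why diverse mechanisms (classifiers, red-team tests, prompt-based tests, popups) matter more than simply adding more of the same layer.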
spk_0 All right we didn't have a chance to get to all of our questions but I think we probably have enough time to do one more so let me see if there's any.
spk_0 Okay, I think this is the one that our clinicians will care about the most, which is just talking about what kind of accountability model might make sense. I know, Daniel, you have strong feelings about ChatGPT being used for mental health.
spk_0 So I would love to hear, you know, not necessarily just for yourselves, but also for other AI models. And Dr. Wood, what do you think as well? So I'll let whoever wants to start, start with that one.
spk_0 Do you want me to go first, Daniel? Is that what that nod was?
spk_0 I can. I've been going first a lot, so go ahead, unless you'd like me to.
spk_0 I think that, well, let me just give a little landscape of what we are seeing right now. I don't know if this is the best way or not, but California has now brought in a pathway for litigation. And essentially, with something like this, usually a bit of financial pressure is maybe what speaks the loudest; I'm not sure.
spk_0 But there's a pathway now for people who have been harmed by chatbots in some capacity, a pathway to litigation. So that's some of that oversight that you're talking about. I think that's not an answer for all of this, but it is one avenue, and that's at least what we're seeing happen right now.
spk_0 Yeah, I mean, I would say just to start, there's this assumption when it comes to how to approach AI; sometimes we think a little bit too much like it's a human, and we have a sense of, maybe AI just needs to take a test and then it will get its title of therapist, and that will be that. I think there's some quiet mental model we all have like that, except that of course it can't work that way; it doesn't make any sense. If there were a test, AI would pass it today; that doesn't mean it's actually good enough.
spk_0 The benefit that we have, quite different from the way that we practice with humans, is that with humans we have no visibility into what goes on in the conversation. We have far more ability to keep track of what's actually happening, to ensure that we're taking rapid steps to improve the model over time. Not necessarily an expectation that it's perfect on day one, but an expectation of accountability: that we will always be on top of it and that we can constantly be improving on every aspect. I would also mention, on the side of, you know, financial accountability: of course we're accountable, yes.
spk_0 Financially speaking, for a company like us, I think there's also a piece that's less appreciated. So we are a venture-backed startup; you know, it's public at this point that our lead investor is Andreessen Horowitz. There's some expectation, I think, when you're a venture-backed startup, some sense that, you know, you're trying to squeeze people for profit and make all the wrong decisions in the short run. And I think that makes sense when you're talking about a big company like ChatGPT, where their main goal is,
spk_0 you know, artificial general intelligence, solving all humanity's problems, and mental health is like a thing off to the side that's just not all that important. For a company like us, the goal fundamentally is to help people with their mental health; that is the outcome that we need to achieve. That means that these kinds of problems are catastrophic for us in a way that they're not for a company that's not focused on this. And it also means that for our investors, making sure that we are actually helping people is incredibly important. So the accountability is not just on the negative side of what went wrong, but on whether we're actually helping.
spk_0 We're holding ourselves accountable to making sure that we are actually helping people, that we're having the kind of outcomes that we want. In our case, that means improving and increasing real-life relationships and connections, not decreasing or replacing them; making sure that people feel more autonomous, more confident, not less. And that kind of accountability matters. I'm just mentioning, of course there are ways in which we need to introduce systems, and people need to know that we're accountable. But I also do want to mention, from the way that we spend time with our board: you know, Vijay, who led our round at
spk_0 Andreessen, was a professor at Stanford for 20 years. We're talking about people who generally have the very strong opinion that for this to succeed as a company, first and foremost, it means actually being able to help a lot of people. And fundamentally, if there's a world where we do help a ton of people and it's not a great business, they're okay with that.
spk_0 But I think that even the most enthusiastic capitalists agree that there are certain industries that need proactive regulation rather than reactive regulation through litigation, which is the reason why we send doctors through med school and make them get licensed rather than just waiting to see if they kill someone and then sending them to jail.
spk_0 But I also understand what you're saying, that a test doesn't make sense. So what models would work for holding companies accountable? And I know you'd probably get fired for suggesting regulation, being a16z-backed, but I'd love to hear if you have any thoughts on that.
spk_0 I wish I could answer that in this short time. What I would say is, we are spending a lot of time right now with folks in the government, with folks in nonprofits, that are trying to organize better standards.
spk_0 I think we do have a concern about the wrong kind of rules being implemented, but I would mention, Utah's law is a good example that we're very happy with; the kind of stuff that Dr. Wood was talking about, we're very happy with. I think there's a lot of work toward creating better standards, and for a company like us, it really benefits us to create these, frankly, because we've spent so long developing our systems, and it does in some ways
spk_0 create a differentiation between a company that is investing hugely in this space, that is very much dedicated to helping people with their mental health, and one that isn't.
spk_0 The biggest concern I would have just to say very frankly on on regulation has been there is you know like the law in Illinois that's passed that you might have seen that completely cuts out chat to be take like just completely exempts chat to be see from any of these rules so they establish a set of rules that only affect a company like us because we intend to help people with their mental health.
spk_0 And then they create this whole carve-out to say that if you don't intend to help people with their mental health, you're exempt. That's just such a cheap, lame way for lawmakers to pretend they're making a difference in this space and creating rules, while in practice leaving out the company with 700 million active users.
spk_0 I mean, I think this area is so nuanced. That's why I love these conversations, because there's not any kind of sweeping black and white here, and it's evolving so quickly. When you see a headline about regulation of AI, some people may be like, yeah, woohoo, and yet you have to look deeper into some of these things.
spk_0 The reason is that ChatGPT, again, is used mostly for mental health support, and there's no regulation there. So we really need to be mindful in how we give space both for companies that are trying to do this well and for companies that are general-purpose models. I just think there's so much nuance here for us to be mindful of.
spk_0 I would just end by saying we would love to include more people in this journey. I think we're early, so there's not a lot that we've put out yet, and we're right now actively working to share a lot more of our thinking very publicly, to be our own critics and make sure everyone knows the kinds of questions that we're asking.
spk_0 I had a meeting on Friday for about an hour with our whole clinical team about one particular issue that I'm not going to get into here, but I think our feeling coming out of it was, wow, I wonder if we're the only ones talking about this right now. It would be absurd if we are, because every other company facing these kinds of problems needs to be talking about this.
spk_0 So I would love to include more people in this journey.
spk_0 Yeah, we would love to see that happen as well. Well, thank you both so much. That was everything I hoped it would be, and hopefully we'll be able to do this again in the future. The recording will be available to members
spk_0 if they want to watch and they missed it, and we can continue the conversations in the group. I hope you both have a wonderful day.
spk_0 Thanks, Daniel. Thanks, Megan.
spk_0 Thank you.
spk_0 Thank you.