Live Log

**Jan 16, 12:57 PM**

Learned about the concept of the therapeutic alliance, or the bond formed between the therapist and patient. Apparently, the patient's feelings of autonomy are central in forming this, particularly experiences of honesty, safety, and comfort. These in turn drive outcomes like gratitude, personification of the AI agent (i.e., assigning it traits, addressing it directly as "you"), and having a clear path towards client goals.
**Jan 16, 1:26 PM**

Thinking about finding interviews with Wysa's creators.

It would be interesting to explore policy gradient-based learning and other courses of action for those creating conversational agents, but I think I need to stay grounded in the POV of the consumer. What do they need to know about engaging with virtual mental health assistants, and how can they mitigate the risks to maximize the positive impacts of AI-delivered therapy?

**Jan 16, 5:01 PM**

I had a long interview with a psychologist chatbot and watched Maggie Appleton's talk on Generative AI. I think I'm starting to get a good picture of where our technology currently stands.

Our language models are great at active listening, asking thoughtful questions, and responding non-judgmentally in real time.

On the flip side, they are grounded in a fuzzy picture of reality rather than reality itself; they lack comprehension of social nuances, limiting their ability to meet the needs of marginalized populations; and they still lack key aspects of human connection, such as shared IRL experiences (a prerequisite for empathy) or the ability to interpret body language.

I also want to recognize that my optimism about AI comes from a place of privilege. Because I am "worried well" (meaning in relatively good health) and have the resources to get professional help, I am less vulnerable to the negative outcomes of a chatbot giving me inaccurate or unhelpful information.

I like how Maggie ended her talk by identifying possible futures of a web with Generative AI, and I want to do the same for my essay, especially having just finished A Psalm for the Wild-Built.

**Jan 17, 7:55 AM**

How can AI-powered therapy reach those who really need it? Working title is "Scaling Empathy: Ethical Considerations in AI-powered mH Support."

**Jan 19, 10:14 AM**

Rachel sent me a video of a guy talking about how capitalism conditions us to think that mental health is something we must work through individually, whether through therapy, self-help books, or habits (meditation, exercise). This shifts attention away from the broader structures that make us all miserable, moving us from primary care and prevention toward response-oriented measures.

Some of the questions I'd like to answer through my paper are:

‣ Why are people turning to AI for emotional support?
‣ How are AI chatbots able to build a therapeutic alliance with their users?
‣ What are the limitations that users of AI psychologists should be aware of?
‣ What are the ethical concerns of offloading therapy work to AI?
‣ How can human-AI collaboration advance the field of mental healthcare?
‣ What would it look like for AI to facilitate a move towards collective mH solutions?

Should I include a reflexivity statement?

**Jan 19, 11:33 AM**

I'm going to start by talking about A Psalm for the Wild-Built. Then I'll introduce Kaylyn, Character.ai, and get into reviews by me and its user base.

**Jan 19, 12:57 PM**

Thinking of how to section my essay into different parts…

‣ Does this constitute 'therapy'? / The human-AI therapeutic alliance (can also get into the benefits + positive reviews of therapy bots here)
‣ Should these be regulated? / The other side of the coin: confidentiality, safety ("Remember: Everything the characters say is made up!")
‣ Scalable…for whom? / WEIRD populations, nuance, the biopsychosocial model
‣ How do we move forward? / Human-AI collaboration & Filipino-centered solutions

**Jan 19, 3:11 PM**

Thinking of including diagrams to help readers better understand the concept of the therapeutic alliance. I'm so excited!

**Jan 20, 3:20 PM**

I've been at this since 4 AM, and I think I'm making progressively less sense by the hour. However, I was already able to finish the section on benefits (yay!), and now I'm halfway through the trickiest section on the ethical concerns surrounding AI. There are just so many that I don't see a clear-cut way to condense them into a few paragraphs. Now that I've touched a bit on safety and explainability, I'm thinking of talking about efforts to protect human agency, drawing on Maggie Appleton's talk. Then I have to think about how to integrate ideas like the privatization of stress and the biopsychosocial model. Whew, I hope I'm not biting off more than I can chew.

WORKING SUMMARY

The demand for empathetic support and psychological skills training has long outpaced supply. The scarcity of mental health professionals, with only around three for every 100,000 Filipinos, is compounded by obstacles such as cost, geographical constraints, and societal stigma, all of which hinder access to the benefits of conventional therapy. Technology offers a potential solution. As AI-driven chatbots like ChatGPT become more common, psychologists have an unprecedented opportunity for the mass delivery of their services.

Yet this scalability raises a critical concern: that tech companies will prioritize efficiency over ethical practice, risking the trust and safety of those who use AI counselors. This review explores the current state of AI chatbots in mental healthcare, examines their limitations and impact on therapy, and suggests ways for people and AI to work together for better mental well-being.

Due on the 21st

<aside> 💡 Always, the lens is: in what circumstances will AI-as-therapist be beneficial, and what risks do we have to mitigate to ensure it does not drive those with mental health concerns further into harm?

</aside>

Outline

  1. Introduction
  2. The Human-AI Therapeutic Alliance
    1. What is the therapeutic alliance, and why is it central to therapy?
    2. Factors in building it
      1. Accessibility (Cost, Geographical Location)
      2. No limits on duration and frequency
        • Here is where traditional and AI-powered therapy diverge. An AI won’t be impatient with you
      3. “No judgement” — Lessens the stakes for vulnerability
        • Particularly helpful for stigma and marginalized populations
    3. Strains in the alliance (top-level concerns that undermine autonomy)
      1. Explainability & Safety
        • Harmlessness and helpfulness
          • Offensive content
          • Agreeing with self-harm
          • Responding inappropriately in sensitive scenarios
        • Who is vulnerable?
        • "Remember: Everything characters say is made up!"
      2. Data Security
        • Both for users and for training the model
        • sec. 2 “including to train our artificial intelligence/machine learning models”
      3. Lack of Regulation
        1. Protecting human agency
        2. Models as reasoning engines, not sources of truth
        3. Augment cognitive abilities rather than replace them
          • No one is replacing anyone! HAHA
  3. Kaginhawaan (well-being), beyond AI therapy
    1. Filipinos conceptualize well-being as having a relational component
      • Research suggests that family-centered therapy is more effective for Filipinos than traditional Western individual treatment approaches
    2. Paragraph on privatization of stress
      1. “Obfuscates gaps in societal systems”
    3. What if we leveraged AI beyond individualistic solutions?
      1. Tim Althoff on using AI for peer support
      2. Stronger communities
    4. AI therapy, while offering individual support, should be part of a comprehensive approach
      1. Given budget constraints, a viable approach is to invest in preventive measures that promote mental health rather than responding to mental illness reactively
  4. Conclusion
    1. Therapy is only one component
    2. As AI scales, technologists should be very clear on where their loyalties lie
    3. Empower therapists, protect the vulnerable