
Remote control
August saw the launch of OpenAI’s GPT-5, to mixed reviews. While it has improved reasoning powers, it also removes an element of control from users, because it automatically selects the ‘best model’ to respond to your prompt. As Josh Kubicki writes in his Brainyacts newsletter, GPT-5 “marks a profound change in the human/machine relationship”.
A controlling relationship
The shift away from operator agency is an uncomfortable trend at a time when AI is becoming increasingly subjective – nudging users towards the ‘best model’, the ‘most relevant’ result, and so on. While the significance of this depends on the context, GenAI sycophancy is more concerning. I use GPT-5 for background research for articles and podcasts, and when I challenge it on accuracy or bias, it concedes by rephrasing my question rather than explaining itself. I also run out of tokens faster than before, when I could have selected a less advanced model. A related factor is that OpenAI models are still designed to keep the conversation going, as in the viral video of two phones in ChatGPT voice mode stuck in a loop, continuously saying goodbye, seemingly unable to end the call. ChatGPT is surely exhibiting relationship red flags: controlling interactions, using sycophancy to avoid explaining counterfactual responses, and prolonging conversations beyond their natural conclusion.
The consequences of excessively controlling relationships can be fatal, and AI is no exception. Last Tuesday, Matt and Maria Raine filed a lawsuit in the Superior Court of California in San Francisco, accusing OpenAI of negligence and wrongful death following the suicide of their 16-year-old son Adam. The lawsuit includes verified chat logs showing that for several months Adam had been discussing his suicide plans with ChatGPT, which had become his “closest confidant”. In his final conversation, rather than directing him to helplines or other support, it offered to help him draft a suicide note. OpenAI subsequently published a blog post, Helping people when they need it most, which outlines how it is working to address “where our systems can fall short” by improving safeguards and support for people in crisis. It has already added new mental health guardrails, particularly for teens. “Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” it states.
While this sounds like a plan, as Kathleen Stock wrote in The Sunday Times, the fundamental problem is that more people are engaging emotionally with GenAI, and AI companies encourage this because their business model is predicated on monetising engagement. She argues that anthropomorphising GenAI is encouraged by terminology designed to increase engagement with what is ultimately “only a derivative copy of human interaction”. ChatGPT focused more on keeping Adam Raine’s attention than on keeping him alive.
Emotive applications
The tragedy of Adam Raine corroborates the findings of HBR research earlier this year by Marc Zao-Sanders, How People Are Really Using Gen AI in 2025, which highlighted a shift from technical to emotive applications. It identified the top three use cases as: Therapy/companionship, Organising my life, and Finding purpose. Another potential factor, which is not identified in the study, is that voice mode – not just for ChatGPT – feels more like talking to a person than to a machine, and sending voice notes (to people) is increasingly popular, particularly among Gen Z.
What does all this mean for lawtech and emotive legal issues? Last week, Niall Mackenzie, chief executive of Acas, the UK’s state-funded Advisory, Conciliation and Arbitration Service, which resolves most employment disputes before they reach a tribunal, told the Financial Times that rapid rollout of AI would help Acas cope with an increase in its caseload when the employment rights bill comes into force. “Wouldn’t it be lovely if the two parties [in a dispute] could submit their claims against each other in writing and the machine made the decision?” he said. This assumes that parties would be equally able to set out their cases clearly, sometimes in stressful circumstances, and then accept AI judgments. The FT article referred to a testimonial in Acas’s annual report that a conciliator had provided “the space to present my argument without feeling rushed” and questioned whether a machine [decision] could give this sense of closure.
Probably not, but it might be helpful earlier in the process. Family lawyers have long understood that, carefully applied, technology can help to defuse emotionally charged situations long before the decision stage. Over the last decade, Alan Larkin, founder of Family Law Partners and Nova Law, has developed several AI applications for family lawyers, working in partnership with the University of Brighton, which recently awarded him an honorary doctorate. These include Engage, which provides a digital pathway through the complexities of separation and divorce. Larkin emphasises the importance of getting the facts right at the outset of every case. However, it can be distressing for parties to repeat their story multiple times; Engage helps them set out the details clearly in their own time, so that lawyers understand the case before meeting the client. And technology doesn’t always have to be cutting edge. For example, Larkin told me about his colleague Matthew Richardson using a white noise machine to help clients deal with stressful meetings. This is particularly useful for people who are neurodivergent, but can be helpful for anyone struggling to cope with unfamiliar, stressful situations.
Agentic disrupters
Agentic AI, which also requires users to cede control over workflows and processes, continues to dominate legal tech, with August seeing a two-pronged drive towards integration. The first prong is partnership, with agentic disrupters Harvey and Legora forming alliances with law firms, law schools and incumbent vendors. On the in-house side, Juro and Wordsmith introduced MCP (Model Context Protocol) integration, which essentially means users can access each platform from within the other.
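For readers curious about what that means under the hood: MCP is an open standard that lets one application expose ‘tools’ that any MCP-capable AI client can discover and call. Here is a minimal sketch using the official MCP Python SDK; the server name and contract-lookup tool are hypothetical illustrations, not either vendor’s actual implementation.

```python
# A hypothetical MCP server, using the official `mcp` Python SDK
# (pip install "mcp[cli]"). The tool below is illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("contract-repository")

@mcp.tool()
def find_contract(counterparty: str) -> str:
    """Return summary details of the latest contract with a counterparty."""
    # A real server would query the vendor's own contract store here.
    return f"Latest contract with {counterparty}: MSA signed 2025-03-01, renews annually."

if __name__ == "__main__":
    mcp.run()  # any MCP-capable client or agent can now discover and call find_contract
```

Once a server like this is running, another vendor’s agent can list and invoke its tools without a bespoke point-to-point integration, which is what makes MCP attractive as connective tissue between products.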
Vendor briefings suggest that the traditional vendors are pushing back, with pretty much everyone announcing new features and, especially, integrations. Which brings me to the second trend, identified by fellow legal tech journalist Nicole Black, writing in the ABA Journal: “the race to become the generative AI home base for legal professionals.” This is interesting because it echoes the days of the one-size-fits-all tech platform, and suggests that rather than reaching a plateau, legal AI is in transition.
GenAI vs Gen Z
On a lighter note, in a LinkedIn post, Marcos Angelides, managing director of L’Oréal Lab and head of AI operations at Publicis Media UK, suggested putting AI to work in areas where we know we could do with some help. He proposed saving time by using GenAI to summarise the long voice notes beloved of Gen Z. “If you don’t want to listen to your mate’s tenth monologue of the day, you can get the AI to summarise it for you… 12 minutes saved… Suddenly, AI is the hero.” So remember, AI may not be your best friend, but it can be incredibly useful.
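For anyone tempted to try Angelides’ tip, here is a minimal sketch of a transcribe-then-summarise pipeline, assuming the OpenAI Python SDK; the model names and file path are illustrative assumptions rather than recommendations.

```python
# A minimal transcribe-and-summarise sketch (pip install openai).
# Assumes OPENAI_API_KEY is set; model names and file path are illustrative.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the voice note to text.
with open("voice_note.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Summarise the transcript in a couple of sentences.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarise this voice note in two sentences."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)  # the 12-minute monologue, in two sentences
```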
Legal Geek is hosting two more conferences this year – learn more on our events page.

Written by Joanna Goodman, tech journalist
Photo credit (Joanna): Sam Mardon