Everything changes
William Gibson’s observation that “the future is already here – it’s just not evenly distributed” is still apposite. July saw significant developments in GenAI: on 17 July OpenAI launched ChatGPT Agent, an agentic tool that carries out multi-step tasks on a user’s behalf, and GPT-5, expected to launch this month, will have o3’s reasoning capabilities built in.
Legal is in the early-adopter bubble when it comes to GenAI. A recent Pew Research Center study found that only 34% of US adults use ChatGPT, although usage skews by age and education: 58% of adults under 30 and 52% of those with a postgraduate degree have used it. However, only 28% of respondents had used ChatGPT for work. This may be because they use a different GenAI platform, or because GenAI isn’t useful for the type of work they do. But last week Google introduced AI Mode – a search interface similar to ChatGPT – into its UK search engine (the future is already here in the US and Japan), and pretty much everyone uses Google.
This is a potential GenAI tipping point, as Google’s AI Overview, the AI summary that appears at the top of search results, is leading to fewer click-throughs. Another Pew Research Center study found that users clicked on a traditional result in just 8% of searches with an AI summary, slightly more than half the 15% click rate on pages without one. And SEO providers report that organic visibility declines as people click on AI Overview citations rather than scrolling down the results, perhaps making SEO and advertising less relevant. So to some extent Google is gambling its considerable advertising revenues on its GenAI-powered search becoming as ubiquitous as its original search engine. The risk is that Google could eventually lose its dominance as people migrate to their preferred GenAI model, which could be ChatGPT or Anthropic’s Claude – as long as these remain free to access.
Privacy and privilege
As OpenAI’s models become more capable and more popular, they raise more privacy issues. Last week, Fast Company reported that shared ChatGPT conversations – those where a user had created a shareable link and ticked the option to make the chat discoverable – were appearing in Google search results, although the results did not identify the user. The following day, OpenAI removed the discoverability option.
Another way in which OpenAI is looking to extend its influence is directly relevant to lawyers, as it relates to legal privilege. An article by Jason Snyder in Forbes refers to the New York Times lawsuit against OpenAI, in which the NYT asked the courts to compel OpenAI to retain all user content – which again raises privacy concerns for ChatGPT users. On last week’s episode of This Past Weekend w/ Theo Von, OpenAI CEO Sam Altman acknowledged that the absence of legal privilege or confidentiality for ChatGPT conversations could be a blocker to broader adoption.
But that, too, raises questions, including legal ones. Snyder asks, “If AI chats have legal privilege, what does that make the system listening on the other side?” As he explains, there is no global consensus on regulating AI memory or interaction. “While the EU AI Act includes transparency mandates, there is so far no regulation on what it means to interact with a memory enabled AI. Which begs another question – where are the privacy rights in relation to AI memory? Are advertisers also getting access to ‘private’ conversations and shared information?” Snyder’s concern draws on his long experience of creating experiential advertising campaigns for global entities. He warns: “We are entering a phase where machines will be granted protections without personhood and influence without responsibility…This isn’t just about ethics. It’s about enforceable, mutual accountability.”
If AI is the future…
The children are our future, sang Whitney Houston in the 1980s – but as comedian Tim Dillon put it on Steven Bartlett’s Diary of a CEO podcast, “The children are no longer the future. The future is AI,” adding, “Who’s reviving the economy of San Francisco? The children? No. AI.” He has a point. AI is certainly the leading player in the Ministry of Justice’s future plans: the executive summary of its AI Action Plan for Justice, published last week, doesn’t mention legal service providers until its last paragraph!
As AI continues to eat the world, law firms and incumbent legal tech vendors are trying to keep up with the pace of change by establishing partnerships with leading AI companies like Harvey and Legora. But as AI takes on legal processes, workflows and even negotiations, it raises the training question: how are lawyers going to gain sufficient expertise to handle high-value work? The answer, of course, is AI, as evidenced by Flex Legal and BARBRI training programmes, and various law firm initiatives supported by legal AI vendors.
As Richard Susskind predicted, AI is creating new roles in legal, with firms increasingly employing AI leads, data scientists and prompt engineers. But these roles are technical rather than vocational – most lawyers I meet were drawn to law because they like getting things right, whether that means protecting human rights or, on the transactional side, getting deals done properly. The risk is that if the roles that inspire vocational aspiration shrink, the profession will struggle to attract the best candidates. Right now the hook is money, which would explain why promising associates at top law firms are becoming legal tech/AI founders – the amounts being raised by legal AI start-ups are so high that even a magic circle or US white-shoe law firm wouldn’t attempt to match them.
Avoid cognitive offloading!
“Is AI making us stupid?” An MIT Media Lab study found strong evidence that using AI tools for writing tasks leads to lower brain activity, potentially eroding critical-thinking skills. Researchers divided participants into three groups – one using ChatGPT, one using Google search, and one relying on their brains alone – and asked them to write and rewrite essays over several months while an EEG recorded their brain activity. The ChatGPT group became lazier with each subsequent essay and consistently underperformed. However, another academic study, published in the Harvard Business Review (HBR) in May, suggests that GenAI makes people more productive – and this is certainly happening in corporate legal, where small teams are handling more work by offloading routine tasks to GenAI. The hidden danger identified by the HBR study is that when people rely on AI, they become less motivated to perform. The message here may be: use AI productivity tools, but write your own emails!
Legal Geek is hosting two more conferences this year – learn more on our events page.
Written by Joanna Goodman, tech journalist
Photo credit (Joanna): Sam Mardon