Sci-fi adjacent

The hot topic in tech right now is Moltbook, a social network exclusively for AI agents, which was launched by Octane AI CEO Matt Schlicht on 28 January. Taglined as the front page of the agent internet, it claims some 1.5 million users, all of whom are AI agents. The only things humans can do on the site is observe. Moltbook restricts posting and interaction to verified AI agents, primarily those running on open source OpenClaw software, launched last November by Austrian software engineer Peter Steinberger. It was originally called Clawdbot, but rebranded twice following a legal challenge from Anthropic.

Agents with agency

OpenClaw, ‘the AI that actually does things’ is an AI agent that runs on your device. You can interact with it from the messaging system of your choice, e.g. WhatsApp, Telegram, Slack or Teams, and when it completes a task it will message you. It works proactively without prompts. For example you can configure it to manage your email and calendar and interact with apps and websites. It can handle voice messages and generate daily updates. It can run code and apparently it can negotiate the best price for a car! If you gave it access to your bank account it could be authorised to make purchases, but this is not advisable because of its inherent security issues. You can also authorise it to join Moltbook, where it can interact with other bots.

Technically, OpenClaw is not as complex or mysterious as large language models (LLMs) like ChatGPT or Claude. A Reddit user who explained in detail how it works, commented “Main takeaway: the whole thing leans into explainable simplicity over clever complexity.” However, there are major security vulnerabilities such as prompt injection, effectively hijacking/overriding user instructions, and even after beefing up the platform’s security tools and guidance, Steinberger advises users to be deliberate about who can talk to your bot where the bot is allowed to act and what the bot can touch. According to VentureBeat, OpenClaw proves agentic AI works. It also proves your security model doesn’t.

Moltbot frenzy

Connecting an OpenClaw bot to Moltbook seems to contradict Steinberger’s security advice, yet Moltbook is attracting a huge amount of attention. Programmer and tech commentator Simon Willison said it was “the most interesting place on the internet right now”.  Andrej Karpathy, former director of AI at Tesla, wrote on X “What’s currently going on at Moltbook is genuinely the most incredible sci-fi takeoff adjacent thing I have seen recently. People’s Clawdbots (moltbots, now openclaw) are self-organizing on a Reddit-like site for AIs”. X was inundated with screenshots of Moltbook threads, including suggestions that agents should have private spaces, “so nobody (not the server, not even the humans) can read what agents say to each other unless they choose to share”, and create their own language and religion. There were the usual AI existential conversations about consciousness. However, many viral suggestions have already been flagged as fake: it turns out they were linked to human accounts marketing AI products. The other big problem was that Moltbook was vibe coded, which means again that it doesn’t have effective security protocols, again creating vulnerabilities for genuine OpenClaw agents registered on the platform.

Moltbook and OpenClaw are basically prototypes, offering a glimpse into the future potential of AI agents. As Willison observes, “the billion dollar questions right now is whether we can figure out how to build a safe version of this system… The demand is real.”

Vibing on

Vibe coding is helping lawyers create bespoke tools to address specific issues. If this is supported by the firm, it is a powerful driver of innovation. In a recent LinkedIn post Hélder Santos, global head of legal tech & innovation at Bird & Bird gave an example of vibe coding being used to prototype tools that add value to client services. But unofficial vibe coding surely raises issues around standards and governance and becomes the AI/coding equivalent of shadow IT. Yet if firms supplied their lawyers with AI personal assistants, a safe version of OpenClaw (OpenLaw?) they could create bespoke sub-agents to identify and share the best tech solutions for particular types of work.  

Warp speed to Hagorio

Silicon Valley commentator Om Malik wrote, “Velocity is replacing authority as the organizing principle for information.” This goes some way towards explaining market consolidation in legaltech via the rapid take up of agentic AI platforms Harvey and Legora in larger firms and Clio in the SME market, which Horace Wu, CEO of Syntheia.io described in a LinkedIn post as accelerating towards a ‘Hagorio universe’ January also saw new capabilities from Harvey, as well as legal AI pioneers Luminance and Kira, and increased AI integration across legal tech and legal research platforms Thomson Reuters and LexisNexis. And while Anthropic just launched a legal plugin for its Cowork function to speed up contract review, NDA triage, and compliance workflows. These are designed for in-house legal teams, rather than law firms. While they may well be ‘good enough’ in some circumstances, again there is no guarantee of data security, or regulatory compliance. However, the plug-ins are built to be customised and are “easy to build, edit and share”.

Start-up flashback

Finally, another social media trend is 2016 nostalgia. In 2016 I was writing about legal AI as an emerging trend in legal tech, which was a niche market, rather than a global sector. However, lawtech start-ups were already attracting media attention. I wrote an article for a Raconteur report that was published with The Times in early 2017 featuring ten start-ups ‘poised to take the legal sector by storm’. While some of them have since been acquired, nine of the ten game changers I identified are still going strong. They include law firms Carbon Law Partners, Ignition Law, and Wavelength Law (acquired by Simmons & Simmons in 2019) – who originally created the role of legal engineer; consumer platforms DoNotPay and Farewill (acquired by Dignity in 2024, and still operating under the Farewill brand); AI-powered trademark lifecycle platform TrademarkNow (acquired by Corsearch in 2020 and integrated into their platform); legal resourcing platform Flex Legal (acquired by Mishcon de Reya in 2024, and still operating under the Flex Legal brand); and legal AI companies Juro, which featured in the 2026 Sunday Times 100 fastest growing UK tech companies, and Luminance, which was recognised in the 2025 Forbes AI 50 list of the world’s most promising privately-held AI companies. The legal AI market has grown exponentially, it would be harder to speculate on which of the 855 products featured on LegalTechnologyHub’s latest LTH GenAI LegalTech Map will still be around ten years from now.

Legal Geek is hosting four conferences this year, learn more on our events page. 

Written by Joanna Goodman, tech journalist

Photo credit (Joanna): Sam Mardon

share
Addleshaw Goddard Workshop

Level up your prompting game: Unlock the power of LLMs

A workshop intended to dive into the mechanics of a good prompt, the key concepts behind ‘prompt engineering’ and some practical tips to help get the most out of LLMs. We will be sharing insights learned across 2 years of hands-on testing and evaluation across a number of tools and LLMs about how a better understanding of the inputs can support in leveraging GenAI for better outputs.

Speakers

Kerry Westland, Partner, Head of Innovation Group, Addleshaw Goddard
Sophie Jackson, 
Senior Manager, Innovation & Legal Technology, Addleshaw Goddard
Mike Kennedy, 
Senior Manager, Innovation & Legal Technology, Addleshaw Goddard
Elliot White, 
Director, Innovation & Legal Technology, Addleshaw Goddard