News You Can Use

Edition 39 · 1st - 14th April 2026

Opening

The frontier moved in three different directions this fortnight, but each is a different shape of the same thing: the product is moving faster than the institutions meant to buy it. Capability is outrunning governance, and it is outrunning the literacy needed to use it well.

Deep Dives

Three stories worth sitting with

Harvey's Spectre Agent and the Self-Improving Legal AI

Artificial Lawyer - Spectre|Artificial Lawyer - Harness Engineering

What
Spectre is an autonomous company agent that operates without a human prompt. It monitors the business and acts on incidents, bug reports, customer feedback, and internal signals. It sits on what Harvey calls a "law firm world model" - a live picture of what is happening inside an organisation and what needs to happen next. The same week, Harvey published research on "harness engineering" - agents teaching themselves to do legal work through iterative self-improvement. On internal benchmarks, average scores jumped from 40.8% to 87.7% with no model update and no manual prompt engineering. Seven of twelve tasks exceeded 90%. The agents generated their own legal-specific tools along the way: cross-document playbooks, validation checkpoints, structured fact sheets. The headline is self-improvement. Given good examples and a rubric, the agents got meaningfully better at legal work by themselves.
So what
Capacity is no longer the constraint. The technology can do the work. The question for innovation leaders is whether the firm has the people with the judgement to decide what needs to be done and what good looks like. Given examples and a rubric, agents now auto-generate their own toolkits and improve without a model update. The compounding advantage lives with the firm that owns the examples and the rubric. Static prompt libraries are already behind. And with Spectre removing the human trigger, the bottleneck shifts to review, prioritisation, and coordination, all of which are judgement work. Partners have a different job. Juniors have a different apprenticeship. The question worth answering is how to rebuild supervision, judgement, and pricing around a workforce that can produce more than the firm can currently absorb.
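Harvey has not published the harness internals, but the loop the research describes - run the task, score the output against a rubric, keep only the changes that raise the score - is a plain greedy search. A toy sketch, where every name (run_task, rubric_score, improve_harness) and the keyword rubric are purely illustrative, not Harvey's actual system:

```python
import random

def rubric_score(output: str, rubric: dict) -> float:
    """Score an output against a simple keyword rubric (a toy
    stand-in for a graded legal-work rubric)."""
    hits = sum(1 for term in rubric["required_terms"] if term in output)
    return hits / len(rubric["required_terms"])

def run_task(prompt: str, harness: list[str]) -> str:
    """Stand-in for an agent run: the 'harness' is the list of
    instructions the agent has accumulated so far; here it just
    appends them to the output."""
    return prompt + " " + " ".join(harness)

def improve_harness(prompt: str, rubric: dict, candidates: list[str],
                    iterations: int = 30) -> tuple[list[str], float]:
    """Greedy self-improvement: try adding one candidate instruction
    per iteration, and keep it only if the rubric score rises."""
    harness: list[str] = []
    best = rubric_score(run_task(prompt, harness), rubric)
    for _ in range(iterations):
        trial = harness + [random.choice(candidates)]
        score = rubric_score(run_task(prompt, trial), rubric)
        if score > best:
            harness, best = trial, score
    return harness, best
```

The real version would replace the keyword rubric with graded examples of good legal work and let the agent propose its own candidate instructions and tools, but the control flow is the same shape: no model update, just a harness that improves against a fixed standard.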

The Vendor Layer Splits: Microsoft Goes Multi-Model, Anthropic Goes Direct

GeekWire|Artificial Lawyer - Claude for Word

What
Microsoft 365 Copilot's Researcher agent can now use OpenAI's GPT to draft a response, then have Anthropic's Claude check it for accuracy and citations before finalising. Two features make this work: Critique (one model checks another's output) and Council (multiple models collaborate). It ships as part of the Microsoft 365 E7 "Frontier Suite". Three days later, Anthropic launched Claude for Word in beta with legal contract review as the flagship use case. It ships through Anthropic's own admin console rather than the Office marketplace and is limited to Claude Team and Enterprise plans.
So what
Microsoft's bet is that the platform layer abstracts the model layer. Anthropic's bet is that frontier labs do not have to accept being routed to. Both can be right at once, and they imply very different market structures three years out. The "which AI provider do we bet on" question is answered: you do not. Any strategy built around a single named vendor is vulnerable. Design around tasks, workflows, and data governance. Microsoft's public admission that no single model is reliable enough is also an architectural endorsement of verification-by-design. Any firm workflow that outputs AI-drafted content without a second-model check is the odd one out. The legal AI vendor moat has narrowed with it. Claude for Word meets lawyers in Word (like the Legal AI plugins), with a frontier model behind it, for the price of a Claude Enterprise seat. Vendors with deep workflow orchestration, proprietary data, and integration plumbing will survive the compression. The ones selling a thin wrapper on a frontier model will feel this first. Procurement teams that have been avoiding the "which AI stack are we building on" conversation are about to have it forced on them.
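Microsoft has not published the internals of Critique, but the pattern it names - one model drafts, a different model reviews before anything is finalised - is straightforward plumbing. A minimal sketch, in which every name (critique, draft_model, review_model, Reviewed) is hypothetical rather than Microsoft's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Reviewed:
    draft: str
    issues: list[str]
    approved: bool

def critique(prompt: str,
             draft_model: Callable[[str], str],
             review_model: Callable[[str, str], list[str]],
             max_rounds: int = 3) -> Reviewed:
    """Draft with one model; have a second, independent model list
    concrete issues; redraft until the reviewer finds nothing or the
    round budget runs out."""
    draft = draft_model(prompt)
    issues = review_model(prompt, draft)
    for _ in range(max_rounds):
        if not issues:
            return Reviewed(draft, [], approved=True)
        draft = draft_model(prompt + "\nFix these issues: " + "; ".join(issues))
        issues = review_model(prompt, draft)
    # Out of rounds: nothing ships automatically; surface remaining issues.
    return Reviewed(draft, issues, approved=not issues)
```

In practice draft_model and review_model would wrap calls to different vendors' models; the point of the pattern is that the reviewer is never the model that produced the draft.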

Anthropic's Mythos Finds Zero-Days. Anthropic Locks It in a Box.

TechCrunch|Anthropic - Project Glasswing

What
Claude Mythos Preview is a new frontier model with cybersecurity capabilities that surprised even Anthropic's own researchers. During internal testing, Mythos autonomously discovered thousands of zero-day vulnerabilities across every major operating system and browser, including a 17-year-old remote code execution flaw in FreeBSD that the entire security industry had missed. Rather than shipping commercially, Anthropic launched Project Glasswing: a restricted programme limiting access to around 50 partner organisations including Amazon, Apple, Microsoft, CrowdStrike, and the Linux Foundation, backed by $100M in usage credits and $4M to open-source security organisations. The model scores 93.9% on SWE-bench Verified, higher than any publicly available model, and it is not being sold. It is being deployed as defensive infrastructure. Sceptics counter that zero-day discovery is already industrialised through fuzzing, that severity matters more than raw volume, and that a capability-so-dangerous-we-cannot-sell-it narrative is convenient positioning in Anthropic's pre-IPO window. Both reads can hold at once.
So what
Whatever is frontier today is commodity within 12-24 months. If Anthropic has built a model that finds vulnerabilities humans missed for 17 years, others are building it too, and not all of them will withhold. For law firms, which remain high-value targets (Jones Day was hit by Silent Ransom Group for a $13M ransom this same week), the attack surface just changed. Any cybersecurity posture that assumes human-speed offensive tooling is already behind. The firms investing in AI-augmented security testing now are buying time. Anthropic's decision to withhold the model from the market sets a separate precedent. The Bank of England has already added Mythos to its next Cross Market Operational Resilience Group agenda with Treasury, FCA, and NCSC attending, making it the first UK regulator to treat a frontier model as an operational-resilience question rather than a compliance one. For any innovation function that still thinks of cybersecurity as an IT problem, the Mythos moment closes that gap. Invest in the capability and invest in the governance to use it safely. Do not assume the next frontier lab will choose responsibility over revenue.

Worth Reading

Everything else worth a click

- Market Moves

Crosby Raises $60M Series B for AI-Native Law Practice

Not a software vendor. An AI-native "law firm" backed by Lux, Index, and Sequoia. Claims $1B+ in contracts negotiated since emerging from stealth less than a year ago, plus simulations that predict counterparty responses to redlines. Total raised: $85.8M.

Orbital Launches Its Own Law Firm, "Farringdon"

UK real estate legal tech vendor Orbital launches an SRA-regulated conveyancing firm staffed by six lawyers and conveyancing engineers, taking instructions from May. Orbital will continue to sell software to its new law firm competitors. The fourth AI-native law firm announced this quarter alongside Crosby, NewMod, and Innanen's Lavern.

- Agents and Automation

Innanen Builds an Agentic Law Firm on a Mac Mini

Antti Innanen, a Finnish lawyer who has built real law firms before, created one staffed by 66 AI agents - lawyers, engineers, designers. Intake, decomposition, specialist routing, internal debate, synthesis. He has given it 30 days to find a commercial home; otherwise the whole thing gets open-sourced. Product at lavern.ai.

Axiom Partners with Harvey - ALSP Meets AI

Axiom's 14,000 on-demand lawyers now trained on Harvey. Pitch: Harvey-ready talent at 50% lower rates than top firms. The ALSP model has evolved from flexible staffing to AI-augmented flexible staffing. For in-house teams, a meaningful alternative to both BigLaw and DIY.

Clio Adds Agentic AI and Launches Vincent Mobile

Lawyers can now issue goal-oriented instructions ("find everything that could kill this deal") and the system executes multi-step workflows. 84% of AI queries are now submitted as goal-based requests. The mid-market is getting the same agentic capabilities as BigLaw.

- Adoption and Practice

Thomson Reuters: GenAI Use Nearly Doubles to 40%

Fourth annual report, 1,500+ professionals. Legal professionals expect to free up 240 hours/year (up from 200 in 2024). Only 15% use agentic AI but 53% are planning or considering it. The gap between "using AI" and "using agents" is where the next wave of adoption will play out.

Washington Post: 60%+ of Federal Judges Now Using AI

Texas federal judge Xavier Rodriguez feeds filings into AI for case timelines, suggested hearing questions, argument weaknesses, and sometimes drafts rulings. When the judiciary adopts AI faster than the profession appearing before it, courtroom practice changes.

Lloyds Becomes First FTSE 100 Firm to Put AI in the Boardroom

Agent built by UK governance-tech vendor Board Intelligence with access to confidential board papers, supporting directors on M&A, sustainability, cyber and financial analysis. Lloyds cited £50m of GenAI value in 2025 and is targeting £100m in 2026. A different adoption pattern from the "AI as faster drafter" framing.

Krishnan Nair: "When Clients Learn to Love AI"

The Global Lawyer argues clients are now using AI more aggressively than their own law firms. The best short read of the fortnight on the shifting balance of AI fluency between buyer and seller.

- Quality and Risk

A Gaming CEO Used ChatGPT to Dodge $250M - and the Court Found Everything

Delaware Court of Chancery ruled against Krafton CEO Changhan Kim, who used ChatGPT to design a corporate takeover strategy to cancel a $250M earnout, then deleted the logs. Vice Chancellor Lori Will: executives are "expected to exercise independent human judgment - not outsource good-faith decisions to an AI." Required reading for every governance committee. Chat logs are discoverable. Deletion makes it worse. "The chatbot told me to" is not a defence. Spellbook analysis.

$145K in AI Hallucination Sanctions in Q1 2026 - NPR Takes It Mainstream

Damien Charlotin's database now past 1,200 documented cases worldwide, ~800 from US courts. Annualised rate approaching 1,400 cases in 2026, triple 2025's volume. Landmark Oregon case at $109,700. Sixth Circuit $30K. When NPR covers it, the story has crossed from industry concern to public awareness - which changes the conversation with clients, courts, and regulators.

Bloomberg: Only 23% of In-House Lawyers Use AI Daily

27% have not used AI tools in the past six months. One-third of regular users say it saves less than 30 minutes per day. Top barriers: unreliable outputs (49%), ethical concerns (49%), security risks (48%). Adoption headlines look different when you ask about daily reality.

- Critical Perspectives

Altman Publishes Industrial Policy for the Intelligence Age

OpenAI's 13-page blueprint proposes a Public Wealth Fund seeded by AI companies, robot/automation taxes, 32-hour workweek pilots framed as "efficiency dividends," automatic safety-net triggers, portable benefits decoupled from employers, and a "Right to AI" framing treating access as foundational infrastructure. It sits on top of Altman's personally funded UBI study showing $1,000/month produces modest, mixed results: recipients work slightly less but value work more, pursue more education, and start more businesses. The thinking has evolved from cash transfers in 2021 to structural reform in 2026. The "efficiency dividends" framing is politically astute for anyone selling workforce transformation to a partnership or board.

- Regulation and Policy

The UK AI Paradox: Sovereign Fund Launches as OpenAI Pulls Investment

On one side: the Chancellor's $500M Sovereign AI Fund formally launches 16 April at Wayve's offices as part of a $2.5bn government AI and quantum commitment. On the other: OpenAI paused its UK Stargate data centre citing regulatory uncertainty and energy costs, particularly around copyright. The government has adopted a "wait and see" approach and the AI Bill is delayed until after the next King's Speech. Meanwhile New Statesman reveals AI-generated text has already been incorporated into an Act of Parliament. Plan for regulatory ambiguity through 2026.

Bank of England to Brief UK Banks on Anthropic's Mythos

BoE is adding Mythos to the next Cross Market Operational Resilience Group and CMORG AI Taskforce agenda within two weeks, with Treasury, FCA, and NCSC attending. First UK regulator move treating a frontier model as operational-resilience risk rather than a compliance question.

- Technical Developments

Google Releases Gemma 4 - Open Source, Edge-Ready, Agentic

Four model sizes (2B to 31B parameters), Apache 2.0 licence, 256K context, native vision and audio, 140+ languages. Flagship 31B model ranks #3 on Arena AI's text leaderboard at 1452 Elo - outperforming models twenty times its size. Runs entirely offline on phones, Raspberry Pi, and NVIDIA Jetson. When frontier-grade reasoning runs on a phone with no API call, the deployment assumptions for legal AI shift materially.

ServiceNow Goes "AI Everywhere"

Every ServiceNow product now ships with AI, data, workflow, security, and governance by default. Context Engine in preview, Build Agent skills GA from 15 April, SDK lets developers create ServiceNow apps directly from Cursor, Copilot, or Codex. Firms running ServiceNow for ITSM and ESM now get agents by default - the platform-level agent substrate story is no longer just a Microsoft story.

- Security

Jones Day Hacked - Silent Ransom Group Accesses Client Data

Breach via the Accellion file transfer platform. Files of 10 clients accessed. $13M ransom demanded. SEC investigating whether material non-public information was exposed. A reminder that law firms remain a high-value target for sophisticated threat actors - and a direct counterpart to the Mythos deep dive above.

Anthropic v Pentagon - Appeals Court Denies Bid to Block Blacklisting

The standoff over Anthropic's refusal to grant unfettered military access to Claude continues. A separate San Francisco judge granted a preliminary injunction blocking enforcement. Defence contractors are dropping Claude. The question of where AI companies draw ethical lines is playing out in federal court.

- Practitioner Voices

Sam Harden: "The Legal UI Revolution"

The real AI revolution is not faster documents. It is "single-serving legal software" - bespoke, disposable apps built for a single trial, deal, or crisis. Demonstrates building two interactive legal tools in 30 minutes. What previously cost months and five figures now takes half an hour.

Benedict Evans on AI Economics

$400B spent by the Big Four platforms on AI last year, $650B budgeted for 2026. Describes AI as "a normal technology that will take time." On the supply crunch for GPUs, electricity, and HVAC: it has been "basically impossible to build enough capacity since ChatGPT launched."

Dean Ball: "New Sages Unrivalled"

Uses Mythos to argue the "adolescent period" of AI policy is over. Six-point framework: strengthen cyberdefences, end the Anthropic-Pentagon standoff, enhance semiconductor controls, pass transparency requirements, adopt third-party audits. The most thoughtful policy piece to come out of Glasswing.