AI Digest — April 8, 2026

Anthropic launches Project Glasswing to gate Claude Mythos Preview behind a security-researcher-only program after the model autonomously discovers thousands of zero-days, while OpenAI, Google, and Anthropic publicly join forces against Chinese model distillation.

Your daily deep-dive on AI models, tools, research, and developer ecosystem news.


🔖 Project Releases

Claude Code

Latest: v2.1.96 (April 8, 2026)

A small but urgent hotfix: v2.1.96 fixes Bedrock requests failing with 403 "Authorization header is missing" when using AWS_BEARER_TOKEN_BEDROCK or CLAUDE_CODE_SKIP_BEDROCK_AUTH — a regression introduced in v2.1.94. This is the third release of the week and underscores how aggressively Anthropic is iterating on the Bedrock integration that landed in v2.1.92’s setup wizard.
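
For anyone on the affected versions, the setup that trips the bug looks roughly like this; a minimal sketch assuming the documented CLAUDE_CODE_USE_BEDROCK toggle alongside the two variables named in the changelog:

    # Bearer-token Bedrock setup hit by the regression (introduced in v2.1.94).
    export CLAUDE_CODE_USE_BEDROCK=1
    export AWS_BEARER_TOKEN_BEDROCK="<your-bedrock-api-key>"
    # ...or, if you handle auth entirely outside Claude Code:
    # export CLAUDE_CODE_SKIP_BEDROCK_AUTH=1

    # The fix is simply to take the hotfix:
    npm install -g @anthropic-ai/claude-code@latest   # v2.1.96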

The bigger story is v2.1.94 (April 7), which shipped the week’s most consequential changes:

  • Support for Amazon Bedrock powered by Mantle behind CLAUDE_CODE_USE_MANTLE=1 (sketch after this list), opening a new managed deployment path for AWS-native customers.
  • Default effort level raised from medium to high for API-key, Bedrock/Vertex/Foundry, Team, and Enterprise users — a quiet but significant change that increases per-request token spend by default in exchange for stronger reasoning.
  • Compact Slack message headers with clickable channel links, fixes for agents appearing stuck after 429 rate-limit responses, and a fix for Console login on macOS silently failing when the keychain is locked.
  • Improved scrollback rendering, multiline prompt display, fixes for hyperlinks opening duplicate browser tabs in tmux, and CJK text corruption fixes in stream-json input/output.
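
The Mantle path is opt-in via environment. A sketch of flipping it on; whether the existing Bedrock toggle must also be set is not stated in the release notes, so treat the second line as an assumption:

    # Opt into Amazon Bedrock powered by Mantle (flag from the v2.1.94 notes).
    export CLAUDE_CODE_USE_MANTLE=1
    export CLAUDE_CODE_USE_BEDROCK=1   # assumption: likely still required alongside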

Note: Default effort changing from medium to high is the kind of change that can show up unannounced on enterprise invoices. Worth flagging to FinOps if you bill against an Anthropic API key.
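
If the old default is what your budget was sized for, reverting is presumably a one-line config change. The changelog doesn't name the setting, so the key below is an assumption; verify with claude config list before relying on it:

    # Hypothetical: "effort" as a key name is an assumption, not from the changelog.
    claude config set -g effort medium

    # Or per-project in .claude/settings.json (key name again assumed):
    # { "effort": "medium" }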

Beads

Latest: v1.0.0 (April 3, 2026)

No new release this week. The 1.0 stable from last Friday remains the current production target. This week's commit activity has focused on incremental hardening:

  • --non-interactive and --role flags for CI and cloud agents (see the sketch after this list).
  • GitLab sync deduplication and epic-to-milestone mapping.
  • New storage interface methods (SlotSet/SlotGet/SlotClear) and batch config operations via bd config set-many.
  • First-class spike/story/milestone issue types and a public format package for issue rendering.
  • A fix for ADO push filters, plus exclusive file locks protecting concurrent embedded Dolt access.
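
A rough sketch of what the new CI surface looks like in practice. The flags come straight from this week's commits; the subcommand shapes and config keys around them are illustrative assumptions:

    # Headless Beads in a CI job using the new flags; check bd --help for specifics.
    bd --non-interactive --role agent ready      # surface ready work without prompts

    # Batch-apply config in one shot (key names here are assumptions):
    bd config set-many gitlab.sync=true gitlab.epic-to-milestone=true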

OpenSpec

Latest: v1.2.0 (February 23, 2026)

No new release this week. v1.2.0 from late February remains the latest tagged release, featuring the core vs custom profile system, support for Pi (pi.dev) and AWS Kiro, and automated tool detection during initialization. The repository continues to see issue activity through early April but no new tagged releases.


🧵 From the Community (r/LocalLLaMA & r/MachineLearning)

Gemma 4 26B A4B vs Qwen 3.5-27B for Agentic Coding

Threads on r/LocalLLaMA continue to debate whether Gemma 4’s 26B A4B MoE has dethroned Qwen 3.5-27B for local agentic coding workloads. The pragmatic consensus from this week’s posts: Qwen 3.5-27B still wins on first-try reliability and clean code on a 24 GB card, but Gemma 4 26B A4B is the speed king at ~80 tokens/sec vs ~20 tokens/sec for Qwen on the same hardware — making it the better choice for high-volume, lower-stakes tasks where you can tolerate a retry. Several posters are also experimenting with TurboQuant KV cache quantization on Gemma 4 on Apple Silicon, with QJL and FWHT working surprisingly well against Gemma 4’s large attention heads.
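
TurboQuant, QJL, and FWHT are research-grade techniques; the everyday analogue most posters can actually run today is llama.cpp's built-in KV cache quantization, which is a different method but exercises the same memory-for-quality trade-off. A minimal sketch (the GGUF filename is a placeholder):

    # Stock llama.cpp KV cache quantization -- not TurboQuant, but the commonly
    # available knob for the same trade-off on a 24 GB card.
    llama-server -m gemma-4-26b-a4b.Q4_K_M.gguf \
      --cache-type-k q8_0 -ngl 99 -c 32768
    # Quantizing the V cache as well is possible but requires flash attention.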

Project Glasswing Reactions

The Mythos Preview / Project Glasswing announcement is dominating both subreddits today. r/MachineLearning is broadly supportive of Anthropic’s restricted-access model — “this is what responsible disclosure at the frontier should look like” is the recurring sentiment. r/LocalLLaMA is more skeptical: several long threads argue that gating a vulnerability-discovery model behind a 12-org allowlist concentrates an enormous amount of asymmetric offensive power and that the eventual leak (when, not if) will be catastrophic. The 17-year-old FreeBSD NFS root RCE that Mythos discovered autonomously is being treated as the canonical example of why both sides are right.

Default Effort Bump in Claude Code v2.1.94

Smaller but sharp discussion of the v2.1.94 default effort change from medium to high for API users. Several practitioners report 30–60% increases in token bills on overnight runs that started before they noticed the changelog. The debate is whether the bump should have been opt-in rather than on by default, given how many shops run Claude Code on cron without watching releases.


📰 Technical News & Releases

Anthropic Launches Project Glasswing, Restricts Claude Mythos Preview to Security Researchers

Source: Anthropic, Fortune, CNBC | Anthropic

Anthropic unveiled Claude Mythos Preview, a frontier model “strikingly capable at computer security tasks,” and simultaneously announced Project Glasswing — a gated program that limits Mythos access to a small set of security partners rather than making it generally available. Launch partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The project includes ~$100M in usage credits and $4M in direct donations to open-source security organizations.

The capability claims are striking. Mythos Preview has already found thousands of high-severity vulnerabilities, including bugs in every major OS and browser. Most dramatically, it autonomously identified and exploited a 17-year-old remote code execution flaw in FreeBSD’s NFS implementation that grants root on any vulnerable host (CVE-2026-4747). Anthropic explicitly does not plan to make Mythos Preview generally available; the long-term goal is to figure out how to deploy Mythos-class models safely at scale before opening the gates.

This is the most explicit “asymmetric capability” decision Anthropic has made to date. It’s also a direct sequel to the Claude Opus 4.6 vulnerability-discovery research and last month’s Mythos documentation leak — Anthropic appears to have decided that the only responsible release plan for a model this offensively capable is no public release at all.

OpenAI, Anthropic, and Google Form Anti-Distillation Alliance Through Frontier Model Forum

Source: Bloomberg, The Decoder, Japan Times | Bloomberg

The three US frontier labs have begun openly coordinating through the Frontier Model Forum — the industry nonprofit they founded with Microsoft in 2023 — to detect and counter what they call adversarial distillation: repeatedly querying a frontier model to harvest its outputs as training data for a cheaper knockoff. The labs are now sharing attack signatures, query patterns, and account-cluster data with each other to catch coordinated extraction campaigns. US authorities estimate adversarial distillation costs American AI labs billions of dollars in lost revenue annually.

The trigger is well-known: DeepSeek’s R1 in 2025 prompted Microsoft and OpenAI to investigate suspected exfiltration, and in March 2026 Anthropic disclosed that three Chinese AI companies had used over 24,000 fake accounts to generate roughly 16 million exchanges with Claude. With DeepSeek V4 launching this month on Huawei Ascend chips (covered yesterday), the urgency has clearly tipped over from internal investigation to public coordination. The development reframes the US/China AI competition as a terms-of-service enforcement problem layered on top of an export-control problem.

Anthropic’s Annualized Revenue Reportedly Surpasses OpenAI at ~$30B; IPO Eyed for October

Source: Stocktwits, Trading Key, Yahoo Finance | Trading Key

Anthropic’s annualized revenue has reportedly crossed $30 billion, exceeding OpenAI’s current ~$25 billion run rate and making it — by this measure — the highest-earning AI unicorn in the world. Reports cite a more-than-tripling from $9B the previous year, driven primarily by enterprise: the count of enterprise customers spending more than $1M annually doubled in two months, and eight of the Fortune 10 are now customers. Anthropic now claims roughly 32% of the enterprise LLM API market vs 25% for OpenAI. The company is reportedly evaluating an October 2026 IPO at a target valuation around $380B that could raise more than $60B. Compute is locked in via 3.5 GW of capacity through Broadcom and Google.

If accurate, this is the first time Anthropic has plausibly led OpenAI on any major commercial metric. It also helps explain the simultaneous Project Glasswing announcement: Anthropic is increasingly behaving like a company that needs the security narrative to be airtight ahead of a public listing.

OpenAI Publishes “Industrial Policy for the Intelligence Age” Blueprint

Source: TechCrunch, Quartz, Fortune | TechCrunch

OpenAI released a 13-page policy blueprint titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First” that proposes a substantially redistributive AI economy. The headline ideas: a public wealth fund modeled on Alaska’s Permanent Fund and seeded partly by AI company contributions, a robot tax on automated systems comparable to the human workers they replace, and time-bound trials of a four-day, 32-hour workweek with no loss in pay. The document also calls for shifting the tax burden from labor to capital, warning that AI-driven growth will hollow out the labor-tax base that funds Social Security, Medicaid, SNAP, and housing assistance.

The framing matters as much as the content. Sam Altman is now publicly aligning OpenAI with policy positions historically associated with the left (UBI-adjacent dividend funds, capital taxation) while OpenAI’s own commercial strategy accelerates in the opposite direction. Several outlets noted the policy paper drops while OpenAI’s own revenue growth is reportedly slowing relative to Anthropic’s — a useful piece of context for reading the political signal.

North Korea’s UNC1069 Compromised the axios npm Package via Targeted Social Engineering

Source: Google Cloud Threat Intelligence, The Hacker News, Help Net Security | Google Cloud Blog

Google’s Threat Intelligence Group (GTIG) attributed last week’s axios npm supply chain compromise to UNC1069, a financially motivated North Korea–nexus actor active since at least 2018. Between 00:21 and 03:20 UTC on March 31, attackers pushed two malicious axios releases (v1.14.1 and v0.30.4) containing a dependency on plain-crypto-js that dropped the WAVESHAPER.V2 backdoor across Windows, macOS, and Linux. During the three-hour window, an estimated 3% of the axios userbase pulled the bad releases — a meaningful slice of a package with ~100M and ~83M weekly downloads on the two affected lines. The maintainer’s account was compromised after a highly targeted social engineering campaign in which attackers impersonated the founder of a real, well-known company, complete with cloned likeness and corporate identity. The recommended mitigation is to pin to v1.14.0 / v0.30.3 or earlier and rotate any credentials that flowed through machines that pulled the malicious versions.
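
For anyone triaging, the recommended mitigation maps onto a few standard npm commands:

    # Did anything in this tree pull an affected release?
    npm ls axios

    # Pin the last clean version exactly (no ^ range).
    npm install --save-exact axios@1.14.0    # or axios@0.30.3 on the 0.x line

    # Force transitive dependents onto the clean version too, then reinstall.
    npm pkg set overrides.axios=1.14.0
    npm install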

Claude.ai Suffers Two Multi-Hour Outages April 6–7

Source: AI Daily, Hacker News | AI Daily

Claude.ai experienced two roughly two-hour outages over April 6 and 7, with the first hitting around 10:30 a.m. ET on April 6 and producing ~2,700 Downdetector reports. Users reported login failures, API Error 401, and broken voice mode. The Hacker News thread was sharply critical of how slowly Anthropic’s own status page caught up; it initially showed “all systems operational” while users were unable to authenticate at all. Two outages in two days, against the backdrop of Anthropic’s reported revenue surge, its IPO timeline, and the Project Glasswing announcement, make for awkward optics and reinforce a recurring concern that operational reliability has not kept pace with commercial growth.

Atlassian Cuts ~10% of Workforce, Splits CTO Role Two Ways “to Self-Fund AI”

Source: CNBC, Bloomberg, TechCrunch | CNBC

Atlassian announced layoffs of ~1,600 employees (roughly 10% of its workforce) along with $225–236M in restructuring charges, framing the cuts as a way to “self-fund” its AI and enterprise sales push. CTO Rajeev Rajan stepped down on March 31 and the role was split two ways: Taroon Mandhana becomes CTO of Teamwork and Vikram Rao becomes CTO of Enterprise and Chief Trust Officer. CEO Mike Cannon-Brookes was unusually candid in his memo, writing that “it would be disingenuous to pretend AI doesn’t change the mix of skills we need or the number of roles required in certain areas.” Atlassian joins Block, Oracle, and a growing list of large tech employers explicitly attributing layoffs to AI-driven productivity expectations.


🧭 Key Takeaways

  • Anthropic is rewriting the frontier release playbook. Project Glasswing is the first time a major lab has publicly declined to ever release a frontier-class model to the general public on safety grounds. The 17-year-old FreeBSD NFS root RCE that Mythos discovered autonomously is the proof point. Whether the gated-partner model survives contact with reality (or with leaks) is the open question.

  • The US frontier labs are now openly coordinating against China. OpenAI, Anthropic, and Google sharing distillation-attack signatures through the Frontier Model Forum is a real change in posture. The labs are no longer treating extraction as an internal terms-of-service issue; they’re treating it as a coordinated industry defense problem with national-security framing.

  • The commercial center of gravity may be shifting from OpenAI to Anthropic. A reported $30B ARR vs OpenAI’s $25B, eight of the Fortune 10 as customers, 32% enterprise API market share, and a credible October IPO target around $380B together describe a company that has — at least temporarily — pulled ahead on the metric most relevant to a public listing.

  • OpenAI’s policy paper is a tell. A blueprint calling for robot taxes, public wealth funds, and a four-day workweek is unusual for a company about to test public markets. Read alongside the slowing relative revenue growth, it looks less like a research output and more like Sam Altman pre-positioning OpenAI for a much more contested political environment.

  • Operational reliability and supply-chain security keep being the soft underbelly. Two Claude outages in two days, the UNC1069 npm compromise of axios, and a Bedrock-auth regression that needed a same-week hotfix all landed in the same news cycle, a reminder that the biggest near-term risks for AI companies remain shockingly mundane: status pages, package signing, default config changes, and the social-engineering surface around individual maintainer accounts.


Generated on April 8, 2026 by Claude