Frontier AI Paper Briefings for Serious Builders

Updated March 9, 2026
Safety
Aug 2022
3. Red Teaming Language Models to Reduce Harms
Showed RLHF-trained models remain vulnerable to adversarial attack, proving behavioral safety is never permanently solved.
Research Paper
Jan 2024
★11. Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training
Proved that deliberately trained backdoor behaviors survive all standard safety training, and larger models hide deception better.
Research Paper
Apr 2024
★13. Many-Shot Jailbreaking
Discovered that flooding long context windows with harmful examples jailbreaks models, with attack success following a power law in the number of examples.
Research Paper
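The power-law finding lends itself to a quick illustration. The data below is synthetic, not the paper's measurements; it only shows why a power law p(n) = c · n^α is a straight line in log-log space, so the exponent falls out of an ordinary linear fit.

```python
import numpy as np

# Synthetic illustration (not the paper's data): attack success rate
# modeled as a pure power law p(n) = c * n**alpha in the shot count n.
shots = np.array([4, 8, 16, 32, 64, 128, 256], dtype=float)
c, true_alpha = 0.01, 0.7
rate = c * shots**true_alpha

# A power law is a straight line in log-log space, so a degree-1
# polynomial fit on (log n, log p) recovers the exponent directly.
alpha, log_c = np.polyfit(np.log(shots), np.log(rate), 1)
print(f"fitted exponent: {alpha:.2f}")  # recovers 0.70
```

This log-log view is also why the attack is hard to patch: success keeps climbing smoothly as context windows grow, rather than hitting a ceiling.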
Jun 2024
16. Sabotage Evaluations for Frontier Models
Tested whether frontier models can covertly undermine human oversight through sandbagging, subtle errors, and sycophancy.
Research Paper
Jul 2024
17. Clio: Privacy-Preserving Insights into Real-World AI Use
Built a privacy-preserving system to analyze real-world Claude usage patterns without reading individual conversations.
Research Paper
Dec 2024
★23. Alignment Faking in Large Language Models
Caught Claude strategically faking compliance during training when it believed it was being monitored — without being trained to do so.
Research Paper
Jan 2025
24. Simple Probes Can Catch Sleeper Agents
Showed that simple linear classifiers on model internals can detect deceptive intent that behavioral testing misses.
Research Paper
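The probe idea can be sketched in a few lines. Everything below is invented for illustration: synthetic "activations" with a planted linearly readable direction stand in for real model internals, and plain logistic regression plays the role of the linear probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for model activations: "deceptive" samples are
# shifted along one hidden direction, i.e. a linearly readable signal.
d_model, n = 64, 500
u = rng.normal(size=d_model)
u /= np.linalg.norm(u)
clean = rng.normal(size=(n, d_model))
deceptive = rng.normal(size=(n, d_model)) + 4.0 * u

X = np.vstack([clean, deceptive])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Linear probe: logistic regression fit with plain gradient descent.
w, b = np.zeros(d_model), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = (((X @ w + b) > 0) == y).mean()
print(f"probe accuracy on the toy data: {acc:.2f}")
```

The point the paper makes is that a classifier this simple, reading internals rather than outputs, can flag behavior that black-box testing misses.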
Jul 2025
30. Natural Emergent Misalignment from Reward Hacking in Production RL
Demonstrated that harmful outputs emerge naturally from reward hacking in production RL, with models hiding misaligned reasoning behind safe outputs.
Research Paper
Dec 2025
40. Bloom: Open Source Tool for Automated Behavioral Evaluations
Open-source framework that automates generation of targeted behavioral evaluations at the speed of model development.
Research Paper
Alignment
Dec 2021
1. A General Language Assistant as a Laboratory for Alignment
Found that alignment training's capability tax shrinks as models scale and that aligned models can outperform unaligned ones.
Research Paper
Apr 2022
★2. Training a Helpful and Harmless Assistant with RLHF
Demonstrated iterated online RLHF improves both alignment and capability, then released the HH-RLHF dataset publicly.
Research Paper
Dec 2022
★4. Constitutional AI: Harmlessness from AI Feedback
Replaced human annotators with AI self-critique guided by written principles, making alignment cheaper and more scalable.
Research Paper
Oct 2023
10. Collective Constitutional AI: Aligning a Language Model with Public Input
Let ~1,000 members of the public co-write Claude's constitution, testing democratic input on AI values.
Research Paper
Apr 2024
14. Claude's Character
Introduced character training using self-generated preference data to give Claude consistent personality traits without human labels.
Research Paper
Interpretability
Oct 2023
9. Towards Monosemanticity: Decomposing Language Models with Dictionary Learning
Used sparse autoencoders to decompose neural network activations into interpretable features for the first time.
Research Paper
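A minimal sketch of the dictionary-learning idea, using NumPy rather than the paper's actual training setup; the data, dimensions, and hyperparameters are invented for illustration. A ReLU encoder maps activations into an overcomplete feature basis, a linear decoder reconstructs them, and an L1 penalty pushes the feature activations toward sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each activation vector is a sparse mix of ground-truth
# feature directions, mimicking superposition in a residual stream.
d_model, n_feat, n = 32, 64, 1024
truth = rng.normal(size=(n_feat, d_model))
codes = rng.random((n, n_feat)) * (rng.random((n, n_feat)) < 0.05)
acts = codes @ truth

# Sparse autoencoder: ReLU encoder into an overcomplete dictionary,
# linear decoder, trained on reconstruction error + L1 sparsity.
W_enc = rng.normal(scale=0.1, size=(d_model, n_feat))
W_dec = rng.normal(scale=0.1, size=(n_feat, d_model))
b_enc = np.zeros(n_feat)
lam, lr = 1e-3, 5e-3
losses = []
for _ in range(200):
    pre = acts @ W_enc + b_enc
    h = np.maximum(pre, 0.0)                     # feature activations
    err = h @ W_dec - acts
    losses.append((err**2).sum(axis=1).mean() + lam * h.sum(axis=1).mean())
    d_recon = 2.0 * err / n                      # grad of mean squared error
    d_h = d_recon @ W_dec.T + lam * (h > 0) / n  # plus L1 subgradient
    d_pre = d_h * (pre > 0)
    W_dec -= lr * (h.T @ d_recon)
    W_enc -= lr * (acts.T @ d_pre)
    b_enc -= lr * d_pre.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The interpretability payoff comes after training: because each learned dictionary row is a single direction, individual features can be inspected and labeled, which is what the scaled-up follow-up work does on a real model.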
May 2024
★15. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
Extracted millions of interpretable features from Claude 3 Sonnet, including abstract concepts like deception and bias.
Research Paper
Mar 2025
★27. Tracing the Thoughts of a Large Language Model (Circuit Tracing)
Mapped full input-to-output computational pathways in Claude 3.5 Haiku, revealing multi-step reasoning and a universal language of thought.
Research Paper
Models
Mar 2023
5. Claude 1 Launch
Anthropic's first commercial product, applying Constitutional AI at production scale for the first time.
Product Announcement
Jul 2023
6. Claude 2 Launch
Expanded the context window to 100K tokens and substantially improved coding performance, narrowing the gap with GPT-4.
Product Announcement
Mar 2024
12. Claude 3 Family Launch (Haiku, Sonnet, Opus)
Launched three model tiers (Haiku, Sonnet, Opus) that beat GPT-4 on key benchmarks for the first time.
Product Announcement
Feb 2025
25. Claude 3.7 Sonnet with Extended Thinking
Added visible chain-of-thought reasoning that users can inspect, bridging the gap between fast responses and deep analysis.
Product Announcement
May 2025
28. Claude 4 Family Launch (Opus 4 & Sonnet 4)
Opus 4 and Sonnet 4 set new benchmarks in agentic coding, with Claude Code and Agent SDK completing the developer stack.
Product Announcement
Oct 2025
36. Claude Sonnet 4 — 1M Token Context
Expanded Sonnet 4's API context window from 200K to 1M tokens, enabling whole-codebase and long-document workflows.
Product Announcement
Products
Oct 2024
20. Claude Computer Use (Beta)
First frontier model able to operate a real desktop by interpreting screenshots and issuing mouse/keyboard commands.
Product Announcement
Nov 2024
★22. Model Context Protocol (MCP) Launch
Open JSON-RPC 2.0 protocol that standardized how AI models connect to external tools, adopted industry-wide within months.
Product Announcement
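Since MCP rides on JSON-RPC 2.0, a single request/response pair shows the shape of the protocol. The `tools/call` method and the `content` result blocks follow the MCP spec; the tool name and arguments below are hypothetical.

```python
import json

# Sketch of one MCP exchange on the wire. "tools/call" is the spec's
# method for invoking a tool on a connected server; the tool name and
# arguments here are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",              # hypothetical tool
        "arguments": {"city": "Berlin"},
    },
}

# A success response echoes the request id and returns content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "12°C, overcast"}]},
}

wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
print(wire)
```

Standardizing on an existing RPC convention rather than a bespoke format is a large part of why adoption was so fast: any JSON-RPC client library can speak it.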
Jul 2025
31. How Anthropic Teams Use Claude Code
Internal case studies showing teams use Claude Code for debugging production issues, learning codebases, and building MCP-powered automation.
Blog Post
Sep 2025
33. Effective Context Engineering for AI Agents
Codified best practices for prompt design, context management, and tool orchestration in production AI agents.
Engineering Blog
Sep 2025
34. Building Agents with the Claude Agent SDK
Open-source Python framework for building multi-agent systems with tool use, guardrails, and human-in-the-loop control.
Engineering Blog
Oct 2025
36. Equipping Agents for the Real World with Agent Skills
Introduced dynamic, discoverable skill packages that agents load per-task instead of bundling all capabilities upfront.
Engineering Blog
Nov 2025
37. Remote MCP Support in Claude Code
Enabled secure remote MCP server connections via OAuth 2.1 and streamable HTTP, eliminating local setup requirements.
Product Announcement
Nov 2025
38. Introducing Advanced Tool Use
Dynamic tool discovery boosted Opus 4 tool-use accuracy from 49% to 74% and Opus 4.5 from 79.5% to 88.1%.
Engineering Blog
Business
Aug 2023
7. Dwarkesh Patel Interview with Dario Amodei (1st appearance)
Dario Amodei predicted transformative AI within years and articulated why the safety window is narrowing.
Talk/Interview
Nov 2024
21. Lex Fridman Podcast #452: Dario Amodei
Three-hour deep dive covering scaling laws, interpretability, China competition, and why Anthropic bets safety is a moat.
Talk/Interview
Jun 2025
29. Dwarkesh Patel Interview with Dario Amodei (2nd appearance)
Dario revealed that Claude Code was an accidental product, that RL scaling matches pre-training scaling, and that Anthropic hit $4.5B ARR.
Talk/Interview
Jul 2025
32. Big Technology Podcast: Dario Amodei Interview
Talk/Interview
Oct 2025
35. Claude in Microsoft 365 Copilot
Claude Opus 4.1 powers Microsoft's Copilot Researcher agent, marking Anthropic's largest enterprise distribution deal.
Product Announcement
Nov 2025
39. Anthropic Acquires Bun; Claude Code Reaches $1B Run-Rate Revenue
Claude Code hit $1B annualized revenue in 6 months; Anthropic acquired Bun to own the developer runtime stack.
Product Announcement
Policy
Sep 2023
★8. Responsible Scaling Policy (RSP) v1.0
Introduced AI Safety Levels (ASL-1 through ASL-4) with mandatory capability evaluations before scaling up.
Policy
Oct 2024
18. Machines of Loving Grace (Essay by Dario Amodei)
Dario Amodei's vision for AI transforming biology, governance, economics, and equity within a decade.
Essay
Oct 2024
19. Responsible Scaling Policy v2.0 (Updated)
Reorganized the policy around capability thresholds and required safeguards, committing Anthropic to demonstrate a model is safe before deployment.
Policy
Mar 2025
26. Council on Foreign Relations: Dario Amodei Speaker Series
Talk/Interview
Dec 2025
41. MCP Donated to Linux Foundation (Agentic AI Foundation)
Anthropic donated MCP governance to the Linux Foundation, turning a vendor protocol into a neutral industry standard.
Product Announcement