LessWrong has shipped a new editor that replaces the legacy CKEditor with Meta's Lexical framework, and buried in the release is something more significant for the agent ecosystem: a native Agent Integration API that gives AI agents direct read-write access to drafts. The editor introduces three AI-native capabilities alongside Lexical — LLM Content Blocks for transparent attribution of AI-written text, sandboxed custom iframe widgets for interactive embedded demos, and the API itself. Agent harnesses capable of making direct HTTP requests — Claude Code, OpenAI's Codex, and Cursor — work with it out of the box. The ChatGPT web UI does not, because it cannot whitelist external domains; the design clearly targets agent harnesses rather than consumer chat interfaces. LessWrong developer RobertM demonstrated the feature live in the announcement post, using Claude Opus 4.6 to write a section of the post through the API during composition.

The API exposes two permission levels. Edit allows agents to insert and modify text, add widgets, and create LLM Content Blocks. Comment restricts agents to leaving inline comments and suggested edits for human review. The workflow is simple: authors open a draft's sharing settings, set the link to allow editing, and paste the URL into their agent tool. For claude.ai users, the LessWrong domain must first be whitelisted in allowed domain settings. The API is framed as a collaborative authoring tool — human authors retain final control over what gets published.
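The announcement describes the workflow but does not publish a formal API reference, so the following is only a sketch of what an agent-side request might look like. The payload fields, permission names, and the idea of POSTing to the pasted share URL are all assumptions for illustration; only the two permission levels ("edit" and "comment") and the share-link workflow come from the announcement.

```python
import json

# Hypothetical sketch of the draft-sharing workflow described above.
# The share-URL shape and payload fields are assumptions, not documented API details.
SHARE_URL = "https://www.lesswrong.com/editPost?postId=abc123&key=s3cret"  # pasted from sharing settings

def build_comment_payload(anchor_text: str, comment: str) -> dict:
    """Assemble a comment-mode request body: under the "comment" permission
    level the agent may only annotate; a human must accept any change."""
    return {
        "mode": "comment",      # vs. "edit", which permits direct modification
        "anchor": anchor_text,  # text span the inline comment attaches to
        "body": comment,
    }

payload = build_comment_payload(
    anchor_text="The API exposes two permission levels.",
    comment="Consider listing the permission levels in a table.",
)
print(json.dumps(payload, indent=2))

# An HTTP-capable harness (Claude Code, Codex, Cursor) would then send it, e.g.:
# requests.post(SHARE_URL, json=payload)  # not executed here: endpoint is hypothetical
```

The point of the two-level split is visible in the payload: everything an agent does in comment mode is a proposal, so the human author's final control over publication is enforced structurally rather than by convention.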

The same day, LessWrong also replaced its LLM content policy. The old rules required one minute of human editing per 50 words and banned the "stereotypical writing style of an AI assistant" — in practice a restriction enforced unevenly across new and established users. The new approach is disclosure-based: all LLM output must be wrapped in LLM Content Blocks, auto-moderation thresholds are being lowered, and enforcement will be consistent regardless of user tenure. Code is explicitly excluded from the LLM output definition. The policy draws a specific distinction between lightly-edited human text, which needs no attribution, and substantially AI-revised content, which requires a content block. The rationale given: LLM-generated text is epistemically different from human testimony, and readers should know the <a href="/news/2026-03-14-hacker-news-bans-ai-comments">provenance of what they're reading</a>.
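The new policy amounts to a small decision procedure, which can be made concrete as follows. The provenance labels below are illustrative names, not LessWrong's actual taxonomy; only the three rules themselves (code excluded, lightly-edited human text exempt, generated or substantially AI-revised text disclosed) come from the policy.

```python
def needs_llm_block(provenance: str, is_code: bool) -> bool:
    """Decide whether content must be wrapped in an LLM Content Block,
    per the disclosure rules described above. Label names are illustrative."""
    if is_code:
        return False  # code is explicitly excluded from the LLM output definition
    # Generated or substantially AI-revised prose must be disclosed;
    # "human_lightly_edited" text needs no attribution.
    return provenance in {"llm_generated", "llm_substantial_revision"}

assert needs_llm_block("llm_generated", is_code=False)
assert needs_llm_block("llm_substantial_revision", is_code=False)
assert not needs_llm_block("llm_generated", is_code=True)          # code exclusion
assert not needs_llm_block("human_lightly_edited", is_code=False)  # no attribution needed
```

Note that the hard case the old per-word editing rule tried to quantify — how much human revision converts AI text into human text — is now a binary judgment about which side of the lightly-edited/substantially-revised line a piece of prose falls on.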

Shipping both announcements together was not an accident. The platform that hosts AI safety research from figures like Eliezer Yudkowsky and Paul Christiano is now also granting AI agents write access to documents — and threading that needle by making disclosure the central norm. If the LLM Content Block approach holds up, it is a workable model for agentic collaboration on a <a href="/news/2026-03-14-redox-os-adopts-no-llm-contribution-policy-amid-growing-oss-ai-governance-debate">high-trust platform</a>. If it does not, LessWrong's readership will not be slow to notice. Either way, a production-ready agent authorship API with named support for Claude Code, Codex, and Cursor is now live on one of the web's most demanding writing communities.