Christian Meurer just released Lula, an open-source multi-agent coding assistant that takes a different approach to the 'AI writes code' problem. Instead of letting the reasoning model touch your filesystem directly, Lula splits the work: a Python-based LangGraph orchestrator handles planning, while a native Rust execution engine does all the actual file operations and tool calls. The Rust runner enforces path boundaries, command allowlists, and sandbox isolation before anything gets executed.

The security model is where Lula gets interesting. It uses a degradable sandbox stack that tries Firecracker MicroVM isolation first, falls back to Linux namespaces if that's not available, and defaults to a SafeFallback mode with process isolation and environment stripping. Every tool call can require HMAC-signed approval gates, meaning a human has to sign off before the agent mutates your codebase. Meurer built this for engineering teams who need audit trails and governance around autonomous coding pipelines.

Lula also packs a tripartite persistent memory store combining semantic, episodic, and procedural memory with vector search baked in. No external vector database required. The 9-node dynamic DAG scheduler supports cycle-safe runtime rewiring across git-worktree-isolated agents.

Meurer positions it against Copilot Workspace, OpenHands, and Devin, with local and private-cloud deployment as the main differentiator. The project's got 1,788 tests and 84% coverage. If you'd rather fence in your AI coder than let it run loose, Lula's worth a look.
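To make the degradable sandbox idea concrete, here's a minimal sketch of how a strongest-first fallback chain might probe the host. This is illustrative only: the function name `pick_sandbox` and the specific probes are assumptions, not Lula's actual detection logic.

```python
import os
import shutil

def pick_sandbox() -> str:
    """Degrade from strongest to weakest available isolation tier.
    (Hypothetical probe logic; Lula's real checks may differ.)"""
    # Firecracker MicroVMs need KVM access plus the firecracker binary.
    if os.path.exists("/dev/kvm") and shutil.which("firecracker"):
        return "firecracker"
    # Linux namespaces: unshare(1) available and user namespaces exposed.
    if shutil.which("unshare") and os.path.exists("/proc/self/ns/user"):
        return "namespaces"
    # Last resort: plain process isolation with a stripped environment.
    return "safe_fallback"

print(pick_sandbox())
```

The point of ordering the probes this way is that a run never fails outright for lack of KVM; it just lands in a weaker, clearly labeled tier that downstream policy can react to.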
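The HMAC-signed approval gate is also easy to sketch: a reviewer signs a canonical encoding of the proposed tool call, and the execution engine refuses anything whose signature doesn't verify. The names `sign_tool_call` and `verify_tool_call` and the payload shape below are hypothetical, not Lula's API.

```python
import hashlib
import hmac
import json

SECRET = b"reviewer-shared-secret"  # in practice, managed key material

def sign_tool_call(call: dict, key: bytes = SECRET) -> str:
    """Reviewer signs a canonical JSON encoding of the proposed tool call."""
    payload = json.dumps(call, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_tool_call(call: dict, signature: str, key: bytes = SECRET) -> bool:
    """Execution side: constant-time check before mutating anything."""
    expected = sign_tool_call(call, key)
    return hmac.compare_digest(expected, signature)

call = {"tool": "write_file", "path": "src/main.rs", "diff": "..."}
sig = sign_tool_call(call)
assert verify_tool_call(call, sig)                                  # approved
assert not verify_tool_call({**call, "path": "/etc/passwd"}, sig)   # tampered
```

Canonicalizing the payload (sorted keys, fixed separators) matters: any change to the tool, path, or diff after approval produces a different digest, so the gate catches post-approval tampering, not just missing approvals.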