Running AI coding agents safely is a genuine problem. Docker containers share the host kernel, which gets risky fast when you're executing untrusted AI-generated code. Bhatti, a new open-source project from developer Sahil Shubham, tackles this with Firecracker microVMs: each sandbox gets its own kernel and filesystem.

The performance numbers are striking. A paused sandbox resumes and executes a command in under 3ms. On a Raspberry Pi 5, the median exec time hits 1.26ms. That's container-level speed with VM-level isolation. Bhatti keeps memory in check with a thermal management system that moves idle VMs through hot, warm, and cold states, freeing memory when sandboxes aren't active while keeping resume times fast. A warm-state resume clocks in at around 462 microseconds.
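The tiering idea is straightforward: the longer a VM sits idle, the more of its footprint you reclaim, trading a slightly slower resume for freed memory. A minimal sketch of such a demotion policy, with hypothetical thresholds and tier names that are not Bhatti's actual implementation:

```python
from enum import Enum

class Tier(Enum):
    HOT = "hot"    # fully resident in RAM; fastest resume
    WARM = "warm"  # memory partially reclaimed; resume in the hundreds of microseconds
    COLD = "cold"  # snapshot persisted to disk; resume still only a few milliseconds

# Hypothetical demotion thresholds in seconds of idle time --
# illustrative values, not Bhatti's actual tuning.
HOT_TTL = 30.0
WARM_TTL = 300.0

def tier_for(idle_seconds: float) -> Tier:
    """Pick the cheapest tier that still meets resume-latency goals
    for a VM that has been idle for `idle_seconds`."""
    if idle_seconds < HOT_TTL:
        return Tier.HOT
    if idle_seconds < WARM_TTL:
        return Tier.WARM
    return Tier.COLD
```

A background sweeper would periodically run each VM through a policy like this and demote anything whose tier has changed; resuming promotes it straight back to hot.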

Unlike E2B's managed cloud service, Bhatti runs on your own infrastructure. The host daemon and guest agent deploy on Linux servers with KVM support, or you can use the CLI from macOS. Multi-tenant isolation comes built in with per-user API keys, dedicated network bridges, and resource quotas.
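Per-user quotas boil down to an admission check at sandbox creation time. A toy sketch of what that control plane logic could look like; the `Tenant`/`Quota` shapes and limits here are invented for illustration, not Bhatti's API:

```python
from dataclasses import dataclass, field

@dataclass
class Quota:
    max_sandboxes: int  # per-tenant sandbox count limit
    max_mem_mib: int    # per-tenant aggregate memory limit

@dataclass
class Tenant:
    api_key: str
    quota: Quota
    # live sandboxes as (name, mem_mib) pairs
    sandboxes: list = field(default_factory=list)

def can_create(tenant: Tenant, mem_mib: int) -> bool:
    """Admission check: reject the request if it would exceed the
    tenant's sandbox count or memory quota."""
    used = sum(m for _, m in tenant.sandboxes)
    return (len(tenant.sandboxes) < tenant.quota.max_sandboxes
            and used + mem_mib <= tenant.quota.max_mem_mib)
```

The network side of isolation is structural rather than a check like this: each tenant's sandboxes attach to their own bridge, so cross-tenant traffic never has a path to take.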

The project is Apache 2.0 licensed and includes diff snapshots, preview URLs that auto-wake sleeping sandboxes on first request, and streaming exec output. If you've got Linux servers with KVM support, you can deploy this today and stop trusting AI-generated code to shared kernels.
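The auto-wake behavior is essentially a resume-on-demand proxy: the first request to a preview URL pays the (tiny) resume cost, and everything after that hits a running VM. A minimal sketch with a toy `Sandbox` stand-in; the method names are hypothetical, not Bhatti's actual interface:

```python
class Sandbox:
    """Toy stand-in for a paused microVM behind a preview URL.
    Hypothetical interface -- not Bhatti's actual API."""
    def __init__(self):
        self.state = "paused"

    def resume(self):
        # In the real system this restores the VM from its
        # hot/warm/cold snapshot in microseconds to milliseconds.
        self.state = "running"

    def forward(self, request: str) -> str:
        assert self.state == "running"
        return f"response to {request}"

def handle_preview(sandbox: Sandbox, request: str) -> str:
    """Reverse-proxy handler: wake the sandbox on first request,
    then forward traffic as normal."""
    if sandbox.state != "running":
        sandbox.resume()
    return sandbox.forward(request)
```

Because resume latency is sub-3ms even from a paused state, the first visitor to a sleeping preview barely notices the wake-up.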