AgentFM just shipped as a single Go binary that turns idle hardware into a decentralized AI compute network. Instead of paying AWS or renting GPU time from cloud providers, you pool spare CPUs and GPUs across machines you already own, or tap into a global mesh of volunteered GPU infrastructure. The project is MIT-licensed and open source on GitHub.

The architecture splits nodes into two roles. Boss nodes dispatch tasks. Worker nodes execute them inside ephemeral Podman containers that self-destruct after each job completes. The networking stack runs on libp2p, using Kademlia DHT for routing and Circuit Relay v2 to punch through NATs and corporate firewalls. Workers broadcast their live hardware state via GossipSub and automatically reject tasks when they're at capacity. Install the binary and it finds peers. No configuration required.
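The capacity-rejection behavior described above can be sketched in Go. Everything here is illustrative: the `HardwareState` struct, its fields, and the `Accept` rule are assumptions for the sketch, not AgentFM's actual wire format or scheduling policy.

```go
package main

import "fmt"

// HardwareState is a hypothetical snapshot of the live hardware
// status a worker might gossip to peers. Field names are illustrative.
type HardwareState struct {
	CPUFree     float64 // fraction of CPU capacity available, 0..1
	RunningJobs int     // containers currently executing
	MaxJobs     int     // concurrent job slots this worker allows
}

// AtCapacity reports whether all of the worker's job slots are full.
func (s HardwareState) AtCapacity() bool {
	return s.RunningJobs >= s.MaxJobs
}

// Accept decides whether a worker should take a dispatched task:
// reject when slots are full or almost no CPU headroom remains.
func Accept(s HardwareState) bool {
	return !s.AtCapacity() && s.CPUFree > 0.1
}

func main() {
	busy := HardwareState{CPUFree: 0.05, RunningJobs: 4, MaxJobs: 4}
	idle := HardwareState{CPUFree: 0.9, RunningJobs: 0, MaxJobs: 4}
	fmt.Println(Accept(busy), Accept(idle))
}
```

In the real system this decision would be driven by the state each worker broadcasts over GossipSub, so boss nodes can skip saturated peers before dispatching.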

For teams working with sensitive data, AgentFM offers "Darknets," private encrypted swarms created with a shared key. You can distribute workloads across your own hardware without traffic touching the public internet. A weak laptop can offload heavy inference to a coworker's GPU workstation in another country, with data encrypted end-to-end.
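The shared-key model behind a Darknet can be illustrated with a minimal sketch: any peer holding the key can decrypt, anyone without it sees only ciphertext. This is not AgentFM's implementation; it is a generic AES-GCM round trip, and the SHA-256 key derivation is a stand-in for whatever KDF the project actually uses.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveKey turns a shared swarm passphrase into a 32-byte AES key.
// A real deployment would use a hardened KDF (scrypt, Argon2);
// SHA-256 keeps this sketch dependency-free.
func deriveKey(passphrase string) []byte {
	sum := sha256.Sum256([]byte(passphrase))
	return sum[:]
}

// seal encrypts plaintext under the shared key with AES-GCM,
// prepending the random nonce to the ciphertext.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal for any peer holding the same key.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	key := deriveKey("example-darknet-key")
	sealed, _ := seal(key, []byte("inference payload"))
	plain, _ := open(key, sealed)
	fmt.Println(string(plain))
}
```

The point of the sketch is the trust boundary: the passphrase never travels over the network, only ciphertext does, which is what lets a laptop safely offload work to a remote workstation.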

The framework works with Python, Go, Rust, and Node.js, and integrates with tools like Ollama for local model hosting and agent frameworks such as the GAIA SDK. A headless API gateway lets you trigger tasks from external apps, such as a Next.js frontend or n8n workflows. This is practical software for teams with idle GPUs and cloud bill fatigue. The open question is whether the public mesh can attract enough nodes to stay reliable at scale.