Darkbloom wants to turn your idle Mac into an AI inference machine. Built by Eigen Labs, the decentralized network connects Apple Silicon Macs to people who need AI compute, promising up to 70% cost savings versus centralized providers like OpenRouter. The pitch is straightforward: over 100 million Macs sit idle most of the day, their unified memory architecture and Neural Engines doing nothing. Darkbloom routes inference workloads to them instead of through the typical NVIDIA-to-hyperscaler-to-API markup chain.

The technical claim that matters is privacy. Operators, by design, cannot see your prompts or responses. Darkbloom uses Apple's Secure Enclave for hardware-backed keys, end-to-end encryption, and a hardened runtime that blocks debugger attachment and memory inspection. The attestation chain traces back to Apple's root certificate authority. It's built on the Bonsai protocol from Layr-Labs, which provides cryptographic proof that code executed correctly. Trust comes from math and cryptography rather than reputation and terms of service.

But the economics deserve scrutiny. If a Mac Mini could pay for itself in months through inference revenue, why wouldn't Eigen Labs just buy the hardware themselves? Early users report quirky installers, failed model downloads, and long idle stretches with negligible earnings. The platform charges zero fees and operators keep 100% of revenue, which sounds great until you notice it leaves Darkbloom itself with no obvious revenue model. Supported models currently include Gemma 4 26B, Qwen3.5 variants up to 122B parameters, and MiniMax M2.5 239B.

The OpenAI-compatible API is a smart move that cuts adoption friction: switch your base URL and existing code works. Whether enough demand materializes to pay operators meaningfully is the open question. Apple Silicon is genuinely capable inference hardware, and the security architecture appears sound on paper. But distributed compute networks have a long history of underwhelming operator returns. Darkbloom's bet is that private inference generates demand that wouldn't exist on networks like AgentFM, a project that turns idle GPUs into a P2P AI grid. If they're right, the privacy layer is the whole product.
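The base-URL swap is easy to picture in code. A minimal sketch, assuming a standard OpenAI-style `/chat/completions` wire format; the Darkbloom endpoint URL and model names here are placeholders, not documented values:

```python
import json

OPENAI_BASE_URL = "https://api.openai.com/v1"
# Hypothetical Darkbloom endpoint -- a placeholder, not a documented URL.
DARKBLOOM_BASE_URL = "https://api.darkbloom.example/v1"


def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-compatible chat completions request.

    Because the wire format is identical across compatible providers,
    only the base URL (and model name) changes; the path, headers, and
    JSON body stay the same.
    """
    url = f"{base_url}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body


# Same code path, two providers: the only difference is the base URL.
openai_url, body = chat_request(OPENAI_BASE_URL, "gpt-4o-mini", "hello")
darkbloom_url, _ = chat_request(DARKBLOOM_BASE_URL, "qwen3.5-122b", "hello")
```

In practice this means existing OpenAI SDK integrations can be pointed at a compatible network by changing one configuration value, which is exactly why the compatibility choice lowers switching costs.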