QVAC SDK hit Hacker News this week. It's a JavaScript toolkit for running AI models locally in browsers and Node.js, no cloud API calls required.

That's the pitch. It's a good one. Local AI means data stays on device. Apps work offline. No API costs.

The SDK claims cross-runtime support. Details are thin, but it likely uses WebGPU where available, similar to how ONNX Runtime Web and TensorFlow.js handle browser inference. Those incumbents are the elephant in the room: QVAC isn't first to this party.
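QVAC hasn't documented how it handles runtime differences, but cross-runtime JavaScript SDKs typically feature-detect a backend at load time: WebGPU if the runtime exposes it, WebAssembly as a fallback, plain JS as a last resort. A minimal sketch of that pattern (the function name and backend labels are illustrative, not QVAC's API):

```javascript
// Hedged sketch of backend selection in a cross-runtime inference SDK.
// `env` is a navigator-like object; pass the real `navigator` in a browser.
function pickBackend(env) {
  // WebGPU: exposed as `navigator.gpu` in supporting browsers.
  if (env && env.gpu) return "webgpu";
  // WebAssembly: available in all modern browsers and Node.js.
  if (typeof WebAssembly !== "undefined") return "wasm";
  // Pure-JS fallback: slow, but runs anywhere.
  return "js";
}

// Browser usage: pickBackend(navigator)
// Node.js (no navigator.gpu): falls back to "wasm"
console.log(pickBackend(typeof navigator !== "undefined" ? navigator : {}));
```

This is roughly what ONNX Runtime Web does with its execution providers, and any serious entrant in this space has to solve the same detection-and-fallback problem.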

One HN commenter: "Finally, a way to run inference without wrestling with Python dependencies." Another called out the lack of documentation on supported model formats. Fair point.

If QVAC delivers a drop-in experience that just works, JavaScript developers might bite. The demand is real, and the HN thread saw solid engagement.

But the README needs more meat. Which models run? How fast? On what hardware? Those answers aren't public yet.

Local AI in JavaScript is happening with or without QVAC. The question is whether this SDK becomes the easy button or gets buried by alternatives.