Meta just dropped Muse Spark, the first model from its new Superintelligence Labs division. It's a multimodal reasoning model that can use tools, apply chain-of-thought reasoning to visual input, and orchestrate multiple agents at once. The headline feature is "Contemplating mode," which spins up parallel reasoning agents to tackle hard problems. Meta says it hits 58% on Humanity's Last Exam and 38% on FrontierScience Research, which would put it in the same conversation as Gemini Deep Think and GPT Pro.
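Meta hasn't published how Contemplating mode works internally, but the "parallel reasoning agents" description maps onto a familiar pattern: fan out several independent reasoning passes, then aggregate their answers. Here's a minimal sketch of that pattern; the `reason` function, agent count, and majority-vote aggregation are all assumptions for illustration, not Meta's actual design.

```python
import asyncio
from collections import Counter

# Hypothetical stand-in for one reasoning agent. A real system would call
# the model with a distinct seed/temperature and return its final answer.
async def reason(prompt: str, seed: int) -> str:
    await asyncio.sleep(0)  # placeholder for model latency
    # Toy behavior: most seeds agree, one path dissents.
    return "42" if seed % 3 else "41"

async def contemplate(prompt: str, n_agents: int = 5) -> str:
    """Fan out n parallel reasoning agents, then majority-vote their answers."""
    answers = await asyncio.gather(*(reason(prompt, s) for s in range(n_agents)))
    return Counter(answers).most_common(1)[0][0]

answer = asyncio.run(contemplate("What is 6 * 7?"))
```

Real implementations vary the aggregation step (voting, a judge model, answer merging), but the fan-out/aggregate shape is the common core of this family of techniques.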

The practical demos are where it gets interesting. Muse Spark can look at your coffee machine and build an interactive troubleshooting guide with bounding boxes highlighting each component. Point it at your fridge and it'll generate nutritional breakdowns with health scores for pescatarians watching their cholesterol. Meta says it worked with over 1,000 physicians on the health reasoning capabilities. It's the kind of personalized, multimodal intelligence that sounds useful in theory, though we'll see how it holds up outside carefully curated demos.
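A grounded guide like the coffee-machine demo boils down to pairing each instruction with image coordinates. As a rough sketch of what that output might look like structurally (the field names and values here are invented, not Meta's schema):

```python
from dataclasses import dataclass

# Hypothetical shape of one grounded troubleshooting step; the model would
# emit a list of these alongside the image it analyzed.
@dataclass
class GroundedStep:
    component: str                           # part of the appliance referenced
    box: tuple[float, float, float, float]   # normalized (x0, y0, x1, y1)
    instruction: str                         # what the user should check or do

guide = [
    GroundedStep("water reservoir", (0.05, 0.10, 0.30, 0.60),
                 "Check that the reservoir is seated and filled."),
    GroundedStep("portafilter", (0.40, 0.55, 0.65, 0.80),
                 "Remove and rinse; look for clogged grounds."),
]
```

The interesting part is that the bounding boxes and the instructions come from the same reasoning pass, which is what makes the guide feel interactive rather than generic.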

Meta rebuilt its entire pretraining stack for this one: the claim is the same capabilities as Llama 4 Maverick with over an order of magnitude less compute. The reinforcement learning pipeline reportedly delivers smooth, predictable gains without the instability that typically plagues large-scale RL. There's also a clever trick in the approach to test-time reasoning: a thinking-time penalty forces the model to compress its reasoning during training, and the thinking budget can then be extended again for better performance.
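Meta hasn't detailed the penalty, but the standard way to implement this kind of trick is reward shaping: subtract a small cost per reasoning token so the policy learns to reach the answer with a shorter chain of thought. A minimal sketch, with the coefficient name and value as pure assumptions:

```python
# Illustrative length-penalized RL reward (one plausible reading of the
# "thinking time penalty"; the coefficient is an invented placeholder).
def penalized_reward(task_reward: float, n_thinking_tokens: int,
                     penalty_per_token: float = 1e-4) -> float:
    """Subtract a small cost per reasoning token, so two rollouts that both
    solve the task rank by how concisely they reasoned."""
    return task_reward - penalty_per_token * n_thinking_tokens

# Two correct rollouts: the concise one scores higher during training.
short = penalized_reward(task_reward=1.0, n_thinking_tokens=200)
long_ = penalized_reward(task_reward=1.0, n_thinking_tokens=2000)
```

The "extend again" half then falls out for free: once the model has internalized a compressed reasoning style, raising the token budget at inference lets it apply that denser style over a longer horizon.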

The model is available now at meta.ai with a private API preview for select users. But the Hacker News crowd is already raising eyebrows at the privacy implications. The real question is what happens to all that personal data flowing through an AI made by a company whose business model runs on targeted advertising.