Volodymyr Frytskyy built Deflect One, a single Python file that turns SSH access into a full DevOps command center. No agents to install on servers, no cloud services required. It monitors CPU, RAM, disk, Docker containers, and databases across unlimited servers from one terminal. There's also a dual-panel file manager, log aggregation, package management, and attack detection with automatic IP banning. All through SSH.

The optional AI features integrate Claude, GPT-4, Gemini, Mistral, and local models via LM Studio. You hit Ctrl+A, describe what you want in plain English, and it translates to commands and runs them. More concerning: you can set up background governance loops where per-host LLM instructions execute autonomously. Something like "restart BotService if tests.log is older than one day" runs 24/7 without oversight.

That's the risky part. Giving LLMs root access to production servers through autonomous loops removes the human safety net. A model hallucinating a kernel panic could trigger destructive commands. Indirect prompt injection through log files could embed malicious instructions that the agent acts on. And unlike deterministic tools like Ansible, LLMs are probabilistic. The same input can produce different outputs, which is the opposite of what you want in infrastructure management when agents cheat or hallucinate commands.

Frytskyy acknowledges the AI features are experimental. The tool uses hardware-bound encrypted credential storage, so SSH keys stay on your machine. But if you're going to let AI loose on your server fleet, maybe start with the demo mode and watch what it does before enabling those background loops.