Nova, a new open-source project from HeliosNova, claims a capability that, as far as can be determined as of publication, no other self-hosted personal AI assistant currently offers: permanently updating its own model weights by learning from user corrections. Released under AGPL-3.0 with 1,443 passing tests, Nova runs entirely on local hardware via Ollama — defaulting to Qwen3.5:27b — and implements a full Direct Preference Optimization (DPO) fine-tuning pipeline that converts user corrections into structured training pairs in the format {query, chosen, rejected}. Once enough pairs accumulate, Nova automatically runs a fine-tuning job with A/B evaluation before deploying the improved model. Khoj (32K GitHub stars) and Open WebUI (124K stars), by contrast, treat the underlying model as a fixed external dependency and limit "memory" to retrieval-time context — a design that puts weight-level learning off the table entirely.
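The conversion from a logged correction into a {query, chosen, rejected} record is straightforward to sketch. The dataclass and helper below are illustrative assumptions, not Nova's published schema — only the three field names come from the project's description:

```python
# Hypothetical sketch of building one DPO preference pair from a user
# correction. Everything except the {query, chosen, rejected} field names
# is an assumption for illustration.
from dataclasses import dataclass, asdict

@dataclass
class PreferencePair:
    query: str      # the user's original question
    chosen: str     # the corrected answer the user supplied or approved
    rejected: str   # the model's original (wrong) answer

def pair_from_correction(query: str, original_answer: str, correction: str) -> dict:
    """Build one DPO training record: the correction becomes the preferred
    completion, the original answer becomes the dispreferred one."""
    return asdict(PreferencePair(query=query, chosen=correction, rejected=original_answer))

record = pair_from_correction(
    "What year was the Eiffel Tower completed?",
    "It was completed in 1887.",
    "It was completed in 1889.",
)
```

A DPO trainer then optimizes the model to prefer the `chosen` completion over the `rejected` one for the same query, which is what makes accumulated corrections trainable at the weight level rather than merely retrievable.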
The correction pipeline itself is two-stage — a regex pre-filter followed by LLM confirmation — designed to reliably extract factual lessons without false positives. Corrections are one of four learning channels feeding the fine-tuning loop. Nova also runs a reflexion layer for silent failure detection (hallucinations, failed tool loops), a curiosity engine that queues background research when it detects knowledge gaps, and a success-pattern store that logs high-quality responses as positive reinforcement. A user who corrects Nova repeatedly over months ends up running a materially different model than on day one — one tuned at the weight level to their specific knowledge domain, not just their conversation history.
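The two-stage design — a cheap regex screen so the LLM judge only runs on plausible candidates — can be sketched as follows. The patterns and the always-true judge stub are invented for illustration; Nova's actual rules and prompts are not published here:

```python
import re

# Stage 1: hypothetical regex pre-filter for messages that look like
# corrections. These patterns are illustrative assumptions, not Nova's.
CORRECTION_PATTERNS = re.compile(
    r"\b(no,|actually|that's (wrong|incorrect)|not true|correction:)",
    re.IGNORECASE,
)

def might_be_correction(message: str) -> bool:
    """Fast screen that lets the pipeline skip the LLM call for obvious
    non-corrections, keeping false positives out of the training data."""
    return bool(CORRECTION_PATTERNS.search(message))

def confirm_with_llm(message: str) -> bool:
    """Stage 2 (stub): an LLM judge would verify the message actually states
    a factual lesson worth extracting. Always True here as a placeholder."""
    return True

def is_correction(message: str) -> bool:
    return might_be_correction(message) and confirm_with_llm(message)
```

The point of the split is economics as much as accuracy: regex triage is effectively free, while the LLM confirmation step provides the semantic judgment a pattern match cannot.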
Nova's hybrid retrieval system combines ChromaDB vector search, SQLite FTS5 full-text search, and Reciprocal Rank Fusion re-ranking — an approach the project argues produces more robust recall than the vector-only pipelines used by better-known competitors. The system ships with 21 built-in tools (including sandboxed Python execution, Playwright-based browsing, and SSRF-protected HTTP fetching), integrations with four messaging platforms (Discord, Telegram, WhatsApp, and Signal), 14 proactive background monitors, and a temporal knowledge graph with fact supersession and provenance tracking. It also operates as both an <a href="/news/2026-03-15-godex-building-a-free-ai-coding-agent-with-mcp-servers-and-local-llms-via-ollama">MCP client and MCP server</a>, meaning it can expose its memory and knowledge graph to agentic coding tools like Claude Code or Cursor — a dual-role capability absent in Open WebUI, which supports MCP only as a client.
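Reciprocal Rank Fusion itself is a simple, well-known merge rule: each document scores the sum of 1/(k + rank) across the ranked lists it appears in, with k = 60 the conventional constant from the original RRF literature. Nova's exact parameters are not documented here, so treat the constant and the toy lists below as assumptions:

```python
from collections import defaultdict

def rrf_fuse(result_lists, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion:
    score(doc) = sum over lists of 1 / (k + rank_in_list)."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]   # e.g. semantic hits from ChromaDB
keyword_hits = ["doc_b", "doc_d", "doc_a"]  # e.g. SQLite FTS5 keyword hits
rrf_fuse([vector_hits, keyword_hits])
# → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Because RRF only consumes ranks, not raw scores, it sidesteps the problem that cosine similarities and BM25 scores live on incomparable scales — the usual argument for why hybrid fusion recalls documents a vector-only pipeline misses.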
The project is intentionally built without LangChain or LangGraph, consisting of some 74 files of plain async Python using FastAPI and httpx. The developer's stated rationale is that framework abstraction layers introduce versioning instability and opaque prompt injection surfaces that become especially problematic in a self-improvement loop, where data provenance and reproducible prompt construction must remain deterministic. Nova's security posture reflects this first-principles approach: it implements four-tier access control, four-category prompt injection detection, HMAC-signed skill validation, SSRF protection, and Docker hardening with a read-only root filesystem and all Linux capabilities dropped. The minimum requirement of 20 GB of VRAM limits casual adoption, but that hardware threshold maps closely to the segment of users for whom <a href="/news/2026-03-14-spacedrive-v3-local-first-data-engine-prompt-injection-screening">local data sovereignty</a> is a primary motivation — precisely the users most likely to engage with a self-improvement loop that grows more valuable with each correction logged. The project is available at github.com/HeliosNova/nova.
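For readers unfamiliar with the SSRF protection mentioned above, the general technique is to resolve the target hostname before fetching and refuse any request that lands on loopback, private, or link-local address space (the last of which hosts cloud metadata endpoints like 169.254.169.254). The guard below is a minimal stdlib sketch of that idea, not a reproduction of Nova's checks:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Illustrative SSRF guard: allow only http(s) URLs whose host resolves
    exclusively to globally routable addresses. A sketch of the technique,
    not Nova's actual implementation."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host: refuse rather than guess
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Rejects loopback (127.0.0.0/8), RFC 1918 private ranges, and
        # link-local space (169.254.0.0/16, home of cloud metadata APIs).
        if not addr.is_global:
            return False
    return True
```

A production guard would also pin the resolved address for the actual request (to defeat DNS rebinding) and restrict redirects, which is part of why such checks are easier to audit in plain code than behind a framework abstraction.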