A Medium opinion piece by Andrey Mandyev, a software and data engineer, makes the case that the context failures routinely blamed on AI coding agents are not novel technical problems but well-understood data governance issues in disguise. Stale specifications, context the agent cannot locate, and missing decision traces all have direct analogues in the data engineering world: data ownership failures, discovery gaps, and absent lineage. Mandyev argues that practitioners building agent pipelines are essentially rediscovering problems the data community has spent years solving, and that the disciplinary silo between software engineering and data engineering is the main reason this parallel goes unrecognized.
The practical implication is that the tooling already exists. Data catalogs, schema contracts, lineage graphs, and data quality monitors are mature artifacts from the data engineering world that Mandyev contends can be adapted directly to agentic context pipelines. Rather than treating agent context as ephemeral scaffolding, or reaching for <a href="/news/2026-03-14-personality-md-cargo-cult-engineering-llms-have-no-nature-to-change">prompt-based fixes</a>, he argues it should be modeled and managed as a first-class data artifact — assigned an owner, governed by contracts that define shape and freshness, and instrumented for lineage so an agent's decisions can be traced back to the inputs that drove them.
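To make the proposal concrete, here is a minimal sketch of what "context as a governed data artifact" could look like in practice. This is illustrative only — the class names, fields, and checks are assumptions, not code from Mandyev's piece or from any existing tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ContextContract:
    """A data-contract wrapper for an agent context artifact (hypothetical)."""
    owner: str                  # accountable team or person
    required_fields: set[str]   # the expected "shape" of the artifact
    max_age: timedelta          # freshness SLA

    def check(self, artifact: dict, produced_at: datetime) -> list[str]:
        """Return a list of contract violations; an empty list means compliant."""
        violations = []
        missing = self.required_fields - artifact.keys()
        if missing:
            violations.append(f"missing fields: {sorted(missing)}")
        age = datetime.now(timezone.utc) - produced_at
        if age > self.max_age:
            violations.append(f"stale: {age} old, SLA is {self.max_age}")
        return violations

@dataclass
class LineageRecord:
    """Trace from an agent decision back to the context inputs that drove it."""
    decision_id: str
    input_artifacts: list[str]  # ids or paths of the context artifacts used
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a spec document owned by a (hypothetical) platform team,
# checked before it is handed to an agent.
contract = ContextContract(
    owner="platform-team",
    required_fields={"spec_version", "api_endpoints", "auth_notes"},
    max_age=timedelta(days=7),
)
artifact = {"spec_version": "2.3", "api_endpoints": ["/v1/users"]}
violations = contract.check(
    artifact, datetime.now(timezone.utc) - timedelta(days=10))
# Flags both a shape violation (auth_notes missing) and a freshness one.
```

The point of the sketch is that each failure mode the piece names maps to a routine check: a stale spec trips the freshness SLA, a malformed context file trips the shape check, and the lineage record answers "which inputs drove this decision" after the fact.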
That framing distinguishes the piece from the bulk of agent memory research, which has focused on retrieval architecture — how systems like MemGPT or LangChain's context management handle what an agent can access at inference time. Mandyev's concern is upstream: who owns the context artifacts, whether they're stale, and whether their provenance is auditable. Those are operational and organizational questions more than algorithmic ones, and data engineering has a longer track record on them than AI infrastructure does.
Writing on Hacker News under the handle andrey_m, Mandyev said his own frustration with agent misbehavior is what triggered the recognition: the failure patterns were ones he had already encountered, and solved, in data contexts. The thesis has a straightforward implication for teams currently building agent infrastructure: audit your context management layer through a data-quality lens before reaching for agent-specific solutions. The tooling may already be on the shelf.