At the 2026 Winter Paralympics in Cortina, Ukrainian para-biathlete Maksym Murashkovskyi won silver in the men's visually impaired biathlon, shooting a perfect round, and then made headlines for an unconventional reason: he had used OpenAI's ChatGPT as his primary coach for the six months leading up to the Games. The 25-year-old told The Guardian that the AI generated roughly half of his training plan and advised him on motivation, race tactics, and performance psychology, even offering occasional medical guidance. "I used it as a psychologist, coach and, sometimes, as a doctor," he said, describing the technology as "revolutionary" and crediting it with replacing what he called "classical" human coaching relationships. It was only his second Paralympic race.

Murashkovskyi's use of ChatGPT extended well beyond athletic training. He applied it to language learning, chemistry, biology, and personal projects — a self-directed athlete running a general-purpose LLM as an all-purpose tutor and advisor. His guide, Vitaliy Trush, shared the silver medal, as is customary in visually impaired biathlon. Murashkovskyi drew a deliberate contrast between AI's military use in Ukraine's ongoing war — targeting systems, satellite imagery analysis — and his own application of the same technology to sport and learning. The tools destroying things on the front, he argued, are the same ones that built him into a Paralympic medalist.

General-purpose LLMs have now demonstrably entered elite sport without custom tooling, fine-tuning, or institutional oversight, and no governing body has moved to address it. The International Paralympic Committee, World Athletics, and WADA currently have no rules restricting AI coaching tools. The IOC has moved in the opposite direction: it deployed AthleteGPT, built on Mistral AI, for 11,000 athletes at Paris 2024. What has not moved is liability. UK sports law firm Brabners has flagged that <a href="/news/2026-03-14-anthropic-refuses-dow-demand-to-remove-ai-safeguards-declared-supply-chain-risk">responsibility for AI-driven injury decisions remains entirely unresolved</a>. Murashkovskyi's explicit use of ChatGPT for medical guidance makes that gap concrete: he was operating in a space where <a href="/news/2026-03-14-lancet-psychiatry-ai-associated-delusions-study">no doctor is accountable</a>, no platform is liable, and no sports body has yet decided it needs to be.