Wharton's Generative AI Labs interviewed 20 game studios, from AAA publishers to indie shops, and found AI adoption follows four predictable stages. Most companies start by giving employees ChatGPT Enterprise and calling it done. Individuals get faster, but the organization stays the same. When studios try automating entire workflows from the top down, they slam into tacit knowledge: the unwritten rules and institutional memory stuck in people's heads. "Before AI can automate reliably, someone must extract and codify all of that," the report states. And plenty of employees aren't eager to help with that extraction.
The actual progress happens at the edges. Individuals start using AI to cross domain boundaries, doing work that used to require other teams. A product manager writes complex data queries. An engineer generates 2D art assets, a shift that creates **emerging roles in the AI economy**. One PM tracked by the researchers dropped query failure rates from 70-80% to 5% over several months as context accumulated in shared documents.
The bigger gains come from studios designed around AI from their founding. These AI-first operations see 4-20x productivity improvements. They look nothing like traditional studios. One ran with five technical team members, all generalists. Another had roughly 22 people total. Specialist silos get replaced by small teams organized around business outcomes. Game design documents become the primary input for AI production pipelines. The technical setup uses RAG to parse structured design specs, feeds them to models like GPT-4 or Claude 3.5 Sonnet, and generates code for Unity or Unreal Engine. A vertical slice that previously took four months now takes four weeks. Thirty UI icons that took weeks through contractors get done the same day.
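The report doesn't publish the studios' actual pipeline code, but the retrieval step it describes can be sketched in miniature. The snippet below is a hypothetical illustration: a design document split into titled sections, a crude keyword-overlap retriever standing in for the vector search a production RAG system would use, and a prompt-assembly function for the code model. The section names, task strings, and function names are all invented for the example; the real systems would also make an API call to the model, which is omitted here.

```python
# Hypothetical sketch of the RAG step: pick the design-doc sections
# relevant to a task and assemble them into a code-generation prompt.
# A real pipeline would use embeddings and call an LLM API; keyword
# overlap is used here only to keep the example self-contained.

DESIGN_DOC = {
    "Movement": "The player moves on a 2D grid and holding Shift doubles speed.",
    "Inventory": "Items stack up to 99 and the inventory UI shows 30 icon slots.",
    "Combat": "Melee attacks have a short cooldown and knock enemies back.",
}

def retrieve(task: str, doc: dict, k: int = 1) -> list[str]:
    """Rank sections by word overlap with the task; keep the top k."""
    task_words = set(task.lower().split())
    scored = sorted(
        doc.items(),
        key=lambda item: len(task_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in scored[:k]]

def build_prompt(task: str, doc: dict) -> str:
    """Assemble retrieved spec context plus the task into one prompt."""
    context = "\n".join(retrieve(task, doc))
    return (
        "You are generating Unity C# code.\n"
        f"Design spec:\n{context}\n"
        f"Task: {task}"
    )

print(build_prompt("Implement the player movement speed on the grid", DESIGN_DOC))
```

The design choice the report highlights is that the design document, not a specialist's head, is the source of truth: the better structured the spec, the less the retriever and model have to guess.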
But studios consistently hit a ceiling. AI can't handle work centered on aligning people. Strategic planning, mostly. Getting a team to actually commit to a direction. A strategy document spit out by AI might have the right words, but without the team actually participating in the process, it fails to generate real commitment. There's a feedback loop here: those human coordination processes produce the explicit documentation that AI systems need to execute tasks in the first place. Human judgment and AI execution stay tied together, effectively transforming software engineering into **managerial, human-centric work**, whether anyone likes it or not.