A developer who spent six months replicating the University of Zurich's champion-level drone racing research has published a guide on building a 2D quadcopter simulation from scratch. The post by mrandri19 walks through deriving equations of motion using Newton-Euler rigid-body dynamics, converting them to state-space form, and implementing everything in Python with NumPy. The whole simulation comes to maybe 30 lines of actual computation, and it's the foundation you need before training a reinforcement learning agent to fly anything.

The simulation models a planar quadcopter with two arms, each generating thrust. Three equations govern the system: horizontal translation, vertical translation, and rotation. Rather than controlling individual propeller thrusts directly, the formulation uses total thrust and differential thrust. This makes the math cleaner and maps better to how you'd actually design a controller.
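The formulation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the author's exact code: the parameter values (mass, inertia, arm length) and the sign conventions are assumptions, but the structure follows the three equations named in the post, with total thrust and differential thrust as the two inputs.

```python
import numpy as np

# Illustrative parameters -- not the post's actual values.
M = 1.0    # mass [kg]
I = 0.01   # moment of inertia about the body axis [kg m^2]
L = 0.2    # arm length [m]
G = 9.81   # gravitational acceleration [m/s^2]

def dynamics(state, u):
    """State-space form x_dot = f(x, u) for the planar quadcopter.

    state = [x, y, theta, vx, vy, omega]
    u     = [T, dT]  (total thrust, differential thrust)
    """
    x, y, theta, vx, vy, omega = state
    T, dT = u
    # Horizontal translation: m * ax = -T * sin(theta)
    ax = -T * np.sin(theta) / M
    # Vertical translation:   m * ay =  T * cos(theta) - m * g
    ay = T * np.cos(theta) / M - G
    # Rotation: I * alpha = L * dT (differential thrust times arm length)
    alpha = L * dT / I
    return np.array([vx, vy, omega, ax, ay, alpha])

def step(state, u, dt=0.01):
    # Forward-Euler integration, the simplest choice for a first simulator.
    return state + dt * dynamics(state, u)
```

A quick sanity check of the sketch: at zero tilt, commanding total thrust equal to the weight (T = M * G) and zero differential thrust leaves the state unchanged, i.e. the quadcopter hovers.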

The author is replicating the "Champion-level drone racing using deep reinforcement learning" paper from Davide Scaramuzza's lab at UZH. That research used the Flightmare simulator and heavy domain randomization (mass, inertia, thrust coefficients, sensor latency, visual conditions) to train neural network policies that transfer to real drones. This 2D simulation is step one: understand the dynamics before you make an agent learn them. The author says they're writing the tutorials they wished existed when they started.
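The domain-randomization idea from the paper amounts to resampling the simulator's physical parameters at the start of each training episode, so the policy never overfits to one exact model. A rough sketch, with ranges and parameter names that are illustrative assumptions rather than the paper's actual values:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_params():
    # Illustrative ranges only -- the paper randomizes mass, inertia,
    # thrust coefficients, sensor latency, and visual conditions, but
    # its actual distributions are not reproduced here.
    return {
        "mass": rng.uniform(0.8, 1.2),            # kg
        "inertia": rng.uniform(0.008, 0.012),     # kg m^2
        "thrust_coeff": rng.uniform(0.9, 1.1),    # multiplier on commanded thrust
        "sensor_latency": rng.uniform(0.0, 0.02), # seconds
    }

# At the start of every RL episode, draw a fresh physical model:
episode_params = sample_params()
```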