DBOS has pulled off something clever with their durable execution library for Python. They made async workflows deterministic, which matters because durable workflows need to replay correctly after failures, and that requires knowing exactly which steps ran in what order. The problem is that async code with concurrent tasks, like those started with asyncio.gather, seems inherently unpredictable. DBOS found a workaround by exploiting how Python's asyncio event loop actually schedules tasks.
The trick is in the timing. When you create multiple async tasks, they don't run immediately. The event loop queues them and runs each one until it yields control via await. DBOS discovered that asyncio schedules newly created tasks in FIFO order. Their @Step() decorator assigns step IDs before the first await, capturing this deterministic startup sequence. So even though tasks might finish in any order, they start in a predictable one, and that's enough for replay-based recovery.
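The mechanism can be sketched with plain stdlib asyncio. This is an illustrative toy, not the real DBOS API: the `step` decorator, the `fetch` function, and the bookkeeping lists are all invented for the example. The point is that the ID is assigned synchronously, before the wrapped coroutine's first await, so on CPython's FIFO event loop the IDs track creation order even when completion order varies.

```python
import asyncio
import random
from functools import wraps

# Hypothetical step decorator (a sketch, not DBOS's actual decorator):
# it assigns a monotonically increasing step ID synchronously, before
# the wrapped coroutine's first await.
_next_step_id = 0
start_order = []
finish_order = []

def step(func):
    @wraps(func)
    async def wrapper(name):
        global _next_step_id
        step_id = _next_step_id  # assigned before any await runs
        _next_step_id += 1
        start_order.append((name, step_id))
        result = await func(name)
        finish_order.append(name)
        return result
    return wrapper

@step
async def fetch(name):
    # A random sleep makes the completion order unpredictable.
    await asyncio.sleep(random.random() / 100)
    return name

async def main():
    # gather() wraps each coroutine in a Task in argument order;
    # CPython's stdlib event loop then runs each task's first slice FIFO.
    return await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))

results = asyncio.run(main())
print(start_order)   # [('a', 0), ('b', 1), ('c', 2)] on CPython's loop
print(finish_order)  # may differ from the start order run to run
```

Running this repeatedly, `finish_order` shuffles but `start_order` never does, which is exactly the determinism a replay-based recovery scheme needs.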
DBOS has serious credentials. The company was founded by Mike Stonebraker (Turing Award winner and creator of Postgres, the forerunner of PostgreSQL), Matei Zaharia (Stanford professor and creator of Apache Spark), and Andy Pavlo (CMU database systems researcher). Their goal is to replace complex queue infrastructure with simple code annotations that give coding agents resilience and automatic failure recovery.
Not everyone is thrilled with this approach. Hacker News commenters and the Trio async library maintainers pointed out that FIFO startup order is an implementation detail of stdlib asyncio, not a guarantee from the Python language spec. Trio deliberately randomizes task startup order precisely to catch developers who accidentally depend on scheduling behavior. If CPython ever changes how its event loop schedules tasks, code relying on this trick could break in confusing ways. DBOS is betting their entire recovery mechanism on undocumented behavior.