DBOS engineers Peter Kraft and Qian Li pulled off something clever for their Python durable execution library. They needed workflows that could run steps concurrently but still replay deterministically after a crash. Replay means re-running a workflow from the start after a failure and getting identical step ordering, so you don't accidentally charge a credit card twice or send duplicate emails. The trick? CPython's asyncio event loop starts ready tasks in FIFO order. So if each step claims its ID before its first await, the IDs come out in the same order on every run, even when async Python is running steps concurrently.
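A minimal sketch of the scheduling behavior the trick relies on. On CPython, tasks created by `asyncio.gather` are scheduled via the loop's FIFO callback queue, so each coroutine's code before its first await runs in the order the coroutines were passed in (the function names here are illustrative, not from DBOS):

```python
import asyncio

start_order = []

async def step(name: str) -> str:
    # Code before the first await runs when the event loop starts
    # this task; CPython starts tasks in FIFO creation order.
    start_order.append(name)
    await asyncio.sleep(0)  # yield control back to the event loop
    return name

async def main() -> list[str]:
    await asyncio.gather(step("a"), step("b"), step("c"))
    return start_order

order = asyncio.run(main())
print(order)  # ['a', 'b', 'c'] on CPython
```

This FIFO start order is exactly what makes pre-await ID assignment deterministic across runs.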

The implementation lives in the @Step() decorator. Before any async work happens, it grabs and increments a step ID from the workflow context. Since tasks start in a predictable order (the first coroutine passed to gather gets ID one, the second gets ID two, and so on), a replay hands out exactly the same IDs. You get the performance of concurrent execution with the reliability of deterministic checkpointing in Postgres.
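The idea can be sketched as a decorator that claims an ID synchronously, before awaiting the wrapped coroutine. This is an illustrative reconstruction, not DBOS's actual code; `WorkflowContext`, `step`, and `fetch` are hypothetical names:

```python
import asyncio
import functools

class WorkflowContext:
    """Hypothetical per-workflow counter for handing out step IDs."""
    def __init__(self) -> None:
        self.next_step_id = 1

def step(func):
    @functools.wraps(func)
    async def wrapper(ctx: WorkflowContext, *args, **kwargs):
        # Claimed before the first await, so IDs follow the FIFO
        # order in which the event loop starts the tasks.
        step_id = ctx.next_step_id
        ctx.next_step_id += 1
        result = await func(*args, **kwargs)
        # A real system would checkpoint (step_id, result) to
        # Postgres here so a replay can skip completed steps.
        return step_id, result
    return wrapper

@step
async def fetch(name: str) -> str:
    await asyncio.sleep(0)
    return name

async def workflow(ctx: WorkflowContext):
    return await asyncio.gather(fetch(ctx, "a"), fetch(ctx, "b"))

results = asyncio.run(workflow(WorkflowContext()))
print(results)  # [(1, 'a'), (2, 'b')] on CPython
```

Because the counter increment happens before any suspension point, no interleaving of the awaited bodies can reorder the ID assignment, as long as task start order itself stays FIFO.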

But Hacker News commenters flagged a real risk. FIFO scheduling is a CPython implementation detail, not a language guarantee. The Trio async library deliberately randomizes task order to catch code that depends on scheduling assumptions. Relying on this behavior makes your code brittle and non-portable. Critics argued that explicit dependency modeling and idempotency are safer bets than exploiting event loop quirks.
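The scheduling-independent alternative the critics point toward can be sketched by pinning step IDs at the call site instead of claiming them at runtime. This is one possible design under that critique, not DBOS's API; `run_step` and `work` are hypothetical:

```python
import asyncio

async def run_step(step_id: int, coro):
    # The ID is fixed in source code, so no scheduler (FIFO,
    # randomized, or otherwise) can change which step it labels.
    result = await coro
    return step_id, result

async def work(name: str) -> str:
    await asyncio.sleep(0)
    return name

async def workflow():
    # gather returns results in argument order regardless of
    # completion order, so the pairing is fully deterministic.
    return await asyncio.gather(
        run_step(1, work("a")),
        run_step(2, work("b")),
    )

results = asyncio.run(workflow())
print(results)  # [(1, 'a'), (2, 'b')] under any scheduling
```

The cost is verbosity: IDs must be threaded through every call site rather than assigned invisibly by a decorator, which is presumably the ergonomic trade-off DBOS was avoiding.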

DBOS has serious pedigree. It was co-founded by Mike Stonebraker (creator of Postgres and a Turing Award winner), Andy Pavlo (CMU database professor), and Matei Zaharia (co-creator of Apache Spark). The company builds on Stonebraker's research vision of making the database the central operating system for cloud apps. Whether this asyncio technique is genius or a ticking time bomb probably depends on how much you trust CPython to maintain backward compatibility.