Up until now, launching the server meant everyone was immediately in the game. Open two browser tabs, both players are already moving and placing bombs. When someone dies, you restart the server. No countdown, no waiting for other players, no concept of rounds.
The core gameplay loop was working. But a game that starts without asking and ends without telling you isn't a game. It's a tech demo. This week I added the thing that turns a tech demo into something you'd actually want to play with friends: a lobby, a ready system, and a round loop.
the state machine
The game now has three phases, and I modeled them with a Rust enum:
```rust
enum GameState {
    Lobby,
    Playing,
    RoundOver { timer: u64, winner_id: Option<u8> },
}
```
Lobby: players connect, see each other, press R to ready up. Playing: the actual game. RoundOver: brief pause showing the winner, then back to Lobby.
The interesting part is RoundOver carrying its own data. The countdown timer and winner ID only exist in that phase. They're not floating around as fields on some god struct waiting to be accidentally read during the wrong phase. When you match on the enum, the compiler forces you to handle every variant:
```rust
GameState::RoundOver { ref mut timer, ref mut winner_id } => {
    *timer -= 1;
    // ...
}
```
Coming from TypeScript, where you'd model this as a discriminated union, the Rust version feels stricter. You can't access `timer` during Playing because it doesn't exist during Playing. In TypeScript, you'd narrow the type with an if check — but nobody stops you from skipping the check and accessing `timer` anyway. In Rust, it's structurally impossible.
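Here's the whole transition in one compiling miniature. The `tick` function and the comments in the other arms are placeholders, not the project's actual loop, but the shape is the same: the phase data only exists inside its arm.

```rust
#[derive(Debug, PartialEq)]
enum GameState {
    Lobby,
    Playing,
    RoundOver { timer: u64, winner_id: Option<u8> },
}

// Exhaustive match: the compiler rejects this function if a variant is
// missing, and `timer` is only in scope inside the RoundOver arm.
fn tick(state: &mut GameState) {
    match state {
        GameState::Lobby => { /* wait for ready-ups */ }
        GameState::Playing => { /* run the simulation */ }
        GameState::RoundOver { timer, .. } => {
            *timer = timer.saturating_sub(1);
            if *timer == 0 {
                *state = GameState::Lobby;
            }
        }
    }
}
```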
the deadlock returns
I hit a deadlock during the refactor. Third time in this project, same class of mistake.
The game loop acquired the players mutex at the top of the loop body and held it across every interval.tick().await. Meanwhile, the connection handler was trying to lock the same mutex to insert new players. The game loop never released the lock between ticks, so accept_connections blocked forever.
The symptom was subtle: players "connected" — the WebSocket handshake succeeded — but never appeared in the lobby. Zero players, every tick. No errors, no panics, no crash. Just silence.
The fix was straightforward: lock per tick, not per loop. Acquire the mutex, do the work, drop the guard, then await the next tick. The lock is held for microseconds instead of forever.
The rule I keep relearning: lock late, release early. If you just moved a lock acquisition to a wider scope, and something stopped updating, check whether you're holding the lock across an await point.
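A minimal sketch of the pattern using std's blocking `Mutex` (the real loop uses an async mutex and `interval.tick().await`; `run_one_tick` and the pared-down `Player` here are stand-ins):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical, simplified player record.
struct Player {
    alive: bool,
}

// One tick's worth of work. The guard lives in an inner scope so it
// drops *before* the loop waits for the next tick, leaving the mutex
// free for the connection handler to insert new players.
fn run_one_tick(players: &Arc<Mutex<HashMap<u8, Player>>>) -> usize {
    let count = {
        let mut guard = players.lock().unwrap();
        for player in guard.values_mut() {
            player.alive = true; // stand-in for real per-tick updates
        }
        guard.len()
    }; // guard dropped here: lock held for microseconds, not forever
    // ...the async version would `interval.tick().await` at this point
    count
}
```

The buggy version was the same loop with the `lock()` hoisted above it, so the guard never dropped.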
the "alive = false" trick
During Lobby, players shouldn't be able to move or place bombs. My first instinct was to add phase checks to every input handler: if we're not in Playing, ignore the input.
Then I realized: I already had a mechanism that blocks all player actions. Every input handler starts with `if !player.alive { continue; }`. So I just set `alive: false` on player creation. The existing guards block everything. When all players ready up, `set_all_players_alive()` flips them on right before transitioning to Playing.
Reusing an existing field to enforce a new constraint. No new code paths, no phase checks scattered across handlers. It felt like the right kind of lazy.
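In sketch form (the names here are illustrative, not the project's API):

```rust
// Hypothetical, pared-down player struct.
struct Player {
    alive: bool,
    ready: bool,
}

// Players spawn "dead" in the lobby, so every existing
// `if !player.alive` guard blocks their input for free.
fn new_lobby_player() -> Player {
    Player { alive: false, ready: false }
}

// Stand-in for any input handler: the pre-existing liveness guard
// doubles as the lobby check, with no phase check needed.
fn try_move(player: &Player) -> bool {
    if !player.alive {
        return false; // ignored, same as when dead mid-round
    }
    true
}

// Called right before the Lobby -> Playing transition.
fn set_all_players_alive(players: &mut [Player]) {
    for p in players {
        p.alive = true;
    }
}
```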
the inverted condition
The `update_players_state` function handles both building the player state for broadcast and updating positions. I added a `freeze_positions: bool` parameter so RoundOver could broadcast positions without letting the winner walk around during the victory screen.
First attempt:
```rust
if freeze_positions {
    update_player_position(map, bombs, player);
}
```
Exactly backwards: it moves players only when they should be frozen. The fix:
```rust
if !freeze_positions {
    update_player_position(map, bombs, player);
}
```
Classic. A unit test caught it immediately: it just checked that `x` didn't change when `freeze_positions` was true. The kind of bug that would have been confusing in production ("why is the winner teleporting during the victory screen?") but trivial to catch with a test.
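Sketched as a test (simplified: the real function also takes the map and bombs, and the position update is stubbed as `x += 1.0`):

```rust
// Hypothetical miniature of the function under test.
struct Player {
    x: f32,
}

fn update_players_state(player: &mut Player, freeze_positions: bool) {
    if !freeze_positions {
        player.x += 1.0; // stand-in for update_player_position(...)
    }
}

#[test]
fn frozen_players_do_not_move() {
    let mut p = Player { x: 5.0 };
    update_players_state(&mut p, true);
    assert_eq!(p.x, 5.0); // fails with the inverted condition
}
```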
the vanishing explosion
When the last player died, the server immediately transitioned to RoundOver. But RoundOver wasn't broadcasting game state. The client got one last frame of the killing explosion, then nothing. The explosion vanished instantly, no death played out.
The fix: broadcast game state during RoundOver too, keep ticking explosions so they expire naturally, and only start the 3-second countdown after the fireworks finish. The round-ending kill now plays out in full before the winner banner appears.
This is the kind of thing you don't think about when you're designing state transitions on paper. Phase transition means "stop doing game stuff." But the visual consequences of the last game event still need to resolve. The state machine moved on; the player's eyes haven't.
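A sketch of the repaired RoundOver logic, with invented types (`Explosion` and `tick_round_over` are illustrative): explosions keep ticking and expiring, and the countdown only drains once they're gone.

```rust
// Hypothetical explosion with a remaining lifetime in ticks.
struct Explosion {
    ticks_left: u32,
}

// Runs every tick during RoundOver. Returns true when it's time to
// transition back to Lobby.
fn tick_round_over(explosions: &mut Vec<Explosion>, timer: &mut u64) -> bool {
    // Let in-flight explosions play out and expire naturally.
    for e in explosions.iter_mut() {
        e.ticks_left = e.ticks_left.saturating_sub(1);
    }
    explosions.retain(|e| e.ticks_left > 0);

    if explosions.is_empty() {
        // Only now start draining the 3-second countdown.
        *timer = timer.saturating_sub(1);
    }

    // broadcast_game_state(...) would still run here every tick, so the
    // client keeps receiving frames during the victory screen.
    *timer == 0
}
```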
finally using React for something
Up until now, React was a shell around the PixiJS canvas — everything happened imperatively inside a useEffect. The lobby was the first time React state actually earned its keep.
The game phase (lobby, playing, roundOver) lives in useState, and the WebSocket message handler updates it. Overlays render conditionally based on the phase. Standard React pattern, except for the stale closure.
The onmessage callback is created inside useEffect and captures the phase value from the render it was created in. So this check is always true inside the handler, even after the game starts:
```ts
if (phase === "lobby") {
  // always true — phase was captured at effect creation
}
```
The fix is the functional form of setState:
```ts
setPhase((prev) => (prev === "lobby" ? "playing" : prev));
```
This reads the latest state at the time of the update, sidestepping the stale closure entirely. It's the kind of thing you know intellectually but still trip over when you're wiring up a WebSocket handler inside a useEffect.
The overlays themselves are simple. DOM elements positioned absolutely over the canvas. Lobby shows a player list with color-coded borders matching sprite colors, ready/not-ready status, and a "Press R" prompt. Round-over shows the winner's name in their player color, or "DRAW" in grey. No CSS files, just inline styles. It's a game prototype, not a design system.
what the lobby taught me
The lobby has zero gameplay code. No physics, no collision, no timers. But it touched almost everything:
- The game loop needed to branch on phase and handle state transitions.
- The connection handler needed to interact with game state without deadlocking it.
- The player struct needed to pull double duty (alive = false as a lobby guard).
- The client needed React state for the first time, plus stale-closure awareness.
- The broadcast needed to keep running during RoundOver so explosions could finish.
The non-gameplay parts of a game — lobby, menus, transitions — are where the integration complexity lives. The gameplay code is self-contained: physics in, state out. The lobby code has to coordinate across every system.
next up
The game has a loop now. Lobby, play, die, see the winner, play again. It works perfectly on localhost.
But everything works perfectly on localhost. The round trip between client and server is under 1ms. Add real network latency and suddenly there's a gap between pressing an arrow key and seeing your character move. The game feels like you're piloting a submarine.
Next time: implementing the algorithm John Carmack pioneered in 1996 to save online gaming. Client-side prediction with server reconciliation, in a browser, with a Rust server.