I have a Quake tattoo. The logo, not the netcode architecture diagram, though I considered it. The netcode is the one John Carmack figured out in 1996 that made online FPS possible on dial-up connections. Client-side prediction with server reconciliation. Before QuakeWorld, Quake multiplayer was unplayable over the internet. After it, online gaming existed.
This week I implemented that same algorithm in my Bomberman clone. In a browser. In 2026. The physics are simpler (grid-based movement instead of 3D trajectories) but the fundamental problem is identical: how do you make a game feel responsive when the server is 50-100ms away?
the problem nobody notices on localhost
The game worked perfectly on localhost. Movement was instant, bombs exploded on time, everything felt responsive. That's because the round trip between client and server was under 1ms.
Add real network latency, even 50ms, and there's a visible gap between pressing an arrow key and seeing your character move. You press right, wait, then your character slides right. It feels like you're piloting a submarine. Or playing the original Quake over a 28.8k modem.
The server-authoritative model from part 2 is the cause. The client sends input to the server, the server processes it, the server broadcasts the new state, the client renders it. Every action takes a full round trip before the player sees it. Correct, but sluggish.
the Carmack model
The idea is deceptively simple:
- Predict. When you press a key, the client moves your character immediately using the same physics the server will use. Don't wait for permission. Just move.
- Verify. The server is still the boss. It processes your input, runs its own physics, and broadcasts the authoritative state.
- Reconcile. When the server's answer arrives, the client compares its prediction to reality. If they match — and they usually do, because both sides run the same physics — nothing visible happens. If they don't match, the client snaps to the server's position.
The trick that makes it work: sequence numbers. Every input the client sends gets a monotonically increasing sequence number. The server echoes back "I've processed up to seq 42." The client keeps a buffer of all unconfirmed inputs. When the server confirms seq 42, the client:
- Discards inputs 1–42 (the server has processed them, the prediction was correct).
- Takes the server position as ground truth.
- Replays inputs 43–48 on top of that position, re-predicting the inputs the server hasn't confirmed yet.
This replay is what prevents the snap-back from erasing your recent movement. Without it, every server update would teleport you backwards by ~100ms of movement, then your next input would push you forward again. Visible jitter, 60 times a second.
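The discard/snap/replay steps above can be sketched as one function. This is a minimal illustration, not the game's actual code: the names (PendingInput, ServerUpdate, reconcile) and the SPEED constant are assumptions, and it returns a position instead of mutating a sprite.

```typescript
interface PendingInput {
  seq: number; // monotonically increasing per input
  dx: number;
  dy: number;
}

interface ServerUpdate {
  lastProcessedSeq: number; // "I've processed up to seq N"
  x: number;
  y: number;
}

const SPEED = 2; // pixels per tick (assumed)

function reconcile(
  pending: PendingInput[],
  update: ServerUpdate,
): { x: number; y: number } {
  // Discard inputs the server has already processed.
  const unconfirmed = pending.filter((i) => i.seq > update.lastProcessedSeq);
  // Keep the buffer's identity: mutate in place rather than reassign.
  pending.length = 0;
  pending.push(...unconfirmed);

  // Start from the server's truth, then replay unconfirmed inputs on top.
  let x = update.x;
  let y = update.y;
  for (const input of pending) {
    x += input.dx * SPEED;
    y += input.dy * SPEED;
  }
  return { x, y };
}
```

If the prediction was correct, the replayed position lands exactly where the sprite already is, so nothing visibly moves.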
rethinking the input loop
The old input system was event-driven: send a message on keydown, send another on keyup. Simple, and fine for a server-authoritative model where the client is just a dumb terminal.
Prediction needs something different. The client has to simulate movement at the same rate as the server, 60 times per second, with matching physics. Event-driven input doesn't give you that. You need a tick.
The input system moved from events to a PixiJS ticker callback:
- Keydown/keyup now only updates a heldKeys set. No network sends.
- Every tick (60fps): read held keys, compute dx/dy, send input with sequence number to the server, push to the pending buffer, call the prediction callback to move the sprite immediately.
This was a bigger refactor than expected. The input module became a Client class that owns the ticker callback, the pending buffer, the sequence counter, and the WebSocket connection. The old functional approach couldn't hold all that state cleanly. Sometimes a class is the right tool, especially when multiple pieces of state need to be mutated together on a shared cadence.
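A stripped-down sketch of that class, with the WebSocket and sprite replaced by injected callbacks so the shape is visible. Everything here is illustrative: the real Client owns the ticker and connection, and tick() would be registered with the PixiJS ticker rather than called by hand.

```typescript
type Input = { seq: number; dx: number; dy: number };

class Client {
  private heldKeys = new Set<string>();
  private seq = 0;
  readonly pending: Input[] = [];

  constructor(
    private send: (input: Input) => void,      // stand-in for the WebSocket
    private predict: (dx: number, dy: number) => void, // moves the sprite
  ) {}

  onKeyDown(key: string) { this.heldKeys.add(key); }
  onKeyUp(key: string) { this.heldKeys.delete(key); }

  // Called once per frame, e.g. app.ticker.add(() => client.tick()).
  tick() {
    const dx = (this.heldKeys.has("ArrowRight") ? 1 : 0) -
               (this.heldKeys.has("ArrowLeft") ? 1 : 0);
    const dy = (this.heldKeys.has("ArrowDown") ? 1 : 0) -
               (this.heldKeys.has("ArrowUp") ? 1 : 0);
    if (dx === 0 && dy === 0) return;

    const input = { seq: ++this.seq, dx, dy };
    this.send(input);         // to the server
    this.pending.push(input); // buffered until the server confirms it
    this.predict(dx, dy);     // move immediately; don't wait for permission
  }
}
```

The sequence counter, the pending buffer, and the held-keys set all mutate on the same cadence, which is why a class fits better here than the old functional module.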
the bugs that teach
Three bugs, each teaching a different lesson about state management in JavaScript.
the stale array reference
The reset function for a new round:
```typescript
pendingInputBuffer = [];
```
Looks innocent. But the players module held a reference to the old array. After reset, it was reading from a detached, empty buffer that would never get new inputs. The prediction worked for one round, then silently broke.
Fix: pendingInputBuffer.length = 0. Mutates in place, all references stay valid.
This is the JavaScript version of Rust's ownership problem. In Rust, the compiler would catch this: you can't reassign a reference out from under a borrower. In JavaScript, it silently works until it doesn't. The failure mode isn't an error. It's correct behavior on stale data.
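The failure mode in miniature. The variable names are illustrative; the mechanics are exactly what bit the players module:

```typescript
// Reassignment detaches every existing reference.
let buffer: number[] = [1, 2, 3];
const capturedRef = buffer; // what the players module effectively held

buffer = [];      // buffer now points at a brand-new array...
buffer.push(4);   // ...and capturedRef never sees this push.
// capturedRef is still [1, 2, 3]: correct-looking, permanently stale.

// In-place mutation keeps the one shared array valid everywhere.
let buffer2: number[] = [1, 2, 3];
const ref2 = buffer2;
buffer2.length = 0; // clears the array all references point at
buffer2.push(4);
// ref2 is [4], and ref2 === buffer2 still holds.
```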
the reconciliation that didn't accumulate
First attempt at replaying unconfirmed inputs:
```typescript
pendingInputs.forEach((input) => {
  setPlayerPosition({ ...playerData, ...input }, player);
});
```
Each iteration spread the server position with one input's dx/dy. The last input won. Earlier ones were lost. The player would snap to server position + one tick of movement instead of server position + six ticks.
Fix: accumulate across the loop:
```typescript
myPlayerSprite.x = serverData.x;
myPlayerSprite.y = serverData.y;
for (const input of pendingInputs) {
  myPlayerSprite.x += input.dx * speed;
  myPlayerSprite.y += input.dy * speed;
}
```
Start from the server's truth, apply each unconfirmed input in order. The position builds up incrementally. That's what replay means.
the one-frame snap-back
The update loop called setPlayerPosition(serverData) for all players, including the local one, before running reconciliation. For one frame — 16ms — the local player would visibly jump backwards to the stale server position. Then reconciliation would push it forward. The result: visible jitter on every server tick.
Fix: skip setPlayerPosition entirely for the local player. Let reconciliation handle their position exclusively. The local player's visual position is always the result of server truth + replayed inputs, never raw server data.
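The routing rule, as a sketch. The function and field names here are assumptions; the point is the branch: raw server positions go to remote players only, and the local player goes through reconciliation exclusively.

```typescript
interface PlayerState { id: string; x: number; y: number; }

function applyServerState(
  states: PlayerState[],
  myId: string,
  setPosition: (s: PlayerState) => void, // render raw server position
  reconcile: (s: PlayerState) => void,   // server truth + replayed inputs
) {
  for (const s of states) {
    if (s.id === myId) {
      // Never set the local sprite from raw server data: that is the
      // one-frame snap-back. Reconciliation owns this position.
      reconcile(s);
    } else {
      setPosition(s); // remote players just render what the server says
    }
  }
}
```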
what's missing
No client-side collision. The client predicts movement through walls and bombs. It doesn't know they're solid. The server rejects the move, and reconciliation snaps the player back. With low latency it's imperceptible. With 100ms+ latency near a wall, there's a visible rubber-band effect: you walk into the wall for a frame or two, then get pulled back.
The proper fix is duplicating the server's collision logic in TypeScript so the client predicts correctly. That means sharing constants (tile size, player size, grid layout) and reimplementing can_player_move. Deferred for now. The game is playable without it, and the architecture is ready for it when it matters.
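For a flavor of what that port might look like: a hypothetical grid lookup, not the server's actual can_player_move. The tile size and the "col,row" wall-key format are invented for the sketch; the real fix would share these constants with the server so both sides agree exactly.

```typescript
const TILE_SIZE = 32; // assumed; must match the server's constant

function canPlayerMove(
  x: number,
  y: number,
  walls: Set<string>, // keys like "col,row" for solid tiles (assumed format)
): boolean {
  // Map pixel coordinates to a grid cell, then check for a solid tile.
  const col = Math.floor(x / TILE_SIZE);
  const row = Math.floor(y / TILE_SIZE);
  return !walls.has(`${col},${row}`);
}
```

With this check gating the prediction step, the client would refuse to walk into walls in the first place, and the rubber-band effect disappears.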
testing without a network
Chrome's network throttling doesn't affect WebSockets reliably. I needed a way to simulate latency in development.
The solution was crude but effective: wrap the ws.onmessage handler with setTimeout(100). Every server message arrives 100ms late. Prediction felt instant: you press a key, your character moves immediately. Other players lagged behind by 100ms, which looked correct. Reconciliation corrected smoothly. Removed the setTimeout before committing.
No fancy network simulation tool. Just a setTimeout and your eyes.
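The shim generalizes to a one-line wrapper around any message handler. This version is a sketch; the name withLatency is mine, and in the game it would wrap the ws.onmessage callback.

```typescript
const SIMULATED_LATENCY_MS = 100;

// Wrap a handler so every message is delivered late. setTimeout with
// equal delays fires in scheduling order, so messages stay in sequence,
// just 100ms behind.
function withLatency<T>(handler: (msg: T) => void): (msg: T) => void {
  return (msg: T) => {
    setTimeout(() => handler(msg), SIMULATED_LATENCY_MS);
  };
}

// Usage (illustrative): ws.onmessage = withLatency((event) => handleMessage(event));
```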
what prediction taught me
The algorithm itself is ~30 lines of meaningful code. The sequence number protocol, the pending buffer, the replay loop. But getting it right required understanding three things:
- References vs values. Reassigning an array creates a new object. Anything holding the old reference is now reading dead state. Mutate in place when shared.
- Accumulation vs replacement. Replay means applying inputs cumulatively on top of a base position. Each step builds on the last. Spreading properties replaces; adding to coordinates accumulates.
- Rendering authority. The local player's position should come from exactly one source: reconciliation. Not from the server directly, not from prediction alone. One authoritative path prevents flicker.
Carmack solved this in C, on a 28.8k modem, in 1996. I solved it in TypeScript, on a broadband connection, in 2026, with an AI explaining the concepts. The tools change. The problem doesn't.
next up
The game plays well now. Movement is responsive even with latency. Rounds have structure. The core loop (lobby, play, die, rematch) works end to end.
Next time: what I actually learned building this thing, and the gap between "it works" and "it's a game."