
How to Handle Rate Limits in a Multiplayer Game Backend

A player kills three enemies in quick succession. The first two stat updates persist. The third returns 429 and the client silently fails it. From the player's point of view, that's a kill that didn't count, an achievement that didn't unlock, and a reason to write a negative Steam review. That's what bad rate limiting looks like.

Good rate limiting is invisible. Limits exist, but the client absorbs them, batches around them, and recovers without the player noticing.

This article covers what to actually build, and what to expect from a backend you don't build yourself, so your rate limits stop bots and runaway clients without dropping legitimate players.

What Rate Limiting Solves and Where it Hurts

Rate limiting is a counter. It tracks how many requests a given entity (user, IP, API key, studio) has sent in a window of time. Anything over the threshold gets rejected, usually with an HTTP 429 Too Many Requests response. The point is to protect your backend from three things: malicious abuse (bots, DoS, credential stuffing on login), buggy clients hammering an endpoint in a tight loop, and legitimate-but-runaway traffic during launch spikes that would otherwise melt your database.
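At its simplest, that counter is a fixed-window limiter. A minimal single-process sketch in Python (the class name is ours; a production system would keep this state in a shared store such as Redis rather than in memory):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Minimal fixed-window counter: at most `limit` requests per entity per window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (entity, window index) -> request count

    def allow(self, entity, now=None):
        now = time.monotonic() if now is None else now
        key = (entity, int(now // self.window))
        if self.counts[key] >= self.limit:
            return False  # caller should respond with HTTP 429
        self.counts[key] += 1
        return True
```

The entity key can be a user ID, an IP address, or an API key, which is exactly the difference between the layers discussed below.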

That's the protection side. The cost is that every limit you set is a potential player-experience bug. Here are four examples of how it goes wrong in multiplayer games:

  • Stat updates fire on every kill.
    A player on a streak in a shooter generates 12 kills in 8 seconds. If you call your updateStat endpoint per kill, that's roughly 90 RPM per player on a single endpoint. Multiply by a session of 100 concurrent players and you're hitting global limits before the round ends.
  • Lobby chat opens fast.
    When matchmaking pulls 10 players into a lobby, every client requests friend status, presence, lobby state, and chat history simultaneously. Even at 5 calls per client, that's 50 requests in under a second.
  • Reconnects after a server hiccup.
    If 200 clients all retry at exactly the same moment, the retry storm hits harder than the original spike. This is the thundering herd problem and it's how rate limits make outages worse instead of better.
  • The client doesn't tell the player.
    A 429 returns. The game silently fails the action. The player tries again, gets the same silent failure, and assumes their input is broken.

The core trade-off: tight limits make abuse cheap to block but punish bursty real traffic. Loose limits leave you exposed. Game traffic is bursty by nature (login storms, match-end stat flushes, lobby joins) so the design has to handle bursts cleanly without permanently raising the ceiling.

Where Rate Limiting Needs to Live

[Diagram: Three-layer rate limiting architecture for a multiplayer game backend showing edge, gateway, and service-level limits.]

A common mistake is putting all the limits in one place. Rate limits work best as a defense in depth, with three layers each catching a different kind of abuse:

  • Edge layer (CDN, WAF, load balancer). Catches IP-based floods before they reach your code. This is where you stop credential-stuffing attacks on /login and basic DoS attempts. Cloudflare, AWS WAF, and similar services do this with no game-specific tuning needed.
  • Gateway layer (API gateway). Enforces per-user and per-endpoint quotas after auth resolves who the user actually is. This is where the bulk of "is this user behaving normally" lives. Most modern gateways (Kong, Apache APISIX, AWS API Gateway, Azure API Management) ship with token-bucket implementations, configurable headers, and tier-based quotas.
  • Service layer (game logic). Enforces feature-specific rules that the gateway can't see. Chat spam ("five identical messages in 30 seconds = mute"), match-action cooldowns, friend-add velocity. These are gameplay rules dressed up as rate limits, and they belong in the service that owns the feature.
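To make the service layer concrete, here is a hypothetical sketch of the chat rule above ("five identical messages in 30 seconds = mute"), assuming a single process; the class and method names are ours:

```python
import time
from collections import defaultdict, deque

class ChatSpamGuard:
    """Service-layer rule: N identical messages within a window triggers a mute."""

    def __init__(self, max_repeats=5, window_seconds=30.0):
        self.max_repeats = max_repeats
        self.window = window_seconds
        self.history = defaultdict(deque)  # (user, message) -> recent send times

    def should_mute(self, user, message, now=None):
        now = time.monotonic() if now is None else now
        stamps = self.history[(user, message)]
        while stamps and now - stamps[0] > self.window:
            stamps.popleft()  # drop sends that fell outside the window
        stamps.append(now)
        return len(stamps) >= self.max_repeats
```

Note the key is (user, message), not just user: this is a gameplay rule about repetition, which a generic gateway counter can't express.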

Algorithm choice matters too. For backends, a token bucket is likely the right call.

Token bucket in 30 seconds: a bucket refills at a fixed rate (say, 10 tokens per second). Every request consumes one token. The bucket has a maximum capacity (the "burst size"), and when it's empty, requests are rejected.

This allows controlled bursts (a player firing 8 stat updates after a multi-kill) while still enforcing an average rate over time. Fixed-window counters have a known flaw: a user can fire 2 × limit requests within milliseconds of a window boundary, which is fine for a slow-traffic API and catastrophic for a competitive shooter.
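A minimal single-process sketch of the algorithm (in production this state usually lives in the gateway or a shared store; the class name and parameters are ours):

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/sec up to `capacity` (the burst size)."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full, so bursts are allowed immediately
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last
        self.last = now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With rate=10 and capacity=8, a player can fire 8 requests back to back, then sustain 10 per second; there is no window boundary to exploit.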

Sizing Limits That Don't Punish Real Players

The numbers matter and they're game-specific. Generic API guidance ("100 requests per minute is fine for most APIs") doesn't survive contact with a session-based shooter. Three rules of thumb that hold up in practice:

  1. Set a per-user limit and a global limit, both enforced simultaneously.

    Per-user prevents one bad client from eating the whole studio's quota. Global protects you when something systemic goes wrong (for example, a broken client patch looping a request). A request is rejected if either limit is hit.

  2. Allow burst capacity equal to the worst-case legitimate burst, not average.

    A player who chains 10 actions in a second is normal. Size the burst at 20-30 to give headroom, then enforce the average rate over the next minute. Token bucket gives you this for free; fixed windows do not.
  3. Different endpoints get different limits.

    Login is precious: apply IP-based limits to stop credential stuffing. Stat updates are bulk: offer a batch endpoint and enforce a low per-call rate. Position updates in real-time gameplay should not go through your REST gateway at all; they belong on a UDP or WebSocket channel that's rate-shaped at the transport layer.

Concrete starting numbers for a small-to-mid multiplayer game: 500 RPM per user is generous and covers nearly all legitimate gameplay patterns. 5,000 RPM as a studio-wide ceiling is reasonable for a game with a few thousand concurrent players. These are starting points and they should grow with your CCU. Most managed backends use numbers in this range as defaults.
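Rule 1 above (per-user and global limits enforced simultaneously) can be sketched with per-minute counters; the defaults mirror the 500/5,000 RPM starting numbers, and the class is a hypothetical single-process illustration:

```python
import time
from collections import defaultdict

class DualLimiter:
    """Per-user and global per-minute counters, both checked on every request."""

    def __init__(self, per_user_rpm=500, global_rpm=5000):
        self.per_user_rpm = per_user_rpm
        self.global_rpm = global_rpm
        self.user_counts = defaultdict(int)
        self.global_count = 0
        self.minute = None

    def allow(self, user, now=None):
        now = time.time() if now is None else now
        minute = int(now // 60)
        if minute != self.minute:  # new window: reset all counters
            self.minute = minute
            self.user_counts.clear()
            self.global_count = 0
        if (self.user_counts[user] >= self.per_user_rpm
                or self.global_count >= self.global_rpm):
            return False  # reject if EITHER limit is hit
        self.user_counts[user] += 1
        self.global_count += 1
        return True
```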

Handling 429 on the Client Without Breaking Play

The server side stops the abuse. The client side stops the player from rage-quitting. Both have to be right. When the client gets a 429, four things have to happen:

  1. Respect Retry-After if it's present. The header tells you exactly how long the server wants you to wait. Hardcoding "wait 5 seconds" instead of reading the header is the most common mistake; you either wait too long or not long enough.

  2. Exponential backoff with jitter when there's no header. Standard pattern: wait 1s, then 2s, then 4s, then 8s, capped at some ceiling. Add random jitter (+/- 25%) so 200 reconnecting clients don't all retry at the same millisecond. If all your failed clients retry at exactly 2 seconds, you create a new spike. This is how rate limits go from helping to making the outage worse.

  3. Queue and coalesce non-urgent calls. Stat updates, achievement progress, telemetry - none of these need to fire on the exact frame they happen. Queue them locally and flush in batches. AccelByte's docs recommend flushing stat updates roughly every 10 seconds rather than per-event, which collapses 30 calls into 1 and never trips a per-call rate limit on its own.

  4. Cache reads aggressively. Friend lists, item catalogs, inventory state -- these change rarely. A client that re-fetches them every menu open is a client that hits 429s during a launch spike. Cache with a TTL, invalidate on explicit events (friend added, purchase made), and stop polling.

[Diagram: Client-side flow for handling 429 rate limit responses with Retry-After, exponential backoff, jitter, and request batching.]

The client should also tell the player something honest when an action does fail. Not "Error 429" but "Reconnecting..." or "Action queued." Silent failures make rate limiting feel like a broken game.
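The backoff-with-jitter and queue-and-flush patterns above can be sketched in a few lines of Python (function and class names are ours; a real client would wire send_batch to its batch endpoint):

```python
import random

def retry_delay(attempt, retry_after=None, base=1.0, cap=30.0, jitter=0.25):
    """Seconds to wait before retry number `attempt` (0-based).
    A server-sent Retry-After value wins outright; otherwise use exponential
    backoff (1s, 2s, 4s, ...) capped at `cap`, with +/-25% jitter so a herd
    of failed clients doesn't retry on the same millisecond."""
    if retry_after is not None:
        return retry_after
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(1.0 - jitter, 1.0 + jitter)

class StatBatcher:
    """Queue non-urgent updates locally and flush them as one batch call."""

    def __init__(self, send_batch, flush_interval=10.0):
        self.send_batch = send_batch          # callable taking a list of updates
        self.flush_interval = flush_interval  # seconds between batch flushes
        self.queue = []
        self.last_flush = 0.0

    def record(self, update, now):
        self.queue.append(update)
        if now - self.last_flush >= self.flush_interval:
            self.send_batch(self.queue)       # e.g. 30 per-kill calls become 1
            self.queue = []
            self.last_flush = now
```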

Your Options for Adding Rate Limiting to a Game Backend

There are four meaningful paths. Each has a real trade-off, not just a feature list.

Build it yourself. Full control over the algorithm, the keys, and the response shape. You'll have to implement a token bucket, distribute it across instances, handle clock skew, and write the client retry logic. Pricing: free in code, but the engineering cost is real and ongoing.

Self-hosted API gateway (Kong, Apache APISIX). Mature token-bucket and leaky-bucket plugins, standard 429 responses with X-RateLimit-* headers, multi-dimensional limits (per consumer, per route, global). You still operate the gateway and tune its config. Pricing: free open source; commercial editions vary.

Managed cloud gateway (AWS API Gateway, Azure API Management). Fully managed, scales automatically, integrates with the rest of your cloud stack. Less flexibility on custom keys or game-specific algorithms, and you pay per request. Pricing: usage-based, gets expensive at high CCU.

Edge/CDN service (Cloudflare, AWS WAF). Handles IP-level abuse before it reaches your origin and is very effective against DDoS and credential stuffing. It doesn't replace per-user limits because it doesn't know who your authenticated users are, so it complements a gateway rather than substituting for one. Pricing: tier-based with generous free plans; paid plans scale with traffic.

These options handle the API plumbing. They don't know what a session, a stat, or a chat room is, which means you still have to build the game-specific layer (chat spam rules, match-action cooldowns, batch endpoints) on top. AccelByte ships a backend that already has rate limiting designed for game traffic. The per-user, per-studio, per-endpoint, and per-feature layers are all configured out of the box.

How AccelByte Handles Rate Limiting

AccelByte is a game backend platform that handles accounts, matchmaking, sessions, lobby, chat, stats, commerce, and the rest of the live-ops layer. Rate limiting is built into the gateway and into the individual services. Defaults are tuned for game traffic, and every limit can be configured per game.

[Diagram: AccelByte's two-tier rate limiting model showing studio-level, user-level, and per-API limits plus service-level token-bucket limits in Lobby and Chat.]

When a request does get throttled, the response body follows the standard pattern:

```json
{
  "error": {
    "message": "You have exceeded the allowed request limit. Please try again later."
  }
}
```

Combined with HTTP 429 status, this is enough for any standard client retry logic to handle. The AccelByte SDKs handle reconnection and retry on Lobby and other WebSocket services out of the box; for direct REST calls you still own the retry logic.

To plug AccelByte's rate limiting into your game:

  1. Sign up for AccelByte Gaming Services, create a game namespace, and follow the step-by-step setup process for your game. Default limits are in place from the start.
  2. Configure Lobby's Burst Limit and Duration Limit per the Lobby configuration docs for your traffic.
  3. Configure Chat's Rate Limit and Spam Limit per the Chat configuration docs.
  4. Use batch endpoints for stat updates and telemetry instead of per-event calls. The recommendation in the docs is to batch every 10 seconds, not every second.
  5. Adjust SDK HTTP retry defaults if necessary. Unity and Unreal SDKs include built-in retry logic (1s initial delay doubling to 30s, jitter, 60s timeout, 3 retries). Use SetRetryParameters() in Unity or FHttpRetryScheduler in Unreal to modify these settings. For direct REST calls or non-SDK languages, implement the client-side retry patterns previously described.
  6. Before you launch, talk to your account manager about a pre-launch rate-limit review. AccelByte's Launch Readiness process includes capacity planning that takes your projected CCU and adjusts limits before they become a problem.

Before Launch: What to Check

Things to pin down before launch, regardless of the backend you're using:

  • Run a load test that mirrors realistic player behavior, not just request count. A 1000-RPS test of a single endpoint tells you nothing about whether matchmaking works at peak. Simulate login bursts, lobby joins, in-match stat flushes, and end-of-match settlement together.
  • Verify every client-side retry path on a real 429. Force a 429 in staging (lower the limits temporarily) and confirm the client recovers cleanly. Check that no UI shows "Error 429" to the player.
  • Check that batch endpoints are wired up everywhere they should be. Many studios discover one place where a batch was supposed to happen and didn't, usually telemetry or stat writes.
  • Audit IP-based endpoints separately. Login, password reset, account creation. These need stricter limits than your general API.
  • Have a runbook for raising limits in production. Whether you're self-hosted or on a managed backend, know who to contact and how fast they adjust on launch.

Start for free with AccelByte: Test Rate Limits Against Real Traffic

You can plug AccelByte into your game today and see how the rate limits behave under your actual traffic. AccelByte Gaming Services is free during development and comes with a 90-day trial (or 25,000 player hours, whichever comes first); after that, usage-based pricing on peak concurrent users applies once your game is live. You can adjust the default limits with your account manager whenever your traffic warrants it.

Get Started for Free or Talk to us
