
feat(gateway): add general per-IP request rate limiter#48336

Open
TerminalsandCoffee wants to merge 1 commit into openclaw:main from TerminalsandCoffee:feat/gateway-security-headers

Conversation

@TerminalsandCoffee

Summary

  • Adds a general-purpose per-IP request rate limiter to protect the gateway from request flooding when exposed publicly (Tailscale Funnel, reverse proxy, LAN binding)
  • Unlike the existing auth rate limiter (which only tracks failed auth attempts), this counts all incoming HTTP requests per client IP
  • Default: 120 requests/minute per IP, loopback exempt, configurable via RequestRateLimitConfig
  • Returns 429 Too Many Requests with Retry-After header when limit exceeded
  • Wired into the HTTP request pipeline as an optional requestRateLimiter parameter — checked early, before all request stages
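
The early-pipeline check described above can be sketched as follows. This is an illustrative stand-in based on the PR description, not the gateway's actual code: the `requestRateLimiter` parameter name comes from the PR, while the request/response interfaces here are simplified assumptions.

```typescript
// Minimal stand-ins for Node's request/response objects, to keep the sketch
// self-contained. The real handler uses http.IncomingMessage/ServerResponse.
interface CheckResult { allowed: boolean; retryAfterMs: number }
interface Limiter { check(ip?: string): CheckResult }
interface MinimalReq { socket?: { remoteAddress?: string } }
interface MinimalRes {
  statusCode: number;
  setHeader(name: string, value: string): void;
  end(body?: string): void;
}

function handleRequest(req: MinimalReq, res: MinimalRes, requestRateLimiter?: Limiter): boolean {
  // Checked early, before any other request stage runs.
  if (requestRateLimiter) {
    const result = requestRateLimiter.check(req.socket?.remoteAddress);
    if (!result.allowed) {
      // Round the remaining window up to whole seconds; never advertise 0.
      const retryAfter = result.retryAfterMs > 0 ? Math.ceil(result.retryAfterMs / 1000) : 1;
      res.statusCode = 429;
      res.setHeader("Retry-After", String(retryAfter));
      res.end("Too Many Requests");
      return false; // request was rejected
    }
  }
  // ...remaining request stages would run here...
  return true;
}
```

Because the check runs before all other stages, a flooded gateway spends almost no work per rejected request: one map lookup and one small response.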

Files

  • src/gateway/request-rate-limit.ts: new module, createRequestRateLimiter() with a fixed-window counter
  • src/gateway/request-rate-limit.test.ts: 13 unit tests
  • src/gateway/server-http.ts: accept and check requestRateLimiter in handleRequest()
  • src/gateway/server-runtime-state.ts: thread requestRateLimiter through to HTTP server creation

Design

  • Fixed window (not sliding) — simpler, lower overhead than per-request array allocations
  • Follows existing patterns — mirrors auth-rate-limit.ts API shape (check/size/prune/dispose)
  • Opt-in — callers create a limiter instance and pass it; no behavior change if omitted
  • Loopback exempt by default — local CLI sessions are never throttled
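
A fixed-window limiter with the check/size/prune/dispose shape described above can be sketched like this. All names and option defaults here are taken from the PR description or are illustrative assumptions; the actual module may differ in detail (in particular, the loopback test here is a simplified approximation).

```typescript
type CheckResult = { allowed: boolean; retryAfterMs: number };

interface RequestRateLimiter {
  check(ip: string | undefined): CheckResult;
  size(): number;
  prune(): void;
  dispose(): void;
}

function createRequestRateLimiter(opts?: {
  maxRequests?: number;   // default per the PR: 120 requests...
  windowMs?: number;      // ...per 60s window
  exemptLoopback?: boolean;
  now?: () => number;     // injectable clock, handy for tests
}): RequestRateLimiter {
  const maxRequests = opts?.maxRequests ?? 120;
  const windowMs = opts?.windowMs ?? 60_000;
  const exemptLoopback = opts?.exemptLoopback ?? true;
  const now = opts?.now ?? Date.now;
  // One counter per IP; the whole bucket resets when its window expires.
  const buckets = new Map<string, { windowStart: number; count: number }>();

  const isLoopback = (ip: string) =>
    ip === "::1" || ip.startsWith("127.") || ip === "::ffff:127.0.0.1";

  return {
    check(ip) {
      if (!ip) return { allowed: true, retryAfterMs: 0 }; // fail open on unknown peer
      if (exemptLoopback && isLoopback(ip)) return { allowed: true, retryAfterMs: 0 };
      const t = now();
      let bucket = buckets.get(ip);
      if (!bucket || t - bucket.windowStart >= windowMs) {
        bucket = { windowStart: t, count: 0 };
        buckets.set(ip, bucket);
      }
      bucket.count += 1;
      if (bucket.count > maxRequests) {
        return { allowed: false, retryAfterMs: bucket.windowStart + windowMs - t };
      }
      return { allowed: true, retryAfterMs: 0 };
    },
    size: () => buckets.size,
    prune() {
      const t = now();
      for (const [ip, bucket] of buckets) {
        if (t - bucket.windowStart >= windowMs) buckets.delete(ip);
      }
    },
    dispose: () => buckets.clear(),
  };
}
```

The fixed window keeps memory at one `{windowStart, count}` pair per active IP, which is the "lower overhead than per-request array allocations" trade-off the design notes call out; the cost is a burst of up to 2x the limit straddling a window boundary.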

Test plan

  • pnpm vitest run src/gateway/request-rate-limit.test.ts — 13/13 passing
  • pnpm vitest run src/gateway/auth-rate-limit.test.ts — 21/21 passing (no regression)
  • TypeScript compilation clean (no errors in changed files)
  • Tests cover: basic counting, window expiry, per-IP isolation, IPv4/IPv6 normalization, loopback exemption, prune, dispose
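
The window-expiry case listed above is typically exercised with a controllable clock rather than real waiting. The sketch below uses a hand-rolled injectable clock and a minimal inline counter as a stand-in for the real module (the actual tests use vitest and the real limiter):

```typescript
// Fake clock the "limiter" reads instead of Date.now().
let fakeNow = 0;
const windowMs = 60_000;
const counts = new Map<string, { start: number; n: number }>();

// Minimal fixed-window check: resets the per-IP counter once the window elapses.
function check(ip: string, max = 120): boolean {
  let bucket = counts.get(ip);
  if (!bucket || fakeNow - bucket.start >= windowMs) {
    bucket = { start: fakeNow, n: 0 };
    counts.set(ip, bucket);
  }
  bucket.n += 1;
  return bucket.n <= max;
}

// Exhaust the window...
for (let i = 0; i < 120; i++) check("203.0.113.9");
console.log(check("203.0.113.9")); // 121st request in the same window: false

// ...then advance the clock past the window boundary.
fakeNow += windowMs;
console.log(check("203.0.113.9")); // new window, allowed again: true
```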

Closes #20373

🤖 Generated with Claude Code

Adds a fixed-window per-IP request rate limiter that protects the
gateway from request flooding when exposed via Tailscale Funnel or a
public reverse proxy. Unlike the existing auth rate limiter (which
tracks failed authentication attempts), this counts all incoming HTTP
requests per client IP.

- New module: request-rate-limit.ts with createRequestRateLimiter()
- Fixed-window counter per IP (default: 120 req/min)
- Loopback addresses exempt by default (local CLI unaffected)
- Returns 429 + Retry-After header when limit exceeded
- Periodic background pruning of expired entries
- 13 unit tests covering counting, window expiry, IP isolation,
  loopback exemption, and cleanup

Wired into createGatewayHttpServer() as an optional requestRateLimiter
parameter, checked early in the request pipeline before all stages.

Closes openclaw#20373

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@openclaw-barnacle bot added the gateway (Gateway runtime) and size: M labels on Mar 16, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 16, 2026

Greptile Summary

This PR adds a well-structured per-IP fixed-window request rate limiter to the gateway, mirroring the existing auth-rate-limit.ts API. The module itself (request-rate-limit.ts) is clean and well-tested, but the integration in server-http.ts has a critical flaw: the rate-limit check is performed using req.socket?.remoteAddress before the trusted-proxy configuration is loaded.

Key findings:

  • Critical: requestRateLimiter.check(req.socket?.remoteAddress) runs before trustedProxies / allowRealIpFallback are read from config. In a reverse-proxy deployment (nginx → gateway), req.socket?.remoteAddress is the proxy's own IP (often 127.0.0.1), which triggers the loopback exemption and silently bypasses rate limiting for every request. This defeats the stated goal for the reverse-proxy scenario.
  • The fix is to move the rate-limit check to after the config snapshot and client IP are resolved, using resolveRequestClientIp(req, trustedProxies, allowRealIpFallback) as the IP source.
  • The underlying createRequestRateLimiter implementation, test coverage, and plumbing through server-runtime-state.ts are all correct.

Confidence Score: 2/5

  • Not safe to merge without addressing the proxy-IP bypass that silently disables rate limiting for all reverse-proxy deployments.
  • The rate limiter module is well-implemented and tested, but the integration point in server-http.ts uses the raw socket address instead of the resolved client IP. For any deployment behind a local reverse proxy — the most common production setup — the loopback exemption causes the rate limiter to be completely bypassed, making the feature ineffective precisely where flood protection is most needed.
  • src/gateway/server-http.ts — the rate-limit check must be moved to after the config snapshot loads and the client IP is resolved via resolveRequestClientIp.

Last reviewed commit: 5cc6be7


// General per-IP request rate limiting (flood protection).
if (requestRateLimiter) {
  const result = requestRateLimiter.check(req.socket?.remoteAddress);

Rate limiter bypassed entirely when behind a reverse proxy

The limiter is called with req.socket?.remoteAddress, which is the TCP peer address — the IP of the reverse proxy, not the actual client. For the most common "reverse proxy" deployment scenario mentioned in the PR description, this means:

  1. If the proxy runs on loopback (127.0.0.1), isLoopbackAddress returns true and every request is exempted from rate limiting.
  2. If the proxy is at any other private address, all clients share that single counter, so the first client to hit 120 req/min blocks everyone.

The true client IP must be resolved first (reading X-Forwarded-For / X-Real-IP against the configured trustedProxies). The helper resolveRequestClientIp already does this correctly, but it requires the config snapshot. Moving the rate-limit check to after the config is loaded and the client IP is resolved would fix this:

// After loading configSnapshot and resolving trustedProxies / allowRealIpFallback:
const clientIp = resolveRequestClientIp(req, trustedProxies, allowRealIpFallback);
if (requestRateLimiter) {
  const result = requestRateLimiter.check(clientIp);
  if (!result.allowed) {
    const retryAfter = result.retryAfterMs > 0 ? Math.ceil(result.retryAfterMs / 1000) : 1;
    res.setHeader("Retry-After", String(retryAfter));
    sendText(res, 429, "Too Many Requests");
    return;
  }
}

Without this fix, the feature only works correctly for direct (non-proxied) connections.



@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5cc6be71ac



// General per-IP request rate limiting (flood protection).
if (requestRateLimiter) {
  const result = requestRateLimiter.check(req.socket?.remoteAddress);

P1: Resolve real client IP before enforcing request limits

This limiter keys requests by req.socket.remoteAddress, but in reverse-proxy deployments that value is the proxy hop, not the end user. The same handler already loads gateway.trustedProxies / allowRealIpFallback later for the auth paths. As written, rate limiting here either bypasses protection entirely when the proxy connects from loopback (the default loopback exemption), or makes all users behind one proxy share a single bucket and receive cross-user 429s. Compute the client IP with resolveRequestClientIp(...) (falling back to the socket address) before calling requestRateLimiter.check.
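
The resolution both reviewers ask for typically looks like the sketch below. This is a generic illustration of trusted-proxy X-Forwarded-For handling, not the gateway's actual resolveRequestClientIp helper, whose signature and edge cases may differ.

```typescript
// Resolve the real client IP behind zero or more trusted proxy hops.
// socketIp is the TCP peer; xForwardedFor is the raw header value, if any.
function resolveClientIp(
  socketIp: string | undefined,
  xForwardedFor: string | undefined,
  trustedProxies: string[],
): string | undefined {
  // Only honor forwarded headers when the TCP peer is a trusted proxy;
  // otherwise any direct client could spoof its own IP via the header.
  if (!socketIp || !trustedProxies.includes(socketIp)) return socketIp;
  if (!xForwardedFor) return socketIp;
  // X-Forwarded-For is "client, proxy1, proxy2": walk from the right,
  // skipping trusted hops, and take the first untrusted address.
  const hops = xForwardedFor.split(",").map((s) => s.trim()).filter(Boolean);
  for (let i = hops.length - 1; i >= 0; i--) {
    if (!trustedProxies.includes(hops[i])) return hops[i];
  }
  // Every hop was a trusted proxy; fall back to the leftmost entry.
  return hops[0] ?? socketIp;
}
```

With a resolver like this in place, the rate limiter sees the end user's address even when nginx connects from 127.0.0.1, so the loopback exemption no longer swallows proxied traffic.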



Labels

gateway Gateway runtime size: M


Development

Successfully merging this pull request may close these issues.

Feature: Gateway rate limiting and security headers

1 participant