Gitpulse
Merged · Size: M (Medium: 100-500 weighted lines)

Change breakdown: Bug Fix 80% · Performance 20%
#3166 fix(batch): move batch queue global rate limiter to worker consumer level

Batch queue rate limiting moved to per-item processing

ericallam · Mar 3, 2026 · #3166

Batch processing throughput is restored by tracking actual items processed instead of claim attempts, while a new backpressure system prevents visibility timeouts under heavy load.

Batch processing throughput is no longer artificially throttled. Background jobs process efficiently without burning rate limit tokens on empty or single-item queues.

The global rate limiting mechanism was rebuilt to track actual items processed rather than claim attempts. Additionally, a new backpressure system pauses processing when the worker queue gets too deep, preventing tasks from timing out before they can be executed.

The rate limiter was moved out of the queue-claiming phase and into the worker consumer loop. To replace the safety valve the old rate limiter provided at the claim phase, a strict worker queue depth cap was introduced, configurable via a new BATCH_QUEUE_WORKER_QUEUE_MAX_DEPTH environment variable.
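In outline, the fix means a token is taken only when an item is actually dequeued in the consumer loop, not when a queue is claimed. A minimal sketch (all names here are illustrative, not the PR's actual classes):

```typescript
// Simple token bucket: tokens are consumed once per item processed.
class TokenBucket {
  private tokens: number;

  constructor(private capacity: number, private refillPerMs: number) {
    this.tokens = capacity;
  }

  // Top the bucket back up based on elapsed time.
  refill(elapsedMs: number): void {
    this.tokens = Math.min(this.capacity, this.tokens + elapsedMs * this.refillPerMs);
  }

  tryTake(): boolean {
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Worker consumer loop: the token is acquired immediately before each pop,
// so one token always corresponds to one item processed.
function consume(queue: string[], limiter: TokenBucket): number {
  let processed = 0;
  while (queue.length > 0 && limiter.tryTake()) {
    queue.shift(); // stands in for the blocking pop in the real consumer
    processed += 1;
  }
  return processed;
}
```

Because the token check happens after confirming the queue has work, empty queues no longer consume any of the rate budget.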

Original GitHub Description

The global rate limiter was being applied at the FairQueue claim phase, consuming 1 token per queue-claim-attempt rather than per item processed. With many small queues (each batch is its own queue), consumers burned through tokens on empty or single-item queues, causing aggressive throttling well below the intended items/sec limit.

Changes:

  • Move rate limiter from FairQueue claim phase to BatchQueue worker queue consumer loop (before blockingPop), so each token = 1 item processed
  • Replace the FairQueue rate limiter with a worker queue depth cap to prevent unbounded growth that could cause visibility timeouts
  • Add BATCH_QUEUE_WORKER_QUEUE_MAX_DEPTH env var (optional, disabled by default)
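The depth cap described above could be read and applied roughly like this (a hedged sketch: BATCH_QUEUE_WORKER_QUEUE_MAX_DEPTH is the variable named in the PR, but the surrounding function names are hypothetical):

```typescript
// Parse the optional cap; an unset or invalid value means the cap is
// disabled, matching the "optional, disabled by default" behavior.
function maxDepth(): number | undefined {
  const raw = process.env.BATCH_QUEUE_WORKER_QUEUE_MAX_DEPTH;
  if (raw === undefined) return undefined;
  const n = Number(raw);
  return Number.isFinite(n) && n > 0 ? n : undefined;
}

// Backpressure check: pause feeding the worker queue once it is at or
// beyond the configured depth, so queued items cannot outlive their
// visibility timeout before a worker reaches them.
function shouldPause(currentDepth: number): boolean {
  const cap = maxDepth();
  return cap !== undefined && currentDepth >= cap;
}
```

The design point is that the cap bounds the worker queue itself rather than throttling claims, so backpressure kicks in only when there is a real backlog.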