Configuration & Tuning
Recommended defaults and trade-offs.
This page explains how to choose RateLimiterOptions values.
What you configure
Options are set per provider. You typically construct one RateLimiterOptions and share a RateLimiter.
```rust
use trypema::{HardLimitFactor, RateGroupSizeMs, RateLimiterOptions, SuppressionFactorCacheMs, WindowSizeSeconds};
use trypema::local::LocalRateLimiterOptions;
use trypema::redis::RedisRateLimiterOptions;

let options = RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::default(),
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    },
    redis: RedisRateLimiterOptions {
        connection_manager: todo!("create redis::aio::ConnectionManager"),
        prefix: None,
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::default(),
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    },
};
let _ = options;
```
Knobs (per provider):
- window_size_seconds
- rate_group_size_ms
- hard_limit_factor (suppressed strategy)
- suppression_factor_cache_ms (suppressed strategy)
For the Redis provider, keep configuration identical across instances:
window_size_seconds, rate_group_size_ms, hard_limit_factor, and suppression_factor_cache_ms.

Defaults (recommended starting point)
This is a good baseline for most APIs:
- window_size_seconds = 60
- rate_group_size_ms = 10
- hard_limit_factor = 1.5 (suppressed)
- suppression_factor_cache_ms = 100 ms (suppressed)
Knobs
window_size_seconds
Sliding-window duration.
- Larger: smoother decisions, slower recovery.
- Smaller: faster recovery, more sensitive to bursts.
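For example, starting from the 60-second baseline, a shorter window lets a burst age out of the accounting sooner, while a longer window smooths decisions at the cost of slower recovery. A minimal sketch using the validated newtype (the values 30 and 120 come from the presets below; other positive values are assumed to validate the same way):

```rust
use trypema::WindowSizeSeconds;

// Shorter window: old requests fall out of the window sooner, so keys recover faster.
let fast_recovery = WindowSizeSeconds::try_from(30).unwrap();
// Longer window: smoother decisions, but a burst counts against the key for longer.
let smooth = WindowSizeSeconds::try_from(120).unwrap();
let _ = (fast_recovery, smooth);
```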
rate_group_size_ms
Bucket coalescing interval.
- Smaller: more precise window accounting and retry hints, more buckets.
- Larger: lower overhead, less precise retry hints.
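A rough way to see the overhead side of this trade-off: the number of buckets a window can hold is about the window length divided by the bucket interval. This is plain arithmetic to illustrate the scaling, not an exact model of the crate's internal bookkeeping:

```rust
// Approximate buckets per key for a 60-second window.
let window_ms: u64 = 60 * 1_000;
assert_eq!(window_ms / 10, 6_000); // rate_group_size_ms = 10: precise hints, more buckets
assert_eq!(window_ms / 100, 600);  // rate_group_size_ms = 100: coarser, ~10x fewer buckets
```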
hard_limit_factor (suppressed)
Hard cutoff multiplier used by the suppressed strategy:
hard_limit = rate_limit * hard_limit_factor
When the accepted series exceeds the hard limit, the result is Rejected { ... }.
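As a worked example of the formula, with a hypothetical per-window limit of 100 requests and the recommended factor of 1.5, rejection starts once the window exceeds 150 accepted requests:

```rust
// Plain arithmetic on the hard-limit formula; 100 is a hypothetical rate limit.
let rate_limit: f64 = 100.0;
let hard_limit_factor: f64 = 1.5; // recommended default
let hard_limit = rate_limit * hard_limit_factor;
assert_eq!(hard_limit, 150.0); // requests beyond this point are Rejected { .. }
```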
suppression_factor_cache_ms (suppressed)
How long the suppressed strategy reuses a computed suppression factor for a key.
- Local: an in-memory cache duration per key.
- Redis: the TTL of the cached suppression factor stored in Redis.
It is a validated newtype: SuppressionFactorCacheMs::try_from(value) fails when value == 0. The default is 100 ms.
```rust
use trypema::SuppressionFactorCacheMs;

// Any positive value is accepted; zero is rejected by validation.
let cache_ms = SuppressionFactorCacheMs::try_from(250).unwrap();
assert!(SuppressionFactorCacheMs::try_from(0).is_err());
let _ = cache_ms;
```
Presets
Absolute (strict allow/reject):
| Scenario | Suggested starting point |
|---|---|
| Typical API | window_size_seconds = 60, rate_group_size_ms = 10 |
| Very latency-sensitive | window_size_seconds = 30, rate_group_size_ms = 1..5 |
| High throughput + many keys | window_size_seconds = 60..120, rate_group_size_ms = 50..100 |
Suppressed (probabilistic near capacity):
| Scenario | Suggested starting point |
|---|---|
| Typical API | window_size_seconds = 60, rate_group_size_ms = 10, hard_limit_factor = 1.5, suppression_factor_cache_ms = 100 |
| Sharp spikes | window_size_seconds = 30..60, rate_group_size_ms = 5..10, hard_limit_factor = 1.5..2.0, suppression_factor_cache_ms = 50..100 |
| High throughput + many keys | window_size_seconds = 60..120, rate_group_size_ms = 50..100, hard_limit_factor = 1.5, suppression_factor_cache_ms = 100..250 |
Rules of thumb
- If you want faster recovery from bursts: reduce window_size_seconds.
- If you need more precise retry hints: reduce rate_group_size_ms.
- If suppressed rejects too early: increase hard_limit_factor slightly.
- If suppressed reacts too slowly to spikes: reduce suppression_factor_cache_ms (at the cost of more recomputation / Redis reads).
Gotchas
- For both providers, is_allowed(...) followed later by inc(...) is a check-then-act pattern and is not atomic.
- If your goal is to admit and record, call inc(...) directly and use its returned RateLimitDecision; admission is evaluated as part of inc(...). See the sketch after this list.
- For the local provider, state is per-process and is not shared across instances.
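A minimal sketch of consuming the decision returned by inc(...). Only the Rejected { ... } variant is confirmed on this page, so the import path and the catch-all arm are assumptions; check the crate's API reference for the full enum and the exact signature of inc(...):

```rust
use trypema::RateLimitDecision; // import path assumed

// Admit on anything that is not Rejected; avoids a separate is_allowed(...) check.
fn is_admitted(decision: &RateLimitDecision) -> bool {
    match decision {
        RateLimitDecision::Rejected { .. } => false,
        // Catch-all arm so no other variant names are assumed here.
        _ => true,
    }
}
```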

