Configuration & Tuning
Trypema is configured through two option structs: LocalRateLimiterOptions (always required) and RedisRateLimiterOptions (required when Redis features are enabled).
Quick reference
use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;

// With redis-tokio or redis-smol:
// use trypema::hybrid::SyncIntervalMs;
// use trypema::redis::RedisRateLimiterOptions;

let options = RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    },
    // redis: RedisRateLimiterOptions {
    //     connection_manager,
    //     prefix: None,
    //     window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
    //     rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
    //     hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
    //     suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    //     sync_interval_ms: SyncIntervalMs::try_from(10).unwrap(),
    // },
};
All configuration options
| Option | Type | Default | Valid range | Used by |
|---|---|---|---|---|
| window_size_seconds | WindowSizeSeconds | (required) | >= 1 | All strategies |
| rate_group_size_ms | RateGroupSizeMs | 100ms | >= 1 | All strategies |
| hard_limit_factor | HardLimitFactor | 1.0 | >= 1.0 | Suppressed only |
| suppression_factor_cache_ms | SuppressionFactorCacheMs | 100ms | >= 1 | Suppressed only |
| connection_manager | ConnectionManager | (required) | -- | Redis/Hybrid |
| prefix | Option<RedisKey> | "trypema" | Valid RedisKey | Redis/Hybrid |
| sync_interval_ms | SyncIntervalMs | 10ms | >= 1 | Hybrid only |
When running multiple instances, keep window_size_seconds, rate_group_size_ms, hard_limit_factor, suppression_factor_cache_ms, and sync_interval_ms identical across all of them.
window_size_seconds
What it controls: How far back in time the limiter looks when making admission decisions.
Trade-offs:
| Larger windows (60-300s) | Smaller windows (5-30s) |
|---|---|
| Smooth out burst traffic | Less burst tolerance |
| More forgiving for intermittent usage | More sensitive to spikes |
| Slower recovery after hitting limits | Faster recovery |
| Higher memory per key | Lower memory per key |
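To see why larger windows cost more memory, a back-of-envelope bound on buckets per key helps. This is an illustration only, assuming at most one bucket per coalescing interval across the window; `max_buckets_per_key` is a hypothetical helper, not part of trypema's API.

```rust
// Rough upper bound on time buckets retained per key, assuming one
// bucket per coalescing interval across the whole window.
fn max_buckets_per_key(window_size_seconds: u64, rate_group_size_ms: u64) -> u64 {
    window_size_seconds * 1000 / rate_group_size_ms
}

fn main() {
    // A 60s window with 10ms coalescing keeps up to 6000 buckets per key.
    assert_eq!(max_buckets_per_key(60, 10), 6_000);
    // Widening coalescing to 100ms cuts that to 600.
    assert_eq!(max_buckets_per_key(60, 100), 600);
}
```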
Recommendation: Start with 60 seconds.
use trypema::WindowSizeSeconds;
let window = WindowSizeSeconds::try_from(60).unwrap();
// Invalid: must be >= 1
assert!(WindowSizeSeconds::try_from(0).is_err());
rate_group_size_ms
What it controls: The coalescing interval for grouping nearby increments into the same time bucket.
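Coalescing can be pictured as integer division of a timestamp by the interval: increments whose timestamps land in the same quotient share one bucket. This is a sketch of the idea only; `bucket_for` is a hypothetical helper, not trypema's actual internals.

```rust
// Map a millisecond timestamp to a bucket index. Increments within the
// same `rate_group_size_ms`-wide interval share a bucket.
fn bucket_for(timestamp_ms: u64, rate_group_size_ms: u64) -> u64 {
    timestamp_ms / rate_group_size_ms
}

fn main() {
    // With a 10ms interval, increments at 1000ms..=1009ms coalesce,
    // and 1010ms starts a new bucket.
    assert_eq!(bucket_for(1_000, 10), bucket_for(1_009, 10));
    assert_ne!(bucket_for(1_009, 10), bucket_for(1_010, 10));
}
```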
Trade-offs:
| Larger coalescing (50-100ms) | Smaller coalescing (1-20ms) |
|---|---|
| Fewer buckets, lower memory | More buckets, higher memory |
| Better performance | More overhead per increment |
| Coarser `retry_after_ms` estimates | More accurate rejection metadata |
Recommendation: Start with 10ms. Increase to 50-100ms if memory or performance is an issue.
use trypema::RateGroupSizeMs;
let coalescing = RateGroupSizeMs::try_from(10).unwrap();
// Default is 100ms
let default = RateGroupSizeMs::default();
assert_eq!(*default, 100);
// Invalid: must be >= 1
assert!(RateGroupSizeMs::try_from(0).is_err());
hard_limit_factor
What it controls: Hard cutoff multiplier for the suppressed strategy. The absolute strategy ignores this.
hard_limit = rate_limit * hard_limit_factor
- With `hard_limit_factor = 1.0` (the default): no headroom; suppression starts and reaches 1.0 at the same point (hard-cutoff behaviour similar to absolute).
- With `hard_limit_factor = 1.5`: 50% headroom. Suppression ramps gradually from 0 to 1 over the range from `rate_limit` to `rate_limit * 1.5`.
- With `hard_limit_factor = 2.0`: 100% headroom.
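The ramp described above can be sketched as a piecewise-linear function. This assumes a simple linear ramp between `rate_limit` and the hard limit; `suppression_factor` here is illustrative and not trypema's actual implementation.

```rust
// Linear ramp: 0 at or below rate_limit, 1 at or above the hard limit,
// linear in between.
fn suppression_factor(current_rate: f64, rate_limit: f64, hard_limit_factor: f64) -> f64 {
    let hard_limit = rate_limit * hard_limit_factor;
    if current_rate <= rate_limit {
        0.0
    } else if current_rate >= hard_limit {
        1.0
    } else {
        (current_rate - rate_limit) / (hard_limit - rate_limit)
    }
}

fn main() {
    // rate_limit = 100, hard_limit_factor = 1.5 => hard_limit = 150.
    assert_eq!(suppression_factor(100.0, 100.0, 1.5), 0.0); // at the limit
    assert_eq!(suppression_factor(125.0, 100.0, 1.5), 0.5); // halfway up the ramp
    assert_eq!(suppression_factor(150.0, 100.0, 1.5), 1.0); // at the hard limit
}
```

Note that with `hard_limit_factor = 1.0` the middle branch is unreachable, which matches the hard-cutoff behaviour described above.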
Recommendation: Use 1.5 for the suppressed strategy.
use trypema::HardLimitFactor;
let factor = HardLimitFactor::try_from(1.5).unwrap();
// Default is 1.0
let default = HardLimitFactor::default();
assert_eq!(*default, 1.0);
// Invalid: must be >= 1.0
assert!(HardLimitFactor::try_from(0.5).is_err());
suppression_factor_cache_ms
What it controls: How long the computed suppression factor is cached per key before being recomputed.
- Local: in-memory cache duration per key (tracked with `Instant`).
- Redis: TTL of the cached suppression factor stored in Redis (via `SET ... PX`).
Trade-offs:
| Shorter cache (10-50ms) | Longer cache (100-1000ms) |
|---|---|
| Faster reaction to traffic changes | Slower reaction to spikes |
| More recomputation overhead | Less overhead |
Recommendation: Start with 100ms (the default).
use trypema::SuppressionFactorCacheMs;
let cache_ms = SuppressionFactorCacheMs::try_from(50).unwrap();
// Default is 100ms
let default = SuppressionFactorCacheMs::default();
assert_eq!(*default, 100);
// Invalid: must be >= 1
assert!(SuppressionFactorCacheMs::try_from(0).is_err());
sync_interval_ms (Hybrid only)
What it controls: How often the hybrid provider flushes local increments to Redis. The pure Redis provider ignores this value.
Trade-offs:
| Shorter (5-10ms) | Longer (50-100ms) |
|---|---|
| Less lag between local and Redis state | More lag |
| More Redis writes | Fewer Redis writes |
Recommendation: Start with 10ms (the default). Keep sync_interval_ms <= rate_group_size_ms.
use trypema::hybrid::SyncIntervalMs;
let interval = SyncIntervalMs::try_from(10).unwrap();
// Default is 10ms
let default = SyncIntervalMs::default();
assert_eq!(*default, 10);
// Invalid: must be >= 1
assert!(SyncIntervalMs::try_from(0).is_err());
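The "keep `sync_interval_ms <= rate_group_size_ms`" recommendation can be checked once at startup. A minimal sketch with a hypothetical `check_intervals` helper on plain integers (not a trypema API):

```rust
// Warn early if the hybrid flush interval is coarser than the
// coalescing interval, per the recommendation above.
fn check_intervals(sync_interval_ms: u64, rate_group_size_ms: u64) -> Result<(), String> {
    if sync_interval_ms > rate_group_size_ms {
        return Err(format!(
            "sync_interval_ms ({sync_interval_ms}) should not exceed \
             rate_group_size_ms ({rate_group_size_ms})"
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_intervals(10, 10).is_ok());
    assert!(check_intervals(50, 10).is_err());
}
```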
Presets
Typical API
window_size_seconds = 60
rate_group_size_ms = 10
hard_limit_factor = 1.5
suppression_factor_cache_ms = 100
Sharp spike protection
window_size_seconds = 30..60
rate_group_size_ms = 5..10
hard_limit_factor = 1.5..2.0
suppression_factor_cache_ms = 50..100
High throughput with many keys
window_size_seconds = 60..120
rate_group_size_ms = 50..100
hard_limit_factor = 1.5
suppression_factor_cache_ms = 100..250
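The "Typical API" preset, written out with the same constructors as the quick reference (local-only shown; the Redis fields follow the same pattern):

```rust
use trypema::{HardLimitFactor, RateGroupSizeMs, SuppressionFactorCacheMs, WindowSizeSeconds};
use trypema::local::LocalRateLimiterOptions;

// "Typical API" preset values from the table above.
let typical_api = LocalRateLimiterOptions {
    window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
    rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
    hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
    suppression_factor_cache_ms: SuppressionFactorCacheMs::try_from(100).unwrap(),
};
```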
Quick fixes
- Suppression kicks in too early? Increase `hard_limit_factor` slightly.
- Too many stale keys in memory? Reduce the cleanup stale-after interval or use smaller windows.
- `retry_after_ms` hints are inaccurate? Decrease `rate_group_size_ms` for finer buckets.
- Redis load too high (hybrid)? Increase `sync_interval_ms`.
Next steps
- Cleanup Loop -- manage memory with background cleanup
- Troubleshooting -- common issues and solutions

