Guides

Configuration & Tuning

Every configuration option explained — window size, bucket coalescing, hard limit factor, suppression cache, and sync interval.

Trypema is configured through two option structs: LocalRateLimiterOptions (always required) and RedisRateLimiterOptions (required when Redis features are enabled).

Quick reference

use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;
// With redis-tokio or redis-smol:
// use trypema::hybrid::SyncIntervalMs;
// use trypema::redis::RedisRateLimiterOptions;

let options = RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    },
    // redis: RedisRateLimiterOptions {
    //     connection_manager,
    //     prefix: None,
    //     window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
    //     rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
    //     hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
    //     suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
    //     sync_interval_ms: SyncIntervalMs::try_from(10).unwrap(),
    // },
};

All configuration options

Option                       Type                      Default     Valid range     Used by
window_size_seconds          WindowSizeSeconds         (required)  >= 1            All strategies
rate_group_size_ms           RateGroupSizeMs           100ms       >= 1            All strategies
hard_limit_factor            HardLimitFactor           1.0         >= 1.0          Suppressed only
suppression_factor_cache_ms  SuppressionFactorCacheMs  100ms       >= 1            Suppressed only
connection_manager           ConnectionManager         (required)  --              Redis/Hybrid
prefix                       Option<RedisKey>          "trypema"   Valid RedisKey  Redis/Hybrid
sync_interval_ms             SyncIntervalMs            10ms        >= 1            Hybrid only
For multi-instance deployments, keep window_size_seconds, rate_group_size_ms, hard_limit_factor, suppression_factor_cache_ms, and sync_interval_ms identical across all instances.

window_size_seconds

What it controls: How far back in time the limiter looks when making admission decisions.

Trade-offs:

Larger windows (60-300s)               Smaller windows (5-30s)
Smooth out burst traffic               Less burst tolerance
More forgiving for intermittent usage  More sensitive to spikes
Slower recovery after hitting limits   Faster recovery
Higher memory per key                  Lower memory per key

Recommendation: Start with 60 seconds.

use trypema::WindowSizeSeconds;

let window = WindowSizeSeconds::try_from(60).unwrap();

// Invalid: must be >= 1
assert!(WindowSizeSeconds::try_from(0).is_err());

rate_group_size_ms

What it controls: The coalescing interval for grouping nearby increments into the same time bucket.

Trade-offs:

Larger coalescing (50-100ms)      Smaller coalescing (1-20ms)
Fewer buckets, lower memory       More buckets, higher memory
Better performance                More overhead per increment
Coarser retry_after_ms estimates  More accurate rejection metadata

Recommendation: Start with 10ms. Increase to 50-100ms if memory or performance is an issue.

use trypema::RateGroupSizeMs;

let coalescing = RateGroupSizeMs::try_from(10).unwrap();

// Default is 100ms
let default = RateGroupSizeMs::default();
assert_eq!(*default, 100);

// Invalid: must be >= 1
assert!(RateGroupSizeMs::try_from(0).is_err());
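The memory side of this trade-off can be estimated with a back-of-envelope calculation. The sketch below is an assumption about worst-case bucket counts (one bucket per coalescing interval across the window), not trypema's actual storage layout:

```rust
/// Back-of-envelope upper bound on time buckets per key: the window
/// length divided by the coalescing interval, rounded up.
fn max_buckets(window_size_seconds: u64, rate_group_size_ms: u64) -> u64 {
    (window_size_seconds * 1000).div_ceil(rate_group_size_ms)
}

fn main() {
    // 60s window with 10ms coalescing: up to 6_000 buckets per key.
    assert_eq!(max_buckets(60, 10), 6_000);
    // Raising coalescing to 100ms cuts that to 600 buckets per key.
    assert_eq!(max_buckets(60, 100), 600);
}
```

This is why the table above pairs "larger coalescing" with "lower memory": a 10x larger interval means roughly 10x fewer buckets per key.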

hard_limit_factor

What it controls: Hard cutoff multiplier for the suppressed strategy. The absolute strategy ignores this.

hard_limit = rate_limit * hard_limit_factor
  • With hard_limit_factor = 1.0 (default): no headroom, suppression starts and reaches 1.0 at the same point (hard cutoff behaviour similar to absolute).
  • With hard_limit_factor = 1.5: 50% headroom. Suppression gradually ramps from 0 to 1 over the range from rate_limit to rate_limit * 1.5.
  • With hard_limit_factor = 2.0: 100% headroom.
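The ramp described above can be sketched as follows. This is an illustrative linear ramp only; the function name, signature, and clamping are assumptions for this guide, not trypema's internals:

```rust
/// Illustrative suppression ramp: 0.0 below rate_limit, 1.0 at or above
/// rate_limit * hard_limit_factor, linear in between.
fn suppression_factor(count: f64, rate_limit: f64, hard_limit_factor: f64) -> f64 {
    let hard_limit = rate_limit * hard_limit_factor;
    if hard_limit <= rate_limit {
        // hard_limit_factor = 1.0: no headroom, hard cutoff.
        return if count >= rate_limit { 1.0 } else { 0.0 };
    }
    ((count - rate_limit) / (hard_limit - rate_limit)).clamp(0.0, 1.0)
}

fn main() {
    // rate_limit = 100, factor = 1.5 => hard limit = 150.
    assert_eq!(suppression_factor(100.0, 100.0, 1.5), 0.0); // ramp starts
    assert_eq!(suppression_factor(125.0, 100.0, 1.5), 0.5); // halfway
    assert_eq!(suppression_factor(150.0, 100.0, 1.5), 1.0); // hard cutoff
}
```

Halfway through the headroom (count 125 of a 100-150 range) yields a factor of 0.5, i.e. half of the traffic in that band is suppressed.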

Recommendation: Use 1.5 for the suppressed strategy.

use trypema::HardLimitFactor;

let factor = HardLimitFactor::try_from(1.5).unwrap();

// Default is 1.0
let default = HardLimitFactor::default();
assert_eq!(*default, 1.0);

// Invalid: must be >= 1.0
assert!(HardLimitFactor::try_from(0.5).is_err());

suppression_factor_cache_ms

What it controls: How long the computed suppression factor is cached per key before being recomputed.

  • Local: In-memory cache duration per key (using Instant).
  • Redis: TTL of the cached suppression factor stored in Redis (using SET ... PX).

Trade-offs:

Shorter cache (10-50ms)             Longer cache (100-1000ms)
Faster reaction to traffic changes  Slower reaction to spikes
More recomputation overhead         Less overhead

Recommendation: Start with 100ms (the default).

use trypema::SuppressionFactorCacheMs;

let cache_ms = SuppressionFactorCacheMs::try_from(50).unwrap();

// Default is 100ms
let default = SuppressionFactorCacheMs::default();
assert_eq!(*default, 100);

// Invalid: must be >= 1
assert!(SuppressionFactorCacheMs::try_from(0).is_err());

sync_interval_ms (Hybrid only)

What it controls: How often the hybrid provider flushes local increments to Redis. The pure Redis provider ignores this value.

Trade-offs:

Shorter (5-10ms)                        Longer (50-100ms)
Less lag between local and Redis state  More lag
More Redis writes                       Fewer Redis writes

Recommendation: Start with 10ms (the default). Keep sync_interval_ms <= rate_group_size_ms.

use trypema::hybrid::SyncIntervalMs;

let interval = SyncIntervalMs::try_from(10).unwrap();

// Default is 10ms
let default = SyncIntervalMs::default();
assert_eq!(*default, 10);

// Invalid: must be >= 1
assert!(SyncIntervalMs::try_from(0).is_err());
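To size the Redis write side of the trade-off, a rough upper bound is one flush per interval per instance. This is a back-of-envelope assumption for capacity planning, not a measured figure:

```rust
/// Rough upper bound on hybrid flush operations per second across a
/// deployment: each instance flushes at most once per sync interval.
fn max_flushes_per_second(instances: u64, sync_interval_ms: u64) -> u64 {
    instances * (1000 / sync_interval_ms)
}

fn main() {
    // 10 instances at the default 10ms interval: up to 1_000 flushes/s.
    assert_eq!(max_flushes_per_second(10, 10), 1_000);
    // Raising the interval to 50ms drops that to 200 flushes/s.
    assert_eq!(max_flushes_per_second(10, 50), 200);
}
```

If this bound dominates your Redis capacity, raising sync_interval_ms is the lever with the most direct effect, at the cost of more lag between local and shared state.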

Presets

Typical API

window_size_seconds = 60
rate_group_size_ms = 10
hard_limit_factor = 1.5
suppression_factor_cache_ms = 100

Sharp spike protection

window_size_seconds = 30..60
rate_group_size_ms = 5..10
hard_limit_factor = 1.5..2.0
suppression_factor_cache_ms = 50..100

High throughput with many keys

window_size_seconds = 60..120
rate_group_size_ms = 50..100
hard_limit_factor = 1.5
suppression_factor_cache_ms = 100..250

Quick fixes

  • Suppression kicks in too early? Increase hard_limit_factor slightly.
  • Too many stale keys in memory? Reduce cleanup stale-after or use smaller windows.
  • retry_after_ms hints are inaccurate? Decrease rate_group_size_ms for finer buckets.
  • Redis load too high (hybrid)? Increase sync_interval_ms.

Next steps