Getting Started

Quickstart (Redis)

Distributed rate limiting with the Redis provider — share limits across processes and servers.

The Redis provider executes all operations as atomic Lua scripts against a Redis 7.2+ server. Every inc() call results in one Redis round-trip. Use this when you need rate limits shared across multiple processes or servers.

Step 1: Add dependencies

[dependencies]
trypema = { version = "0.1", features = ["redis-tokio"] }
redis = { version = "1", features = ["aio", "tokio-comp", "connection-manager"] }
tokio = { version = "1", features = ["full"] }

Step 2: Create the rate limiter

The Redis provider requires a redis::aio::ConnectionManager, which handles connection pooling and automatic reconnection. You must also supply LocalRateLimiterOptions, because the local provider is always constructed alongside the Redis one.

use std::sync::Arc;

use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimit, RateLimitDecision,
    RateLimiter, RateLimiterOptions, SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::hybrid::SyncIntervalMs;
use trypema::local::LocalRateLimiterOptions;
use trypema::redis::{RedisKey, RedisRateLimiterOptions};

#[tokio::main]
async fn main() -> Result<(), trypema::TrypemaError> {
    // Connect to Redis.
    let client = redis::Client::open("redis://127.0.0.1:6379/").unwrap();
    let connection_manager = client.get_connection_manager().await.unwrap();

    // Shared configuration values.
    let window_size_seconds = WindowSizeSeconds::try_from(60).unwrap();
    let rate_group_size_ms = RateGroupSizeMs::try_from(10).unwrap();
    let hard_limit_factor = HardLimitFactor::try_from(1.5).unwrap();
    let suppression_factor_cache_ms = SuppressionFactorCacheMs::default();
    let sync_interval_ms = SyncIntervalMs::default();

    let rl = Arc::new(RateLimiter::new(RateLimiterOptions {
        local: LocalRateLimiterOptions {
            window_size_seconds,
            rate_group_size_ms,
            hard_limit_factor,
            suppression_factor_cache_ms,
        },
        redis: RedisRateLimiterOptions {
            connection_manager,
            prefix: None, // Defaults to "trypema"
            window_size_seconds,
            rate_group_size_ms,
            hard_limit_factor,
            suppression_factor_cache_ms,
            sync_interval_ms, // Only used by the hybrid provider; Redis ignores it
        },
    }));

    // Start background cleanup (removes stale keys from both local and Redis).
    rl.run_cleanup_loop();

    // Redis keys use a validated newtype. Rules:
    // - Must not be empty
    // - Must be <= 255 bytes
    // - Must not contain ':'
    let key = RedisKey::try_from("user_123".to_string()).unwrap();
    let rate_limit = RateLimit::try_from(5.0).unwrap();

    // --- Absolute strategy ---
    match rl.redis().absolute().inc(&key, &rate_limit, 1).await? {
        RateLimitDecision::Allowed => {
            println!("Request allowed.");
        }
        RateLimitDecision::Rejected { retry_after_ms, .. } => {
            println!("Rejected. Retry in ~{}ms.", retry_after_ms);
        }
        RateLimitDecision::Suppressed { .. } => unreachable!(),
    }

    // --- Suppressed strategy ---
    match rl.redis().suppressed().inc(&key, &rate_limit, 1).await? {
        RateLimitDecision::Allowed => {
            println!("Allowed (below capacity).");
        }
        RateLimitDecision::Suppressed { is_allowed, suppression_factor } => {
            if is_allowed {
                println!("Allowed (suppression at {:.0}%).", suppression_factor * 100.0);
            } else {
                println!("Denied (suppression at {:.0}%).", suppression_factor * 100.0);
            }
        }
        RateLimitDecision::Rejected { .. } => unreachable!(),
    }

    // You can also query the suppression factor without incrementing:
    let factor = rl.redis().suppressed().get_suppression_factor(&key).await?;
    println!("Current suppression factor: {:.3}", factor);

    Ok(())
}
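
The retry_after_ms value carried by the Rejected arm maps naturally onto an HTTP Retry-After header, which is expressed in whole seconds. A minimal sketch of that conversion (the helper name and the round-up policy are illustrative, not part of trypema):

```rust
// Hypothetical helper: convert a retry_after_ms value (as returned in
// RateLimitDecision::Rejected) into an HTTP Retry-After header value.
// Rounds up to whole seconds, with a floor of 1 so clients never retry
// immediately.
fn retry_after_header(retry_after_ms: u64) -> String {
    let secs = retry_after_ms.div_ceil(1000).max(1);
    secs.to_string()
}

fn main() {
    assert_eq!(retry_after_header(0), "1");    // floor of one second
    assert_eq!(retry_after_header(1000), "1"); // exactly one second
    assert_eq!(retry_after_header(1001), "2"); // rounds up, not down
    println!("ok");
}
```

Rounding up is deliberate: truncating 1001 ms down to 1 s would invite clients to retry while the limit is still in effect.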

Key differences from the local provider

|             | Local              | Redis                                                  |
|-------------|--------------------|--------------------------------------------------------|
| Keys        | Any &str           | RedisKey (validated: non-empty, <= 255 bytes, no :)    |
| Methods     | Synchronous        | Async (.await)                                         |
| Return type | RateLimitDecision  | Result<RateLimitDecision, TrypemaError>                |
| State       | In-process only    | Shared via Redis across all processes                  |
| Latency     | Sub-microsecond    | Network round-trip per call                            |
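
The RedisKey rules are simple enough to spell out in code. Here is a standalone sketch of the same validation logic; it illustrates the documented rules and is not trypema's actual implementation:

```rust
// Standalone illustration of the RedisKey validation rules described
// above (non-empty, at most 255 bytes, no ':'). Not trypema's code.
fn validate_redis_key(s: &str) -> Result<(), &'static str> {
    if s.is_empty() {
        return Err("key must not be empty");
    }
    if s.len() > 255 {
        // len() counts bytes, matching the "<= 255 bytes" rule.
        return Err("key must be at most 255 bytes");
    }
    if s.contains(':') {
        return Err("key must not contain ':'");
    }
    Ok(())
}

fn main() {
    assert!(validate_redis_key("user_123").is_ok());
    assert!(validate_redis_key("user:123").is_err()); // contains ':'
    assert!(validate_redis_key("").is_err());         // empty
    assert!(validate_redis_key(&"x".repeat(256)).is_err()); // too long
    println!("all checks passed");
}
```

The ':' restriction exists because the provider uses ':' internally as a key-segment separator, so a user key containing it could collide with the provider's own namespace.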

RedisRateLimiterOptions fields

| Field                       | Type                     | Default    | Description                                                              |
|-----------------------------|--------------------------|------------|--------------------------------------------------------------------------|
| connection_manager          | ConnectionManager        | (required) | Redis connection (handles pooling and reconnection).                     |
| prefix                      | Option<RedisKey>         | "trypema"  | Namespace prefix for all Redis keys.                                     |
| window_size_seconds         | WindowSizeSeconds        | (required) | Same as local.                                                           |
| rate_group_size_ms          | RateGroupSizeMs          | 100        | Same as local.                                                           |
| hard_limit_factor           | HardLimitFactor          | 1.0        | Same as local.                                                           |
| suppression_factor_cache_ms | SuppressionFactorCacheMs | 100        | Same as local.                                                           |
| sync_interval_ms            | SyncIntervalMs           | 10         | Only used by the hybrid provider; the pure Redis provider ignores this.  |

For multi-instance deployments, keep window_size_seconds, rate_group_size_ms, hard_limit_factor, and suppression_factor_cache_ms identical across all instances.
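
One way to keep those values identical is to define them once and build every instance from the same constant, so they cannot drift between deployments. A hypothetical sketch with plain integers and floats (trypema's newtype wrappers omitted for brevity; the struct and names are illustrative):

```rust
// Illustrative sketch: centralize the settings that must match across
// instances in a single constant. Not trypema's API; in real code you
// would convert these raw values into the crate's newtypes at startup.
#[derive(Clone, Copy, Debug, PartialEq)]
struct SharedLimiterConfig {
    window_size_seconds: u64,
    rate_group_size_ms: u64,
    hard_limit_factor: f64,
    suppression_factor_cache_ms: u64,
}

const SHARED: SharedLimiterConfig = SharedLimiterConfig {
    window_size_seconds: 60,
    rate_group_size_ms: 10,
    hard_limit_factor: 1.5,
    suppression_factor_cache_ms: 100,
};

fn main() {
    // Every instance reads the same constant, so the values that must
    // agree across the fleet come from exactly one place.
    let instance_a = SHARED;
    let instance_b = SHARED;
    assert_eq!(instance_a, instance_b);
    println!(
        "window={}s group={}ms",
        SHARED.window_size_seconds, SHARED.rate_group_size_ms
    );
}
```

In practice the same effect is usually achieved by loading these values from one shared config file or environment source rather than a compile-time constant.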

Next steps