Redis Provider
The Redis provider executes all operations as atomic Lua scripts against a Redis 7.2+ server. Each inc() call results in one Redis round-trip. Use this when you need rate limits shared across multiple processes or servers.
Access: rl.redis()
When to use
- Multi-process or multi-server deployments
- When all instances need to share rate limiting state
- When per-request Redis latency (network round-trip) is acceptable
If per-request Redis latency is too expensive, consider the Hybrid Provider.
Requirements
- Redis version: >= 7.2.0 (required for Lua script features used internally)
- Async runtime: Tokio (redis-tokio feature) or Smol (redis-smol feature)
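As a sketch, the dependency section of a Cargo.toml might look like the following. The crate and feature names come from this page; the version numbers and the redis/tokio feature selections are assumptions -- check each crate's own documentation before pinning:

```toml
[dependencies]
# Enable the Tokio-backed integration (use "redis-smol" for Smol instead).
trypema = { version = "*", features = ["redis-tokio"] }
# The setup code also opens a redis client and connection manager directly.
redis = { version = "*", features = ["tokio-comp", "connection-manager"] }
tokio = { version = "1", features = ["full"] }
```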
Setup
```rust
use std::sync::Arc;

use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimit, RateLimiter,
    RateLimiterOptions, SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::hybrid::SyncIntervalMs;
use trypema::local::LocalRateLimiterOptions;
use trypema::redis::{RedisKey, RedisRateLimiterOptions};

#[tokio::main]
async fn main() -> Result<(), trypema::TrypemaError> {
    let client = redis::Client::open("redis://127.0.0.1:6379/").unwrap();
    let connection_manager = client.get_connection_manager().await.unwrap();

    let rl = Arc::new(RateLimiter::new(RateLimiterOptions {
        local: LocalRateLimiterOptions {
            window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
            rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
            hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
            suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
        },
        redis: RedisRateLimiterOptions {
            connection_manager,
            prefix: None, // defaults to "trypema"
            window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
            rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
            hard_limit_factor: HardLimitFactor::try_from(1.5).unwrap(),
            suppression_factor_cache_ms: SuppressionFactorCacheMs::default(),
            sync_interval_ms: SyncIntervalMs::default(), // ignored by the Redis provider
        },
    }));
    rl.run_cleanup_loop();

    let key = RedisKey::try_from("user_123".to_string())?;
    let rate = RateLimit::try_from(10.0)?;

    // Absolute strategy
    let decision = rl.redis().absolute().inc(&key, &rate, 1).await?;

    // Suppressed strategy
    let decision = rl.redis().suppressed().inc(&key, &rate, 1).await?;

    // Query the suppression factor (read-only)
    let factor = rl.redis().suppressed().get_suppression_factor(&key).await?;

    Ok(())
}
```
sync_interval_ms is part of RedisRateLimiterOptions but is only used by the hybrid provider. The pure Redis provider ignores it.
Key constraints (RedisKey)
Redis keys use the RedisKey validated newtype:
- Must not be empty
- Must be <= 255 bytes
- Must not contain : (colon) -- used internally as a key separator
```rust
use trypema::redis::RedisKey;

// Valid
let key = RedisKey::try_from("user_123".to_string()).unwrap();

// Invalid: contains ':'
assert!(RedisKey::try_from("user:123".to_string()).is_err());

// Invalid: empty
assert!(RedisKey::try_from("".to_string()).is_err());
```
Atomic Lua scripts
Within a single Lua script execution, Redis guarantees atomicity. This avoids TOCTOU (time-of-check-to-time-of-use) races between reading and updating state. However, overall rate limiting across multiple clients remains best-effort.
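The value of this atomicity can be illustrated with an analogy in plain Rust (a sketch for illustration only, not trypema's code): an atomic read-decide-write, like `AtomicU64::fetch_update`, admits requests without a TOCTOU window, just as a Lua script performs its reads, checks, and increments as one uninterruptible unit inside Redis.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch: `fetch_update` runs the read-decide-write closure as one atomic
// step, analogous to a Lua script running GET + compare + INCR atomically.
// A separate "read, then check, then write" sequence would leave a gap in
// which another client could slip past the limit.
fn try_inc(counter: &AtomicU64, limit: u64) -> bool {
    counter
        .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |c| {
            if c < limit { Some(c + 1) } else { None }
        })
        .is_ok()
}

fn main() {
    let counter = AtomicU64::new(0);
    // 15 attempts against a limit of 10: exactly 10 are admitted.
    let allowed = (0..15).filter(|_| try_inc(&counter, 10)).count();
    assert_eq!(allowed, 10);
    println!("allowed {allowed} of 15");
}
```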
Server-side timestamps: The Lua scripts use redis.call("TIME") for all timestamp calculations, avoiding issues with client clock skew.
Data model
For each user key K with prefix P and rate type T (either absolute or suppressed), the following Redis keys are used:
Key schema
All per-entity keys follow this pattern:
{prefix}:{user_key}:{rate_type}:{suffix}
Redis key suffixes
| Suffix | Type | Purpose |
|---|---|---|
| h | Hash | Sliding window buckets (timestamp_ms -> count) |
| a | Sorted Set | Active bucket timestamps (for efficient eviction) |
| w | String | Window limit (set on first call, refreshed with EXPIRE) |
| t | String | Total count across all buckets |
| d | String | Total declined count (suppressed strategy only) |
| hd | Hash | Declined counts per bucket (suppressed strategy only) |
| sf | String | Cached suppression factor with PX TTL (suppressed strategy only) |
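The exact bucket layout is internal, but a plausible sketch of how a field in the h hash is derived from rate_group_size_ms is to round the millisecond timestamp down to the rate-group boundary, so all events in the same group share one field (an assumption for illustration; `bucket_for` is not a trypema API):

```rust
// Sketch (assumption, not trypema's actual code): map a millisecond
// timestamp to its rate-group bucket. With rate_group_size_ms = 10,
// every event inside the same 10 ms span lands in the same hash field,
// bounding the number of fields the h hash can accumulate.
fn bucket_for(now_ms: u64, rate_group_size_ms: u64) -> u64 {
    now_ms - now_ms % rate_group_size_ms
}

fn main() {
    assert_eq!(bucket_for(1_700_000_000_123, 10), 1_700_000_000_120);
    assert_eq!(bucket_for(1_700_000_000_129, 10), 1_700_000_000_120);
    // The next rate group starts a new bucket.
    assert_eq!(bucket_for(1_700_000_000_130, 10), 1_700_000_000_130);
}
```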
There is also a shared key for cleanup:
{prefix}:active_entities
This is a Sorted Set that tracks all active keys with their last-activity timestamps. The cleanup loop uses it to find and remove stale entries.
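The per-entity schema can be pinned down with a small key-builder sketch (`redis_key` is a hypothetical helper shown only to make the format concrete; trypema builds these keys internally):

```rust
// Sketch: construct the per-entity Redis key documented above.
// (Hypothetical helper -- not part of trypema's public API.)
fn redis_key(prefix: &str, user_key: &str, rate_type: &str, suffix: &str) -> String {
    format!("{prefix}:{user_key}:{rate_type}:{suffix}")
}

fn main() {
    assert_eq!(
        redis_key("trypema", "user_123", "absolute", "h"),
        "trypema:user_123:absolute:h"
    );
    // This separator is why user keys must not contain ':'.
    assert_eq!(
        redis_key("trypema", "user_123", "suppressed", "sf"),
        "trypema:user_123:suppressed:sf"
    );
}
```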
Example
With prefix "trypema", key "user_123", and absolute strategy:
trypema:user_123:absolute:h (Hash: bucket counts)
trypema:user_123:absolute:a (Sorted Set: active timestamps)
trypema:user_123:absolute:w (String: window limit)
trypema:user_123:absolute:t (String: total count)
With suppressed strategy, additional keys:
trypema:user_123:suppressed:d (String: total declined)
trypema:user_123:suppressed:hd (Hash: per-bucket declined counts)
trypema:user_123:suppressed:sf (String: cached factor with PX TTL)
Error handling
Redis operations return Result<RateLimitDecision, TrypemaError>. Redis errors (connectivity, script failures, etc.) are propagated as TrypemaError::RedisError. Your application should handle these -- for example, by falling back to the local provider or allowing the request through.
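One common pattern is a fail-open wrapper: treat a Redis error as an allow rather than an outage. A minimal sketch, with `Decision` as an illustrative stand-in for RateLimitDecision (whose variants are not shown on this page):

```rust
// Sketch of fail-open error handling. `Decision` stands in for trypema's
// RateLimitDecision; substitute the real types in your application.
#[derive(Debug, PartialEq)]
enum Decision {
    Allowed,
    Declined,
}

fn fail_open<E: std::fmt::Display>(result: Result<Decision, E>) -> Decision {
    match result {
        Ok(decision) => decision,
        Err(err) => {
            // Log and allow: rate limiting degrades instead of taking the
            // service down with it. Prefer failing closed if over-admission
            // is worse than unavailability for your workload.
            eprintln!("rate limiter unavailable, failing open: {err}");
            Decision::Allowed
        }
    }
}

fn main() {
    let err: Result<Decision, String> = Err("connection refused".into());
    assert_eq!(fail_open(err), Decision::Allowed);

    let ok: Result<Decision, String> = Ok(Decision::Declined);
    assert_eq!(fail_open(ok), Decision::Declined);
}
```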
Next steps
- Hybrid Provider -- local fast-path with periodic Redis sync
- Local Provider -- in-process alternative
- Cleanup Loop -- manage stale keys in Redis

