v1.0.0
Release Date: March 2026
Crate: trypema
Documentation: docs.rs/trypema
Repository: github.com/dev-davexoyinbo/trypema
License: MIT | Rust Edition: 2024 | MSRV: Rust 2024 edition
What is Trypema?
Trypema is a high-performance rate limiting library for Rust, designed for concurrency safety, low overhead, and predictable latency. It provides a unified API across three backends (local, Redis, hybrid) and two enforcement strategies (absolute, suppressed), giving you a 3×2 matrix of rate limiters accessible through a single facade.
The name comes from the Koine Greek τρυπήματος (trypematos), meaning "hole" or "opening" — from the biblical passage "through the eye of a needle" (Matthew 19:24, Mark 10:25, Luke 18:25). The rate limiter acts as a narrow gate: requests must pass through the eye of the needle.
Release Highlights
This is the first stable release of Trypema. Key highlights:
- Three providers — Local (in-process), Redis (distributed), and Hybrid (local fast-path with Redis sync) — all accessed through a single `RateLimiter` facade
- Two strategies — Absolute (deterministic sliding-window enforcement) and Suppressed (probabilistic degradation inspired by Ably's approach)
- Extreme throughput — Hybrid provider achieves 7–10M ops/s with p50 latency of ~1µs; local provider competitive with `governor` at 3.5–7.5M ops/s
- Zero unsafe code — `#![forbid(unsafe_code)]` enforced across the entire crate
- Complete documentation — `#![deny(missing_docs)]` ensures every public item is documented
- Flexible async runtime support — Choose between Tokio (`redis-tokio`) or Smol (`redis-smol`) for Redis-backed providers
- Non-integer rate limits — Support fractional rates like `5.5` requests/second via the `f64`-backed `RateLimit` type
- Bucket coalescing — Configurable time-bucket granularity trades memory for timing precision
- Automatic cleanup — Background cleanup loop with `Weak` references prevents memory leaks without preventing `Drop`
Architecture
Provider × Strategy Matrix
Trypema exposes every combination of provider and strategy through a builder-style facade:
RateLimiter
├── .local() → LocalRateLimiterProvider
│ ├── .absolute() → AbsoluteLocalRateLimiter
│ └── .suppressed() → SuppressedLocalRateLimiter
├── .redis() → RedisRateLimiterProvider [requires redis-tokio or redis-smol]
│ ├── .absolute() → AbsoluteRedisRateLimiter
│ └── .suppressed() → SuppressedRedisRateLimiter
└── .hybrid() → HybridRateLimiterProvider [requires redis-tokio or redis-smol]
├── .absolute() → AbsoluteHybridRateLimiter
└── .suppressed() → SuppressedHybridRateLimiter
Providers
Local Provider
- Storage: `DashMap` with `ahash` for concurrent hash map operations + `AtomicU64` counters
- Latency: Sub-microsecond (p50 ~1µs under load)
- Dependencies: No external services required
- Use case: Single-process rate limiting, embedded systems, CLI tools, or anywhere Redis is unavailable
- Thread safety: Lock-free reads and atomic increments via `DashMap` sharding
Redis Provider
- Storage: Redis 7.2+ via atomic Lua scripts
- Latency: One Redis round-trip per `inc()` or `is_allowed()` call (typically 250–500µs)
- Timestamps: Server-side via `redis.call("TIME")` — immune to client clock skew
- Key format: `{prefix}:{rate_type}:{user_key}` with configurable prefix
- TTL: Automatic key expiration set to `2 × window_size_seconds`
- Use case: Distributed rate limiting across multiple application instances
Hybrid Provider
- Storage: Local `DashMap` fast-path + periodic Redis sync via a background `RedisCommitter` actor
- Latency: p50 ~1µs (local path); Redis sync is batched and asynchronous
- Sync mechanism: Background actor flushes accumulated local increments to Redis at configurable intervals (default 10ms)
- State machine: Per-key 3-state machine: `Undefined` → `Accepting` → `Rejecting`/`Suppressing`
- Thundering herd prevention: Per-key `tokio::sync::Mutex` prevents concurrent Redis round-trips for the same key
- Inactivity detection: Epoch/watch channel pattern detects when no new increments arrive, avoiding unnecessary Redis flushes
- Use case: High-throughput distributed systems where Redis round-trip latency per request is unacceptable
Strategies
Absolute Strategy
Deterministic sliding-window enforcement. Every request is either Allowed or Rejected — no probabilistic behavior.
- Counts requests within a sliding window of `window_size_seconds`
- Uses bucket coalescing (`rate_group_size_ms`) to merge nearby timestamps into buckets
- Window capacity = `rate_limit × window_size_seconds`
- Rejected responses include best-effort `retry_after_ms` and `remaining_after_waiting` metadata
- Admission check and increment are intentionally non-atomic (check-then-act) for throughput
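The capacity arithmetic above is simple enough to sketch directly. This is an illustrative reduction, not Trypema's internals — the function names are ours:

```rust
// Window capacity as described above: rate_limit × window_size_seconds.
// Fractional rate limits (e.g. 5.5 req/s) work because capacity is an f64.
fn window_capacity(rate_limit: f64, window_size_seconds: u64) -> f64 {
    rate_limit * window_size_seconds as f64
}

// Best-effort admission check: would admitting `n` more requests
// keep the current window count within capacity?
fn is_allowed(current_count: f64, rate_limit: f64, window_size_seconds: u64, n: f64) -> bool {
    current_count + n <= window_capacity(rate_limit, window_size_seconds)
}
```

With a 5.5 req/s limit over a 60s window the capacity is exactly 330 requests, which is how a fractional `RateLimit` becomes a concrete window budget.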
Suppressed Strategy
Probabilistic suppression inspired by Ably's rate limiting approach. Instead of a hard accept/reject boundary, traffic is smoothly degraded as the rate approaches the limit.
- Suppression factor formula: `suppression_factor = 1.0 - (rate_limit / perceived_rate)`
- Perceived rate: `max(average_rate_in_window, rate_in_last_1000ms)` — uses the higher of the two to react quickly to bursts
- Hard limit factor: Configurable ceiling (`HardLimitFactor`, default 1.0×) beyond which the suppression factor is forced to `1.0` (all requests suppressed)
- Suppression factor caching: The computed factor is cached per key for `SuppressionFactorCacheMs` (default 100ms) to amortize computation cost
- Returns `Suppressed { is_allowed: bool, suppression_factor: f64 }` — always check `is_allowed`
- Tracks both the total observed rate and the declined count, so accepted usage = observed − declined
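The suppression math can be sketched as a pure function. The names and branch ordering here are our reading of the description above, not the crate's code:

```rust
// Illustrative sketch of the suppression-factor computation.
// `hard_limit_factor` is the ceiling multiplier; past it, everything is suppressed.
fn suppression_factor(rate_limit: f64, perceived_rate: f64, hard_limit_factor: f64) -> f64 {
    // Under (or at) the limit: nothing is suppressed.
    if perceived_rate <= rate_limit {
        return 0.0;
    }
    // Past the hard ceiling: all requests are suppressed.
    if perceived_rate > rate_limit * hard_limit_factor {
        return 1.0;
    }
    // In between: smooth degradation per the formula above.
    1.0 - (rate_limit / perceived_rate)
}
```

For example, with a limit of 10 req/s, a hard limit factor of 1.5, and a perceived rate of 12 req/s, one sixth of traffic would be suppressed; at 20 req/s (past the 15 req/s ceiling) everything is.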
Sliding Window Implementation
- Time is divided into coalescing buckets of `rate_group_size_ms` milliseconds
- Each bucket stores an atomic count and timestamp
- Expired buckets (older than `window_size_seconds`) are pruned on read
- Bucket coalescing trades timing granularity for memory and performance:
  - Larger buckets (50–100ms): fewer allocations, coarser `retry_after_ms`
  - Smaller buckets (1–20ms): more allocations, finer `retry_after_ms`
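A minimal sketch of bucket coalescing and on-read pruning, assuming a single-threaded `Vec` of buckets rather than the crate's concurrent `DashMap`/atomic storage:

```rust
// Each entry is (bucket id, count); timestamps within the same
// `rate_group_size_ms` interval coalesce into one bucket.
struct SlidingWindow {
    rate_group_size_ms: u64,
    window_ms: u64,
    buckets: Vec<(u64, u64)>,
}

impl SlidingWindow {
    fn record(&mut self, now_ms: u64, n: u64) {
        let id = now_ms / self.rate_group_size_ms;
        // Coalesce into the newest bucket when the id matches.
        if let Some((last, count)) = self.buckets.last_mut() {
            if *last == id {
                *count += n;
                return;
            }
        }
        self.buckets.push((id, n));
    }

    fn count(&mut self, now_ms: u64) -> u64 {
        // Prune buckets older than the window, then sum the survivors.
        let oldest = now_ms.saturating_sub(self.window_ms) / self.rate_group_size_ms;
        self.buckets.retain(|(id, _)| *id >= oldest);
        self.buckets.iter().map(|(_, c)| c).sum()
    }
}
```

Raising `rate_group_size_ms` makes `record` hit the coalescing branch more often (fewer allocations) at the cost of coarser expiry, which is exactly the trade-off listed above.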
Rate Limit Stickiness
Rate limits are "sticky" — the first `inc()` call for a given key stores that key's rate limit for its lifetime in the limiter. Subsequent calls with different rate limits for the same key will use the originally stored limit. This prevents mid-window limit changes from causing inconsistent enforcement.
Public API Surface
Core Types
| Type | Description |
|---|---|
| `RateLimiter` | Main facade; provides `.local()`, `.redis()`, `.hybrid()` |
| `RateLimiterOptions` | Top-level configuration struct |
| `RateLimitDecision` | Enum: `Allowed`, `Rejected { .. }`, `Suppressed { .. }` |
| `TrypemaError` | Error enum covering validation and Redis errors |
Validated Newtypes
All configuration values use validated newtypes with `TryFrom` conversions that return `TrypemaError` on invalid input:
| Type | Inner | Default | Validation | Description |
|---|---|---|---|---|
| `RateLimit` | `f64` | — | > 0.0 | Per-second rate limit (supports non-integer) |
| `WindowSizeSeconds` | `u64` | — | >= 1 | Sliding window duration in seconds |
| `RateGroupSizeMs` | `u64` | 100 | >= 1 | Bucket coalescing interval in ms |
| `HardLimitFactor` | `f64` | 1.0 | >= 1.0 | Hard cutoff multiplier (suppressed strategy only) |
| `SuppressionFactorCacheMs` | `u64` | 100 | >= 1 | Suppression factor cache duration in ms |
| `RedisKey` | `String` | — | Non-empty, ≤255 bytes, no `:` | Validated Redis key |
| `SyncIntervalMs` | `u64` | 10 | >= 1 | Hybrid provider Redis sync interval in ms |
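The validated-newtype pattern the table describes can be sketched as follows; this re-implements a `RateLimit`-like type for illustration rather than calling the crate:

```rust
// A newtype whose only constructor is a validating TryFrom,
// making an invalid rate limit unrepresentable after construction.
#[derive(Debug, Clone, Copy, PartialEq)]
struct RateLimit(f64);

#[derive(Debug)]
struct InvalidRateLimit(String);

impl TryFrom<f64> for RateLimit {
    type Error = InvalidRateLimit;

    fn try_from(v: f64) -> Result<Self, Self::Error> {
        if v > 0.0 && v.is_finite() {
            Ok(RateLimit(v))
        } else {
            Err(InvalidRateLimit(format!("rate limit must be > 0.0, got {v}")))
        }
    }
}
```

Because the inner field stays private, every `RateLimit` in circulation has already passed validation — code downstream never re-checks.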
Provider Types
| Type | Module | Description |
|---|---|---|
| `LocalRateLimiterProvider` | `trypema::local` | In-process provider |
| `RedisRateLimiterProvider` | `trypema::redis` | Redis-backed distributed provider |
| `HybridRateLimiterProvider` | `trypema::hybrid` | Local fast-path + Redis sync |
Strategy Types
| Type | Module | Key method | Returns |
|---|---|---|---|
| `AbsoluteLocalRateLimiter` | `trypema::local` | `inc(key, rate, n)` | `RateLimitDecision` |
| `SuppressedLocalRateLimiter` | `trypema::local` | `inc(key, rate, n)` | `RateLimitDecision` |
| `AbsoluteRedisRateLimiter` | `trypema::redis` | `async inc(key, rate, n)` | `Result<RateLimitDecision>` |
| `SuppressedRedisRateLimiter` | `trypema::redis` | `async inc(key, rate, n)` | `Result<RateLimitDecision>` |
| `AbsoluteHybridRateLimiter` | `trypema::hybrid` | `async inc(key, rate, n)` | `Result<RateLimitDecision>` |
| `SuppressedHybridRateLimiter` | `trypema::hybrid` | `async inc(key, rate, n)` | `Result<RateLimitDecision>` |
Key Methods
On RateLimiter:
| Method | Description |
|---|---|
| `new(options)` | Construct a new rate limiter |
| `local()` | Access the local provider |
| `redis()` | Access the Redis provider (requires feature) |
| `hybrid()` | Access the hybrid provider (requires feature) |
| `run_cleanup_loop()` | Start background cleanup (10min stale, 30s interval) |
| `run_cleanup_loop_with_config(stale_ms, interval_ms)` | Start cleanup with custom timing |
| `stop_cleanup_loop()` | Stop the cleanup loop |
On all strategy types:
| Method | Description |
|---|---|
| `inc(key, rate, n)` | Record `n` requests and return the admission decision |
| `is_allowed(key, rate, n)` | Check admission without recording (local only) |
| `get_suppression_factor(key, rate)` | Get the current suppression factor (suppressed strategy only) |
Configuration Reference
Local-Only Configuration
```rust
use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiter, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;

let rl = RateLimiter::new(RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::default(), // 1.0
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(), // 100ms
    },
});
```
Redis Configuration (with redis-tokio feature)
```rust
use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiter, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;
use trypema::redis::RedisRateLimiterOptions;
use trypema::hybrid::SyncIntervalMs;

let window = WindowSizeSeconds::try_from(60).unwrap();
let group = RateGroupSizeMs::try_from(10).unwrap();
let hlf = HardLimitFactor::try_from(1.5).unwrap();
let sfc = SuppressionFactorCacheMs::try_from(50).unwrap();
let sync = SyncIntervalMs::default(); // 10ms

let rl = RateLimiter::new(RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: window,
        rate_group_size_ms: group,
        hard_limit_factor: hlf,
        suppression_factor_cache_ms: sfc,
    },
    redis: RedisRateLimiterOptions {
        connection_manager: /* redis::aio::ConnectionManager */,
        prefix: Some("myapp".to_string()),
        window_size_seconds: window,
        rate_group_size_ms: group,
        hard_limit_factor: hlf,
        suppression_factor_cache_ms: sfc,
        sync_interval_ms: sync,
    },
});
```
Usage Examples
Basic Local Rate Limiting
```rust
use std::sync::Arc;
use trypema::{RateLimit, RateLimitDecision, RateLimiter};

let rl = Arc::new(RateLimiter::new(/* options */));
rl.run_cleanup_loop();

let rate = RateLimit::try_from(10.0).unwrap(); // 10 req/s

match rl.local().absolute().inc("user_123", &rate, 1) {
    RateLimitDecision::Allowed => {
        // Process request
    }
    RateLimitDecision::Rejected { retry_after_ms, .. } => {
        // Return 429 with Retry-After header
    }
    _ => unreachable!(),
}
```
Suppressed Strategy
```rust
match rl.local().suppressed().inc("user_123", &rate, 1) {
    RateLimitDecision::Suppressed { is_allowed, suppression_factor } => {
        if is_allowed {
            // Process request
        } else {
            // Gracefully degrade
        }
    }
    _ => unreachable!(),
}
```
Distributed Rate Limiting (Redis)
```rust
use trypema::redis::RedisKey;

let key = RedisKey::try_from("user_123".to_string()).unwrap();
let rate = RateLimit::try_from(100.0).unwrap();
let decision = rl.redis().absolute().inc(&key, &rate, 1).await?;
```
High-Throughput Distributed (Hybrid)
```rust
let decision = rl.hybrid().absolute().inc(&key, &rate, 1).await?;
```
Feature Flags
| Feature | Description | Activates |
|---|---|---|
| (none) | Local provider only | `dashmap`, `rand`, `strum`, `thiserror`, `tracing`, `ahash` |
| `redis-tokio` | Redis + Hybrid providers via Tokio | `redis` (tokio-comp, aio, connection-manager, script), `tokio`, `async-trait`, `futures` |
| `redis-smol` | Redis + Hybrid providers via Smol | `redis` (smol-comp, aio, connection-manager, script), `smol`, `tokio` (sync only), `async-trait`, `futures` |
`redis-tokio` and `redis-smol` are mutually exclusive. Enabling both produces a compile-time error.
```toml
# Local only (no async runtime needed)
[dependencies]
trypema = "1.0"
```

```toml
# With Redis via Tokio
[dependencies]
trypema = { version = "1.0", features = ["redis-tokio"] }
```

```toml
# With Redis via Smol
[dependencies]
trypema = { version = "1.0", features = ["redis-smol"] }
```
Performance
All benchmarks were run on a single host. Redis benchmarks use a local Redis 7.2+ instance.
Throughput Summary
| Provider | Strategy | Ops/s (typical) | p50 Latency |
|---|---|---|---|
| Local | Absolute | 3.5–6.1M | ~1µs |
| Local | Suppressed | 3.6–7.5M | ~1µs |
| Redis | Absolute | 33–46K | 350–460µs |
| Redis | Suppressed | 28–39K | 400–560µs |
| Hybrid | Absolute | 5.7–10.7M | ~1µs |
| Hybrid | Suppressed | 2.3–9.5M | ~1µs |
Comparison with Other Libraries
Redis-backed (16 threads, 10 keys, 30s):
| Limiter | Ops/s | p50 (µs) |
|---|---|---|
| `redis-cell` (CL.THROTTLE) | 55–62K | 253–279 |
| GCRA (Lua) | 44–51K | 314–327 |
| Trypema Redis (Absolute) | 42–46K | 346–374 |
| Trypema Redis (Suppressed) | 37–39K | 402–408 |
| Trypema Hybrid (Absolute) | 8.3–10.7M | ~1 |
| Trypema Hybrid (Suppressed) | 7.1–9.1M | ~1 |
Local (in-process, hot key, 16 threads, 30s):
| Limiter | Ops/s | p50 (µs) |
|---|---|---|
| `governor` (GCRA) | 4.9M | 1 |
| Trypema (Absolute) | 3.5M | 1 |
| Trypema (Suppressed) | 3.6M | 1 |
| `burster` (SlidingWindowLog) | 413K | 6 |
Local (in-process, 100K uniform keys, 16 threads, 30s):
| Limiter | Ops/s | p50 (µs) |
|---|---|---|
| Trypema (Suppressed) | 7.5M | 1 |
| `governor` (GCRA) | 6.3M | 1 |
| Trypema (Absolute) | 6.1M | 1 |
| `burster` (SlidingWindowLog) | 55K | 105 |
The Hybrid provider delivers 100–200× higher throughput than pure Redis providers while maintaining distributed state consistency through periodic sync.
Safety Guarantees
- `#![forbid(unsafe_code)]` — No unsafe Rust anywhere in the crate
- `#![deny(missing_docs)]` — Every public type, function, method, and variant is documented
- Validated newtypes — All configuration parameters are validated at construction time via `TryFrom`, making invalid states unrepresentable
- Compile-time exclusivity — `redis-tokio` and `redis-smol` features are enforced as mutually exclusive via `compile_error!`
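Mutual feature exclusivity of this kind is typically enforced with a guard like the following; the crate's actual error message may differ:

```rust
// If both Redis features are enabled at once, compilation fails here
// instead of producing a binary with an ambiguous runtime.
#[cfg(all(feature = "redis-tokio", feature = "redis-smol"))]
compile_error!("features `redis-tokio` and `redis-smol` are mutually exclusive");
```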
Cleanup & Memory Management
The cleanup loop removes stale keys to prevent unbounded memory growth:
```rust
let rl = Arc::new(RateLimiter::new(options));

// Default: stale after 10 minutes, cleanup every 30 seconds
rl.run_cleanup_loop();

// Custom timing
rl.run_cleanup_loop_with_config(
    5 * 60 * 1000, // stale_after_ms: 5 minutes
    15 * 1000,     // cleanup_interval_ms: 15 seconds
);

rl.stop_cleanup_loop();
```
The cleanup loop holds only a `Weak<RateLimiter>` reference — when all `Arc<RateLimiter>` references are dropped, the loop exits automatically. Both `run_cleanup_loop()` and `stop_cleanup_loop()` are idempotent.
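The `Weak`-reference pattern behind this behavior can be sketched with a plain thread; the crate's actual loop, timings, and cleanup logic differ:

```rust
use std::sync::{Arc, Weak};
use std::thread;
use std::time::Duration;

// Stand-in for the rate limiter; `cleanup` would prune stale keys.
struct Limiter;

impl Limiter {
    fn cleanup(&self) { /* remove stale keys */ }
}

fn run_cleanup_loop(limiter: &Arc<Limiter>, interval: Duration) -> thread::JoinHandle<()> {
    // Hold only a Weak so the loop never keeps the limiter alive.
    let weak: Weak<Limiter> = Arc::downgrade(limiter);
    thread::spawn(move || loop {
        thread::sleep(interval);
        match weak.upgrade() {
            Some(limiter) => limiter.cleanup(),
            None => break, // all Arcs dropped: exit automatically
        }
    })
}
```

Because the loop upgrades the `Weak` on each tick, dropping the last `Arc` lets `Drop` run immediately and the next tick terminates the thread.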
Error Handling
All errors are represented by `TrypemaError`:
| Variant | When | Feature-gated |
|---|---|---|
| `InvalidRateLimit(String)` | `RateLimit::try_from(0.0)` or negative | No |
| `InvalidWindowSizeSeconds(String)` | `WindowSizeSeconds::try_from(0)` | No |
| `InvalidRateGroupSizeMs(String)` | `RateGroupSizeMs::try_from(0)` | No |
| `InvalidHardLimitFactor(String)` | `HardLimitFactor::try_from(0.5)` | No |
| `InvalidSuppressionFactorCacheMs(String)` | `SuppressionFactorCacheMs::try_from(0)` | No |
| `InvalidRedisKey(String)` | Empty, >255 bytes, or contains `:` | `redis-tokio` / `redis-smol` |
| `RedisError(redis::RedisError)` | Connection/command/protocol failure | `redis-tokio` / `redis-smol` |
| `UnexpectedRedisScriptResult { .. }` | Lua script returned an unexpected result | `redis-tokio` / `redis-smol` |
| `CustomError(String)` | Internal/extension use | No |
Redis Requirements
- Version: Redis 7.2+ (uses `redis.call("TIME")` in Lua scripts for server-side timestamps)
- Connection: Requires a `redis::aio::ConnectionManager` for automatic reconnection
- Key format: `{prefix}:{rate_type}:{user_key}` — the `:` separator is why `RedisKey` forbids `:` in user-provided keys
- Key TTL: Automatically set to `2 × window_size_seconds`
Dependencies
Core (always included)
| Crate | Version | Purpose |
|---|---|---|
| `dashmap` | 6.1.0 | Concurrent hash map for per-key state |
| `ahash` | 0.8.12 | Fast hashing for `DashMap` |
| `rand` | 0.10.0 | Random number generation for suppression |
| `strum` / `strum_macros` | 0.28.0 | Enum `Display` derive |
| `thiserror` | 2.0.18 | Error derive macro |
| `tracing` | 0.1.44 | Structured logging (cleanup warnings) |
Optional (Redis features)
| Crate | Version | Feature |
|---|---|---|
| `redis` | 1.0.4 | Redis client (with `script`, `aio`, `connection-manager`) |
| `tokio` | 1.50.0 | Async runtime (`redis-tokio`: full; `redis-smol`: sync only) |
| `smol` | 2.0.2 | Async runtime (`redis-smol` only) |
| `futures` | 0.3.32 | Async utilities |
| `async-trait` | 0.1.89 | Async trait support |
Design Decisions
Best-Effort Concurrency (Check-then-Act)
The admission check and increment are intentionally not atomic. Under extreme concurrency, slightly more requests may be admitted than the strict limit allows. This is a deliberate trade-off for higher throughput and predictable tail latency. For tighter enforcement, use lower `rate_group_size_ms` values.
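The race this paragraph describes is easy to see in a reduced form; `try_admit` here is an illustrative stand-in, not the crate's API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Check-then-act: the load and the increment are separate operations.
// Two threads can both observe a count below capacity and both admit,
// which is why enforcement is best-effort rather than strict.
fn try_admit(count: &AtomicU64, capacity: u64) -> bool {
    if count.load(Ordering::Relaxed) >= capacity {
        return false; // over capacity: reject
    }
    // Another thread may increment between the check above and this add.
    count.fetch_add(1, Ordering::Relaxed);
    true
}
```

Making this strict would require a compare-and-swap loop (or a lock) on the hot path, which is the latency cost the design avoids.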
Sticky Rate Limits
The first `inc()` call for a key stores that key's rate limit for the lifetime of the key. This prevents mid-window limit changes from causing inconsistent enforcement and eliminates races between concurrent calls with different limits for the same key.
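Stickiness amounts to a first-writer-wins insert per key. A reduced sketch with a plain `HashMap` (the names are ours; Trypema stores this in its concurrent per-key state):

```rust
use std::collections::HashMap;

// Sticky limits: the first limit seen for a key wins for that key's lifetime.
struct Limiter {
    limits: HashMap<String, f64>,
}

impl Limiter {
    fn effective_limit(&mut self, key: &str, requested: f64) -> f64 {
        // `or_insert` only writes when the key is absent, so later
        // callers with a different limit get the original value back.
        *self.limits.entry(key.to_string()).or_insert(requested)
    }
}
```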
Server-Side Timestamps (Redis)
Redis Lua scripts use `redis.call("TIME")` rather than client-provided timestamps. This eliminates clock-skew issues in distributed deployments.
Hybrid State Machine
The hybrid provider uses a 3-state machine per key (`Undefined` → `Accepting` → `Rejecting`/`Suppressing`) to minimise Redis round-trips. State transitions happen only when Redis sync results arrive, keeping the fast path entirely local.
Known Limitations
- No strict linearizability — Admission checks are best-effort for throughput
- Hybrid sync lag — Local state may be up to `sync_interval_ms` (default 10ms) behind Redis; distributed over-admission is possible within that window
- Redis provider throughput — Limited by Redis's single-threaded execution (~30–60K ops/s per key); use the hybrid provider for higher throughput
- Rejection metadata accuracy — `retry_after_ms` and `remaining_after_waiting` are best-effort estimates
- Single Redis instance — No built-in Redis Cluster or Sentinel support
Roadmap
Planned for future releases:
- Metrics and observability hooks — Callbacks or trait-based hooks for integrating with metrics systems (Prometheus, OpenTelemetry, etc.)
Links
- Crate: crates.io/crates/trypema
- Documentation: docs.rs/trypema
- Repository: github.com/dev-davexoyinbo/trypema
- Homepage: trypema.davidoyinbo.com
- License: MIT

