Changelog

v1.0.0

First stable release of Trypema — three providers, two strategies, extreme throughput, and zero unsafe code.

Release Date: March 2026 | Crate: trypema | Documentation: docs.rs/trypema | Repository: github.com/dev-davexoyinbo/trypema | License: MIT | Rust Edition: 2024 | MSRV: Rust 2024 edition


What is Trypema?

Trypema is a high-performance rate limiting library for Rust, designed for concurrency safety, low overhead, and predictable latency. It provides a unified API across three backends (local, Redis, hybrid) and two enforcement strategies (absolute, suppressed), giving you a 3×2 matrix of rate limiters accessible through a single facade.

The name comes from the Koine Greek τρυπήματος (trypematos), meaning "hole" or "opening" — from the biblical passage "through the eye of a needle" (Matthew 19:24, Mark 10:25, Luke 18:25). The rate limiter acts as a narrow gate: requests must pass through the eye of the needle.


Release Highlights

This is the first stable release of Trypema. Key highlights:

  • Three providers — Local (in-process), Redis (distributed), and Hybrid (local fast-path with Redis sync) — all accessed through a single RateLimiter facade
  • Two strategies — Absolute (deterministic sliding-window enforcement) and Suppressed (probabilistic degradation inspired by Ably's approach)
  • Extreme throughput — Hybrid provider achieves 7–10M ops/s with p50 latency of ~1µs; local provider competitive with governor at 3.5–7.5M ops/s
  • Zero unsafe code — #![forbid(unsafe_code)] enforced across the entire crate
  • Complete documentation — #![deny(missing_docs)] ensures every public item is documented
  • Flexible async runtime support — Choose between Tokio (redis-tokio) or Smol (redis-smol) for Redis-backed providers
  • Non-integer rate limits — Fractional rates such as 5.5 requests/second, via the f64-backed RateLimit type
  • Bucket coalescing — Configurable time-bucket granularity trades memory for timing precision
  • Automatic cleanup — Background cleanup loop with Weak references prevents memory leaks without preventing Drop

Architecture

Provider × Strategy Matrix

Trypema exposes every combination of provider and strategy through a builder-style facade:

RateLimiter
├── .local()     → LocalRateLimiterProvider
│   ├── .absolute()   → AbsoluteLocalRateLimiter
│   └── .suppressed() → SuppressedLocalRateLimiter
├── .redis()     → RedisRateLimiterProvider       [requires redis-tokio or redis-smol]
│   ├── .absolute()   → AbsoluteRedisRateLimiter
│   └── .suppressed() → SuppressedRedisRateLimiter
└── .hybrid()    → HybridRateLimiterProvider      [requires redis-tokio or redis-smol]
    ├── .absolute()   → AbsoluteHybridRateLimiter
    └── .suppressed() → SuppressedHybridRateLimiter

Providers

Local Provider

  • Storage: DashMap (with ahash) for concurrent per-key state, plus AtomicU64 counters
  • Latency: ~1µs at p50 under load
  • Dependencies: No external services required
  • Use case: Single-process rate limiting, embedded systems, CLI tools, or anywhere Redis is unavailable
  • Thread safety: DashMap sharding keeps lock contention low; counter updates are single atomic increments
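The storage design can be approximated with std types standing in for DashMap — a rough sketch only; the real DashMap shards with ahash and finer-grained internals, and the type and method names below are illustrative:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, RwLock};

/// Toy stand-in for the per-key counter table: a fixed number of
/// RwLock-guarded shards, each mapping key -> atomic counter.
struct ShardedCounters {
    shards: Vec<RwLock<HashMap<String, Arc<AtomicU64>>>>,
}

impl ShardedCounters {
    fn new(n_shards: usize) -> Self {
        Self { shards: (0..n_shards).map(|_| RwLock::new(HashMap::new())).collect() }
    }

    fn shard_for(&self, key: &str) -> &RwLock<HashMap<String, Arc<AtomicU64>>> {
        // Cheap deterministic shard choice; DashMap uses ahash here.
        let h: usize = key.bytes().map(|b| b as usize).sum();
        &self.shards[h % self.shards.len()]
    }

    /// Returns the new total for the key after adding n.
    fn inc(&self, key: &str, n: u64) -> u64 {
        let shard = self.shard_for(key);
        // Fast path: shard read-lock only; the bump is a plain atomic add.
        if let Some(counter) = shard.read().unwrap().get(key) {
            return counter.fetch_add(n, Ordering::Relaxed) + n;
        }
        // Slow path (first sighting of the key): insert under a write lock.
        let mut map = shard.write().unwrap();
        let counter = map
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(AtomicU64::new(0)));
        counter.fetch_add(n, Ordering::Relaxed) + n
    }
}

fn main() {
    let counters = ShardedCounters::new(16);
    counters.inc("user_123", 1);
    counters.inc("user_123", 2);
    println!("total = {}", counters.inc("user_123", 1));
}
```

The point of the split paths is that steady-state traffic for a known key never takes a write lock, which is what keeps the hot path cheap.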

Redis Provider

  • Storage: Redis 7.2+ via atomic Lua scripts
  • Latency: One Redis round-trip per inc() or is_allowed() call (typically 250–500µs)
  • Timestamps: Server-side via redis.call("TIME") — immune to client clock skew
  • Key format: {prefix}:{rate_type}:{user_key} with configurable prefix
  • TTL: Automatic key expiration set to 2 × window_size_seconds
  • Use case: Distributed rate limiting across multiple application instances
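The key and TTL conventions above are simple enough to sketch; the helper names here are illustrative, not the crate's API:

```rust
/// Builds the Redis key `{prefix}:{rate_type}:{user_key}`.
/// `user_key` must not contain ':' -- the separator would make the key
/// ambiguous, which is why RedisKey validation forbids it.
fn redis_key(prefix: &str, rate_type: &str, user_key: &str) -> String {
    assert!(!user_key.contains(':'), "user key must not contain ':'");
    format!("{prefix}:{rate_type}:{user_key}")
}

/// TTL is twice the window so a key always outlives the data it covers.
fn key_ttl_seconds(window_size_seconds: u64) -> u64 {
    2 * window_size_seconds
}

fn main() {
    println!("{}", redis_key("myapp", "absolute", "user_123")); // myapp:absolute:user_123
    println!("{}", key_ttl_seconds(60)); // 120
}
```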

Hybrid Provider

  • Storage: Local DashMap fast-path + periodic Redis sync via background RedisCommitter actor
  • Latency: p50 ~1µs (local path); Redis sync is batched and asynchronous
  • Sync mechanism: Background actor flushes accumulated local increments to Redis at configurable intervals (default 10ms)
  • State machine: Per-key 3-state machine: Undefined → Accepting → Rejecting/Suppressing
  • Thundering herd prevention: Per-key tokio::sync::Mutex prevents concurrent Redis round-trips for the same key
  • Inactivity detection: Epoch/watch channel pattern detects when no new increments arrive, avoiding unnecessary Redis flushes
  • Use case: High-throughput distributed systems where Redis round-trip latency per request is unacceptable
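At its core, the batched-sync fast path reduces to an atomic pending counter that the committer drains once per tick. This is a toy model, not the actual async RedisCommitter actor, and the names are illustrative:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Requests bump a local pending counter; a committer drains it with
/// swap(0) at each sync tick and would send the drained delta to Redis
/// in one batched command.
struct PendingIncrements {
    pending: AtomicU64,
}

impl PendingIncrements {
    fn record(&self, n: u64) {
        // Fast path: no lock, no network -- just an atomic add.
        self.pending.fetch_add(n, Ordering::Relaxed);
    }

    /// Called every sync_interval_ms; returns the batch to flush and
    /// resets the counter in one atomic step, so no increment is lost
    /// between the read and the reset.
    fn drain(&self) -> u64 {
        self.pending.swap(0, Ordering::Relaxed)
    }
}

fn main() {
    let p = PendingIncrements { pending: AtomicU64::new(0) };
    p.record(3);
    p.record(2);
    println!("flush batch: {}", p.drain()); // 5
    println!("after flush: {}", p.drain()); // 0
}
```

A drain that returns 0 corresponds to the inactivity case the epoch/watch pattern detects: nothing accumulated, so no Redis flush is needed.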

Strategies

Absolute Strategy

Deterministic sliding-window enforcement. Every request is either Allowed or Rejected — no probabilistic behavior.

  • Counts requests within a sliding window of window_size_seconds
  • Uses bucket coalescing (rate_group_size_ms) to merge nearby timestamps into buckets
  • Window capacity = rate_limit × window_size_seconds
  • Rejected responses include best-effort retry_after_ms and remaining_after_waiting metadata
  • Admission check and increment are intentionally non-atomic (check-then-act) for throughput
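A minimal model of the capacity rule and the non-atomic check-then-act, using plain atomics; the function names are illustrative, not the crate's internals:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Window capacity as described above: rate_limit requests/second over
/// the window. f64 rates are supported, so capacity is rounded down.
fn window_capacity(rate_limit: f64, window_size_seconds: u64) -> u64 {
    (rate_limit * window_size_seconds as f64).floor() as u64
}

/// Check-then-act admission: the load and the add are two separate
/// atomic operations, so two racing callers can both pass the check.
/// That slight over-admission is the documented throughput trade-off.
fn try_admit(count_in_window: &AtomicU64, n: u64, capacity: u64) -> bool {
    if count_in_window.load(Ordering::Relaxed) + n <= capacity {
        count_in_window.fetch_add(n, Ordering::Relaxed); // deliberately not a CAS loop
        true
    } else {
        false
    }
}

fn main() {
    let cap = window_capacity(5.5, 60); // 5.5 req/s over 60 s -> 330
    let count = AtomicU64::new(0);
    let mut admitted = 0u64;
    for _ in 0..400 {
        if try_admit(&count, 1, cap) {
            admitted += 1;
        }
    }
    println!("capacity={cap} admitted={admitted}");
}
```

Single-threaded, exactly `capacity` requests are admitted; only under concurrent racing callers can the count overshoot slightly.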

Suppressed Strategy

Probabilistic suppression inspired by Ably's rate limiting approach. Instead of a hard accept/reject boundary, traffic is smoothly degraded as the rate approaches the limit.

  • Suppression factor formula: suppression_factor = 1.0 - (rate_limit / perceived_rate)
  • Perceived rate: max(average_rate_in_window, rate_in_last_1000ms) — uses the higher of the two to react quickly to bursts
  • Hard limit factor: Configurable ceiling (HardLimitFactor, default 1.0×) beyond which suppression factor is forced to 1.0 (all requests suppressed)
  • Suppression factor caching: Computed factor is cached per key for SuppressionFactorCacheMs (default 100ms) to amortize computation cost
  • Returns Suppressed { is_allowed: bool, suppression_factor: f64 } — always check is_allowed
  • Tracks both total observed rate and declined count, so accepted usage = observed - declined
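The formulas above can be sketched as follows. This is an assumption-laden reading of the bullets, not the crate's source: the function names and the exact placement of the hard-limit clamp are illustrative.

```rust
/// Suppression factor: 0.0 at or below the limit, rising toward 1.0 as
/// the perceived rate exceeds it, and forced to 1.0 once the hard
/// ceiling (rate_limit * hard_limit_factor) is reached.
fn suppression_factor(rate_limit: f64, perceived_rate: f64, hard_limit_factor: f64) -> f64 {
    if perceived_rate >= rate_limit * hard_limit_factor {
        return 1.0; // past the hard ceiling: suppress everything
    }
    (1.0 - rate_limit / perceived_rate).max(0.0)
}

/// Perceived rate reacts quickly to bursts by taking the higher of the
/// window average and the last-second rate.
fn perceived_rate(avg_rate_in_window: f64, rate_last_1000ms: f64) -> f64 {
    avg_rate_in_window.max(rate_last_1000ms)
}

fn main() {
    let limit = 100.0;
    let hard = 2.0; // non-default ceiling, for illustration
    for observed in [50.0, 100.0, 150.0, 300.0] {
        let f = suppression_factor(limit, perceived_rate(observed, observed), hard);
        println!("observed={observed} factor={f:.2}");
    }
}
```

With the default HardLimitFactor of 1.0 the ceiling coincides with the rate limit itself, so any excess over the limit is fully suppressed; raising the factor widens the band in which traffic degrades gradually.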

Sliding Window Implementation

  • Time is divided into coalescing buckets of rate_group_size_ms milliseconds
  • Each bucket stores an atomic count and timestamp
  • Expired buckets (older than window_size_seconds) are pruned on read
  • Bucket coalescing trades timing granularity for memory and performance:
    • Larger buckets (50–100ms): fewer allocations, coarser retry_after_ms
    • Smaller buckets (1–20ms): more allocations, finer retry_after_ms
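The bucket arithmetic above can be sketched as follows; the (start_ms, count) vector is an illustrative representation, not the crate's internal layout:

```rust
/// Bucket coalescing: timestamps within the same rate_group_size_ms
/// interval share one bucket, so memory scales with
/// window_ms / rate_group_size_ms rather than with request count.
fn bucket_index(now_ms: u64, rate_group_size_ms: u64) -> u64 {
    now_ms / rate_group_size_ms
}

/// Sum of live buckets, pruning any bucket older than the window
/// (this is the "pruned on read" step).
fn count_in_window(buckets: &mut Vec<(u64, u64)>, now_ms: u64, window_ms: u64) -> u64 {
    buckets.retain(|&(start_ms, _)| now_ms.saturating_sub(start_ms) < window_ms);
    buckets.iter().map(|&(_, count)| count).sum()
}

fn main() {
    let group = 100; // rate_group_size_ms
    let mut buckets: Vec<(u64, u64)> = Vec::new();
    // Requests at 10ms, 40ms, 90ms coalesce into bucket 0;
    // the request at 150ms lands in bucket 1.
    for t in [10u64, 40, 90, 150] {
        let start = bucket_index(t, group) * group;
        match buckets.iter_mut().find(|(s, _)| *s == start) {
            Some(b) => b.1 += 1,
            None => buckets.push((start, 1)),
        }
    }
    println!("buckets={buckets:?}");
    // At now=1050ms with a 1000ms window, bucket 0 has expired.
    println!("count={}", count_in_window(&mut buckets, 1_050, 1_000));
}
```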

Rate Limit Stickiness

Rate limits are "sticky" — the first inc() call for a given key stores that key's rate limit for its lifetime in the limiter. Subsequent calls with different rate limits for the same key will use the originally stored limit. This prevents mid-window limit changes from causing inconsistent enforcement.
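The stickiness rule amounts to a first-writer-wins map entry. A sketch with a plain HashMap — the helper is hypothetical, not the crate's API:

```rust
use std::collections::HashMap;

/// The limit stored by the first call for a key wins; later calls with
/// a different limit are ignored for that key's lifetime in the map.
fn effective_limit(stored: &mut HashMap<String, f64>, key: &str, requested: f64) -> f64 {
    *stored.entry(key.to_string()).or_insert(requested)
}

fn main() {
    let mut stored = HashMap::new();
    // First call stores 10.0 for this key.
    println!("{}", effective_limit(&mut stored, "user_123", 10.0));
    // A later call requesting 99.0 still gets the original 10.0.
    println!("{}", effective_limit(&mut stored, "user_123", 99.0));
}
```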


Public API Surface

Core Types

| Type | Description |
| --- | --- |
| RateLimiter | Main facade; provides .local(), .redis(), .hybrid() |
| RateLimiterOptions | Top-level configuration struct |
| RateLimitDecision | Enum: Allowed, Rejected { .. }, Suppressed { .. } |
| TrypemaError | Error enum covering validation and Redis errors |

Validated Newtypes

All configuration values use validated newtypes with TryFrom conversions that return TrypemaError on invalid input:

| Type | Inner | Default | Validation | Description |
| --- | --- | --- | --- | --- |
| RateLimit | f64 | | > 0.0 | Per-second rate limit (supports non-integer) |
| WindowSizeSeconds | u64 | | >= 1 | Sliding window duration in seconds |
| RateGroupSizeMs | u64 | 100 | >= 1 | Bucket coalescing interval in ms |
| HardLimitFactor | f64 | 1.0 | >= 1.0 | Hard cutoff multiplier (suppressed strategy only) |
| SuppressionFactorCacheMs | u64 | 100 | >= 1 | Suppression factor cache duration in ms |
| RedisKey | String | | Non-empty, ≤255 bytes, no : | Validated Redis key |
| SyncIntervalMs | u64 | 10 | >= 1 | Hybrid provider Redis sync interval in ms |
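The pattern behind this table can be re-created in a few lines. The following sketch is modeled on the documented defaults and validation rules — it mirrors the names in the table but is not the crate's source:

```rust
use std::convert::TryFrom;

/// Validated newtype: construction is the only way in, and it validates.
#[derive(Debug, Clone, Copy, PartialEq)]
struct RateGroupSizeMs(u64);

/// Stand-in for the corresponding TrypemaError variant.
#[derive(Debug, PartialEq)]
struct InvalidRateGroupSizeMs(String);

impl TryFrom<u64> for RateGroupSizeMs {
    type Error = InvalidRateGroupSizeMs;
    fn try_from(ms: u64) -> Result<Self, Self::Error> {
        if ms >= 1 {
            Ok(RateGroupSizeMs(ms))
        } else {
            Err(InvalidRateGroupSizeMs("must be >= 1".to_string()))
        }
    }
}

impl Default for RateGroupSizeMs {
    fn default() -> Self {
        RateGroupSizeMs(100) // default from the table above
    }
}

fn main() {
    println!("{:?}", RateGroupSizeMs::try_from(10)); // Ok(...)
    println!("{:?}", RateGroupSizeMs::try_from(0)); // Err(...)
    println!("{:?}", RateGroupSizeMs::default());
}
```

Because every field of the options structs is one of these newtypes, an invalid configuration cannot be constructed at all — validation errors surface at the TryFrom call site rather than at request time.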

Provider Types

| Type | Module | Description |
| --- | --- | --- |
| LocalRateLimiterProvider | trypema::local | In-process provider |
| RedisRateLimiterProvider | trypema::redis | Redis-backed distributed provider |
| HybridRateLimiterProvider | trypema::hybrid | Local fast-path + Redis sync |

Strategy Types

| Type | Module | Key method | Returns |
| --- | --- | --- | --- |
| AbsoluteLocalRateLimiter | trypema::local | inc(key, rate, n) | RateLimitDecision |
| SuppressedLocalRateLimiter | trypema::local | inc(key, rate, n) | RateLimitDecision |
| AbsoluteRedisRateLimiter | trypema::redis | async inc(key, rate, n) | Result<RateLimitDecision> |
| SuppressedRedisRateLimiter | trypema::redis | async inc(key, rate, n) | Result<RateLimitDecision> |
| AbsoluteHybridRateLimiter | trypema::hybrid | async inc(key, rate, n) | Result<RateLimitDecision> |
| SuppressedHybridRateLimiter | trypema::hybrid | async inc(key, rate, n) | Result<RateLimitDecision> |

Key Methods

On RateLimiter:

| Method | Description |
| --- | --- |
| new(options) | Construct a new rate limiter |
| local() | Access the local provider |
| redis() | Access the Redis provider (requires feature) |
| hybrid() | Access the hybrid provider (requires feature) |
| run_cleanup_loop() | Start background cleanup (10min stale, 30s interval) |
| run_cleanup_loop_with_config(stale_ms, interval_ms) | Start cleanup with custom timing |
| stop_cleanup_loop() | Stop the cleanup loop |

On all strategy types:

| Method | Description |
| --- | --- |
| inc(key, rate, n) | Record n requests and return admission decision |
| is_allowed(key, rate, n) | Check admission without recording (local only) |
| get_suppression_factor(key, rate) | Get current suppression factor (suppressed strategy only) |

Configuration Reference

Local-Only Configuration

use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiter, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;

let rl = RateLimiter::new(RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: WindowSizeSeconds::try_from(60).unwrap(),
        rate_group_size_ms: RateGroupSizeMs::try_from(10).unwrap(),
        hard_limit_factor: HardLimitFactor::default(),           // 1.0
        suppression_factor_cache_ms: SuppressionFactorCacheMs::default(), // 100ms
    },
});

Redis Configuration (with redis-tokio feature)

use trypema::{
    HardLimitFactor, RateGroupSizeMs, RateLimiter, RateLimiterOptions,
    SuppressionFactorCacheMs, WindowSizeSeconds,
};
use trypema::local::LocalRateLimiterOptions;
use trypema::redis::RedisRateLimiterOptions;
use trypema::hybrid::SyncIntervalMs;

let window = WindowSizeSeconds::try_from(60).unwrap();
let group = RateGroupSizeMs::try_from(10).unwrap();
let hlf = HardLimitFactor::try_from(1.5).unwrap();
let sfc = SuppressionFactorCacheMs::try_from(50).unwrap();
let sync = SyncIntervalMs::default(); // 10ms

let rl = RateLimiter::new(RateLimiterOptions {
    local: LocalRateLimiterOptions {
        window_size_seconds: window,
        rate_group_size_ms: group,
        hard_limit_factor: hlf,
        suppression_factor_cache_ms: sfc,
    },
    redis: RedisRateLimiterOptions {
        connection_manager: /* redis::aio::ConnectionManager */,
        prefix: Some("myapp".to_string()),
        window_size_seconds: window,
        rate_group_size_ms: group,
        hard_limit_factor: hlf,
        suppression_factor_cache_ms: sfc,
        sync_interval_ms: sync,
    },
});

Usage Examples

Basic Local Rate Limiting

use std::sync::Arc;
use trypema::{RateLimit, RateLimitDecision, RateLimiter};

let rl = Arc::new(RateLimiter::new(/* options */));
rl.run_cleanup_loop();

let rate = RateLimit::try_from(10.0).unwrap(); // 10 req/s

match rl.local().absolute().inc("user_123", &rate, 1) {
    RateLimitDecision::Allowed => {
        // Process request
    }
    RateLimitDecision::Rejected { retry_after_ms, .. } => {
        // Return 429 with Retry-After header
    }
    _ => unreachable!(),
}

Suppressed Strategy

match rl.local().suppressed().inc("user_123", &rate, 1) {
    RateLimitDecision::Suppressed { is_allowed, suppression_factor } => {
        if is_allowed {
            // Process request
        } else {
            // Gracefully degrade
        }
    }
    _ => unreachable!(),
}

Distributed Rate Limiting (Redis)

use trypema::redis::RedisKey;

let key = RedisKey::try_from("user_123".to_string()).unwrap();
let rate = RateLimit::try_from(100.0).unwrap();

let decision = rl.redis().absolute().inc(&key, &rate, 1).await?;

High-Throughput Distributed (Hybrid)

let decision = rl.hybrid().absolute().inc(&key, &rate, 1).await?;

Feature Flags

| Feature | Description | Activates |
| --- | --- | --- |
| (none) | Local provider only | dashmap, rand, strum, thiserror, tracing, ahash |
| redis-tokio | Redis + Hybrid providers via Tokio | redis (tokio-comp, aio, connection-manager, script), tokio, async-trait, futures |
| redis-smol | Redis + Hybrid providers via Smol | redis (smol-comp, aio, connection-manager, script), smol, tokio (sync only), async-trait, futures |

redis-tokio and redis-smol are mutually exclusive. Enabling both produces a compile-time error.

# Local only (no async runtime needed)
[dependencies]
trypema = "1.0"

# With Redis via Tokio
[dependencies]
trypema = { version = "1.0", features = ["redis-tokio"] }

# With Redis via Smol
[dependencies]
trypema = { version = "1.0", features = ["redis-smol"] }

Performance

All benchmarks were run on a single host. Redis benchmarks use a local Redis 7.2+ instance.

Throughput Summary

| Provider | Strategy | Ops/s (typical) | p50 latency |
| --- | --- | --- | --- |
| Local | Absolute | 3.5–6.1M | ~1µs |
| Local | Suppressed | 3.6–7.5M | ~1µs |
| Redis | Absolute | 33–46K | 350–460µs |
| Redis | Suppressed | 28–39K | 400–560µs |
| Hybrid | Absolute | 5.7–10.7M | ~1µs |
| Hybrid | Suppressed | 2.3–9.5M | ~1µs |

Comparison with Other Libraries

Redis-backed (16 threads, 10 keys, 30s):

| Limiter | Ops/s | p50 (µs) |
| --- | --- | --- |
| redis-cell (CL.THROTTLE) | 55–62K | 253–279 |
| GCRA (Lua) | 44–51K | 314–327 |
| Trypema Redis (Absolute) | 42–46K | 346–374 |
| Trypema Redis (Suppressed) | 37–39K | 402–408 |
| Trypema Hybrid (Absolute) | 8.3–10.7M | ~1 |
| Trypema Hybrid (Suppressed) | 7.1–9.1M | ~1 |

Local (in-process, hot key, 16 threads, 30s):

| Limiter | Ops/s | p50 (µs) |
| --- | --- | --- |
| governor (GCRA) | 4.9M | 1 |
| Trypema (Absolute) | 3.5M | 1 |
| Trypema (Suppressed) | 3.6M | 1 |
| burster (SlidingWindowLog) | 413K | 6 |

Local (in-process, 100K uniform keys, 16 threads, 30s):

| Limiter | Ops/s | p50 (µs) |
| --- | --- | --- |
| Trypema (Suppressed) | 7.5M | 1 |
| governor (GCRA) | 6.3M | 1 |
| Trypema (Absolute) | 6.1M | 1 |
| burster (SlidingWindowLog) | 55K | 105 |

The Hybrid provider delivers 100–200× higher throughput than pure Redis providers while maintaining distributed state consistency through periodic sync.


Safety Guarantees

  • #![forbid(unsafe_code)] — No unsafe Rust anywhere in the crate
  • #![deny(missing_docs)] — Every public type, function, method, and variant is documented
  • Validated newtypes — All configuration parameters are validated at construction time via TryFrom, making invalid states unrepresentable
  • Compile-time exclusivity — redis-tokio and redis-smol features are enforced as mutually exclusive via compile_error!
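The exclusivity guard is conventionally written as a cfg-gated compile_error!. A sketch — the exact message and surrounding code are illustrative, and with neither feature enabled (as in a plain rustc build) the cfg is false and the guard compiles to nothing:

```rust
// Lives at the top of lib.rs in this pattern: if both features are
// enabled at once, compilation fails with a clear message.
#[cfg(all(feature = "redis-tokio", feature = "redis-smol"))]
compile_error!("features `redis-tokio` and `redis-smol` are mutually exclusive");

/// Trivial witness that the build got past the guard.
fn build_ok() -> bool {
    true
}

fn main() {
    assert!(build_ok());
    println!("feature guard not triggered");
}
```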

Cleanup & Memory Management

The cleanup loop removes stale keys to prevent unbounded memory growth:

let rl = Arc::new(RateLimiter::new(options));

// Default: stale after 10 minutes, cleanup every 30 seconds
rl.run_cleanup_loop();

// Custom timing
rl.run_cleanup_loop_with_config(
    5 * 60 * 1000,  // stale_after_ms: 5 minutes
    15 * 1000,      // cleanup_interval_ms: 15 seconds
);

rl.stop_cleanup_loop();

The cleanup loop holds only a Weak<RateLimiter> reference — when all Arc<RateLimiter> references are dropped, the loop exits automatically. Both run_cleanup_loop() and stop_cleanup_loop() are idempotent.
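The Weak-reference pattern can be sketched with plain threads. The crate's loop is async and its internals may differ; this shows only the shutdown mechanics, with illustrative names:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Weak};
use std::thread;
use std::time::Duration;

struct Limiter; // stand-in for the real RateLimiter state

/// The loop owns only a Weak handle, so it cannot keep the limiter
/// alive; once the last Arc is dropped, upgrade() returns None and the
/// thread exits on its own, allowing Drop to run.
fn spawn_cleanup(weak: Weak<Limiter>, ticks: Arc<AtomicU64>) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        match weak.upgrade() {
            Some(_limiter) => {
                // ... prune keys idle longer than stale_after_ms here ...
                ticks.fetch_add(1, Ordering::Relaxed);
                thread::sleep(Duration::from_millis(10));
            }
            None => break, // all strong references gone: stop cleanly
        }
    })
}

fn main() {
    let ticks = Arc::new(AtomicU64::new(0));
    let limiter = Arc::new(Limiter);
    let handle = spawn_cleanup(Arc::downgrade(&limiter), ticks.clone());
    thread::sleep(Duration::from_millis(50));
    drop(limiter);          // last strong reference dropped
    handle.join().unwrap(); // loop observes None and exits
    println!("cleanup ran {} times, then exited", ticks.load(Ordering::Relaxed));
}
```

The join succeeding without any explicit stop signal is the key property: dropping the limiter is itself the shutdown signal.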


Error Handling

All errors are represented by TrypemaError:

| Variant | When | Feature-gated |
| --- | --- | --- |
| InvalidRateLimit(String) | RateLimit::try_from(0.0) or negative | No |
| InvalidWindowSizeSeconds(String) | WindowSizeSeconds::try_from(0) | No |
| InvalidRateGroupSizeMs(String) | RateGroupSizeMs::try_from(0) | No |
| InvalidHardLimitFactor(String) | HardLimitFactor::try_from(0.5) | No |
| InvalidSuppressionFactorCacheMs(String) | SuppressionFactorCacheMs::try_from(0) | No |
| InvalidRedisKey(String) | Empty, >255 bytes, or contains : | redis-tokio / redis-smol |
| RedisError(redis::RedisError) | Connection/command/protocol failure | redis-tokio / redis-smol |
| UnexpectedRedisScriptResult { .. } | Lua script returned unexpected result | redis-tokio / redis-smol |
| CustomError(String) | Internal/extension use | No |

Redis Requirements

  • Version: Redis 7.2+ (uses redis.call("TIME") in Lua scripts for server-side timestamps)
  • Connection: Requires a redis::aio::ConnectionManager for automatic reconnection
  • Key format: {prefix}:{rate_type}:{user_key} — the : separator is why RedisKey forbids : in user-provided keys
  • Key TTL: Automatically set to 2 × window_size_seconds

Dependencies

Core (always included)

| Crate | Version | Purpose |
| --- | --- | --- |
| dashmap | 6.1.0 | Concurrent hash map for per-key state |
| ahash | 0.8.12 | Fast hashing for DashMap |
| rand | 0.10.0 | Random number generation for suppression |
| strum / strum_macros | 0.28.0 | Enum Display derive |
| thiserror | 2.0.18 | Error derive macro |
| tracing | 0.1.44 | Structured logging (cleanup warnings) |

Optional (Redis features)

| Crate | Version | Purpose |
| --- | --- | --- |
| redis | 1.0.4 | Redis client (with script, aio, connection-manager) |
| tokio | 1.50.0 | Async runtime (redis-tokio: full; redis-smol: sync only) |
| smol | 2.0.2 | Async runtime (redis-smol only) |
| futures | 0.3.32 | Async utilities |
| async-trait | 0.1.89 | Async trait support |

Design Decisions

Best-Effort Concurrency (Check-then-Act)

The admission check and increment are intentionally not atomic. Under extreme concurrency, slightly more requests may be admitted than the strict limit allows. This is a deliberate trade-off for higher throughput and predictable tail latency. Smaller rate_group_size_ms values tighten timing accuracy, but enforcement remains best-effort by design.

Sticky Rate Limits

The first inc() call for a key stores that key's rate limit for the lifetime of the key. This prevents mid-window limit changes from causing inconsistent enforcement and eliminates races between concurrent calls with different limits for the same key.

Server-Side Timestamps (Redis)

Redis Lua scripts use redis.call("TIME") rather than client-provided timestamps. This eliminates clock skew issues in distributed deployments.

Hybrid State Machine

The hybrid provider uses a 3-state machine per key (Undefined → Accepting → Rejecting/Suppressing) to minimise Redis round-trips. State transitions happen only when Redis sync results arrive, keeping the fast path entirely local.


Known Limitations

  • No strict linearizability — Admission checks are best-effort for throughput
  • Hybrid sync lag — Local state may be up to sync_interval_ms (default 10ms) behind Redis; distributed over-admission is possible within that window
  • Redis provider throughput — Limited by Redis single-thread execution (~30–60K ops/s per key); use the hybrid provider for higher throughput
  • Rejection metadata accuracyretry_after_ms and remaining_after_waiting are best-effort estimates
  • Single Redis instance — No built-in Redis Cluster or Sentinel support

Roadmap

Planned for future releases:

  • Metrics and observability hooks — Callbacks or trait-based hooks for integrating with metrics systems (Prometheus, OpenTelemetry, etc.)