Cleanup Loop
Managing stale keys and memory growth.
Trypema evicts expired buckets during normal operations (lazy eviction), but with high key cardinality, stale per-key state can still accumulate.
The cleanup loop proactively evicts keys that have been inactive longer than a configurable threshold (stale_after_ms).
Start the loop
Create your RateLimiter once (usually in an Arc) and start cleanup:
use std::sync::Arc;
use trypema::RateLimiter;

fn start(rl: Arc<RateLimiter>) {
    // Starts the background cleanup loop with the default timing.
    rl.run_cleanup_loop();
}
Defaults and custom timing
Defaults:
- stale_after_ms = 10 minutes
- cleanup_interval_ms = 30 seconds
Override them with run_cleanup_loop_with_config(stale_after_ms, cleanup_interval_ms):
use std::sync::Arc;
use trypema::RateLimiter;

fn start(rl: Arc<RateLimiter>) {
    // Evict keys inactive for 30 minutes, run every 60 seconds.
    rl.run_cleanup_loop_with_config(30 * 60 * 1000, 60 * 1000);
}
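For comparison, here is a minimal sketch of the defaults passed explicitly, assuming the two arguments map directly to the defaults listed above (both in milliseconds):
use std::sync::Arc;
use trypema::RateLimiter;

fn start_with_defaults(rl: Arc<RateLimiter>) {
    // Same timing as run_cleanup_loop(): keys are considered stale after
    // 10 minutes, and a cleanup pass runs every 30 seconds.
    rl.run_cleanup_loop_with_config(10 * 60 * 1000, 30 * 1000);
}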
Lifecycle and ownership
The cleanup loop does not hold a strong reference to the RateLimiter.
Internally it keeps a Weak<RateLimiter>, which means:
- The loop will not keep the limiter alive.
- You can safely drop all Arc<RateLimiter> references.
- Once all strong Arc references are gone, the loop exits by itself.
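A minimal sketch of what that means in practice (the limiter is assumed to be constructed elsewhere; only the ownership behavior is illustrated):
use std::sync::Arc;
use trypema::RateLimiter;

fn run_for_a_while(rl: Arc<RateLimiter>) {
    // The loop only holds a Weak reference, so this Arc is what keeps
    // the limiter alive.
    rl.run_cleanup_loop();

    // ... use the limiter ...

    // Dropping the last strong reference frees the limiter; the cleanup
    // loop fails to upgrade its Weak reference and exits on its own.
    drop(rl);
}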
Provider behavior
Local:
- Cleanup runs in a background thread.
- It evicts inactive keys from the in-process maps.
Redis:
- Normal operations still do lazy eviction of buckets.
- The cleanup loop uses a Redis-side index of active entities (members last seen) and deletes per-entity keys that have gone stale.
Depending on workload and key cardinality, you may still want external Redis monitoring/housekeeping.
Runtime notes
- redis-tokio: Redis cleanup is spawned onto the current Tokio runtime. Call run_cleanup_loop() from within a Tokio runtime context.
- redis-smol: Redis cleanup runs as a detached Smol task and only makes progress if you drive an executor.
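For the redis-tokio case, a sketch of starting the loop from inside an async context (the start_cleanup function name and the fact that the limiter is built elsewhere are assumptions for illustration):
use std::sync::Arc;
use trypema::RateLimiter;

// Call this from code that already runs inside a Tokio runtime,
// for example from a function under #[tokio::main].
async fn start_cleanup(rl: Arc<RateLimiter>) {
    // A Tokio runtime is active here, so the Redis cleanup task can be
    // spawned onto the current runtime.
    rl.run_cleanup_loop();
}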

