Description
Is there an existing issue that is already proposing this?
- [x] I have searched the existing issues
Is your feature request related to a problem? Please describe it
When using the cacheable package with nonBlocking: true for a two-layer cache architecture (in-memory primary + Redis secondary), there is no way to pass the configured Cacheable instance to CacheModule while preserving the nonBlocking behavior.
Current Behavior
The CacheModule.registerAsync() options accept either:
- store: <single store> - a single store
- stores: [<array of stores>] - multiple independent stores
When passing stores: [cacheable.primary, cacheable.secondary], NestJS receives two separate Keyv stores and manages them independently. The nonBlocking logic from the Cacheable wrapper is completely ignored.
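Conceptually, the behavior that gets lost looks like this (a simplified sketch, not the actual cache-manager source; `StoreLike` and `multiStoreSet` are illustrative names):

```ts
// Simplified illustration of the multi-store write path: with
// stores: [primary, secondary], every write awaits ALL stores,
// so the Redis round-trip is always on the caller's critical path.
type StoreLike = { set(key: string, value: unknown): Promise<void> };

async function multiStoreSet(stores: StoreLike[], key: string, value: unknown) {
  // Each store is awaited; nothing is fire-and-forget.
  await Promise.all(stores.map((s) => s.set(key, value)));
}
```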
```ts
import { CacheModule } from '@nestjs/cache-manager';
import { Cacheable, CacheableMemory } from 'cacheable';
import KeyvRedis from '@keyv/redis';
import { Keyv } from 'keyv';

// This configuration works perfectly with nonBlocking
const cacheable = new Cacheable({
  primary: new Keyv({ store: new CacheableMemory({ ttl: '1h', lruSize: 10000 }) }),
  secondary: new Keyv({ store: new KeyvRedis('redis://localhost:6379') }),
  nonBlocking: true, // Reads from primary only, writes to Redis in background
  ttl: '7d'
});

// But there's no way to pass this to CacheModule while preserving nonBlocking
CacheModule.registerAsync({
  useFactory: () => ({
    // Option 1: Doesn't work - Cacheable doesn't implement the Store interface
    store: cacheable,
    // Option 2: Works but loses nonBlocking - NestJS manages the stores independently
    stores: [cacheable.primary, cacheable.secondary],
  })
})
```

Reference implementation: https://github.com/ever-co/ever-gauzy/pull/9172/files#diff-4cc151d269bd6f576784a7ebc953ca859223d6d1267b813497271f9b213f762fL277
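In the meantime, one conceivable workaround is a thin adapter that exposes the Cacheable instance through the Map-like store contract Keyv accepts, so it could be passed as `stores: [new Keyv({ store: cacheableAsKeyvStore(cacheable) })]`. This is only a sketch: `cacheableAsKeyvStore` is a hypothetical helper, and `CacheableLike` is a structural type assumed to match the cacheable package's documented get/set/delete/clear signatures:

```ts
// Minimal structural type for the parts of Cacheable we touch
// (assumption: matches the cacheable package's public API).
interface CacheableLike {
  get<T>(key: string): Promise<T | undefined>;
  set(key: string, value: unknown, ttl?: number | string): Promise<boolean>;
  delete(key: string): Promise<boolean>;
  clear(): Promise<void>;
}

// Hypothetical adapter: wraps a Cacheable instance so that every
// read/write flows through Cacheable's nonBlocking logic instead of
// two independently managed Keyv stores.
function cacheableAsKeyvStore(cacheable: CacheableLike) {
  return {
    get: (key: string) => cacheable.get(key),
    set: (key: string, value: unknown, ttl?: number) => cacheable.set(key, value, ttl),
    delete: (key: string) => cacheable.delete(key),
    clear: () => cacheable.clear(),
  };
}
```

This is untested against cache-manager's serialization layer and is only meant to show the shape of the missing hook, not a supported pattern.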
Environment
- @nestjs/cache-manager: 3.0.1
- cache-manager: 6.4.2
- cacheable: 2.1.1
- @keyv/redis: 5.1.3
- keyv: 5.5.3

Describe the solution you'd like
Expected Behavior
CacheModule should support passing a Cacheable instance directly, or provide a way to configure nonBlocking mode for multi-layer caching.
Use Case
2-layer non-blocking cache:
✅ Read: From primary (in-memory) only - ultra fast
✅ Write: Primary in sync + Redis in background (non-blocking) - no latency
This is critical for high-performance applications where Redis latency shouldn't block operations.
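The write path described by the bullets above can be sketched as follows (illustrative only; `nonBlockingSet` and `AsyncStore` are hypothetical names, not the cacheable package's internals):

```ts
// Sketch of a non-blocking two-layer write: the primary (memory) write
// is awaited, while the secondary (Redis) write is fired and left to
// settle in the background so its latency never blocks the caller.
type AsyncStore = {
  get(key: string): Promise<unknown>;
  set(key: string, value: unknown): Promise<void>;
};

async function nonBlockingSet(primary: AsyncStore, secondary: AsyncStore, key: string, value: unknown) {
  await primary.set(key, value); // sync: in-memory, ~0.1ms
  // background: errors are swallowed here for brevity; a real
  // implementation would log or retry them
  secondary.set(key, value).catch(() => {});
}
```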
Teachability, documentation, adoption, migration strategy
Users would be able to pass a Cacheable instance directly to CacheModule:
```ts
import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager';
import { Cacheable, CacheableMemory } from 'cacheable';
import KeyvRedis from '@keyv/redis';
import { Keyv } from 'keyv';

@Module({
  imports: [
    CacheModule.registerAsync({
      useFactory: () => {
        const cacheable = new Cacheable({
          primary: new Keyv({ store: new CacheableMemory({ ttl: '1h', lruSize: 10000 }) }),
          secondary: new Keyv({ store: new KeyvRedis('redis://localhost:6379') }),
          nonBlocking: true, // Reads from primary only, writes to Redis in background
          ttl: '7d'
        });
        return {
          store: cacheable // Pass the Cacheable instance directly
        };
      }
    })
  ]
})
export class AppModule {}
```

What is the motivation / use case for changing the behavior?
In high-performance applications, Redis latency should not block critical operations. The current multi-store implementation writes to all stores synchronously, meaning every cache write waits for Redis to respond (typically 1-5ms, but it can spike to 50-100ms under load or network issues).
Concrete Use Case
The application (ever-gauzy) is an open-source Business Management Platform with:
- High-frequency API requests (employee data, time tracking, projects)
- Multi-tenant architecture with distributed instances
- Need for both a fast local cache AND a distributed Redis cache
Current problem:
```ts
// With stores: [primary, secondary]
await cacheManager.set('key', data); // ❌ Waits for Redis (1-5ms latency)
```

What we need:

```ts
// With Cacheable nonBlocking: true
await cacheable.set('key', data); // ✅ Writes to memory instantly, Redis updated in background
```

Performance Impact
- Without nonBlocking: every cache write adds 1-5ms of latency (or more if Redis is slow or unavailable)
- With nonBlocking: cache writes are near-instant (~0.1ms), and Redis is updated asynchronously
- At scale: at 1,000 cached writes/sec, this removes 1-5 seconds of cumulative write latency every second
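The "at scale" figure above is just the per-write Redis latency multiplied across the request rate; a back-of-envelope sketch (the constants come from the estimates above, not from measurements):

```ts
// Cumulative write latency avoided per second of traffic when the
// Redis round-trip (1-5ms per write) is moved off the critical path.
const requestsPerSec = 1000;
const redisWriteMsLow = 1;  // typical fast round-trip
const redisWriteMsHigh = 5; // typical slow round-trip

const savedLowSec = (requestsPerSec * redisWriteMsLow) / 1000;   // 1s of cumulative latency
const savedHighSec = (requestsPerSec * redisWriteMsHigh) / 1000; // 5s of cumulative latency

console.log(`~${savedLowSec}-${savedHighSec}s of cumulative write latency avoided per second of traffic`);
```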