Cache System
Comprehensive guide to the SveltyCMS dual-layer caching system (Redis L1, persistent database L2), covering TTL configuration, metrics, and predictive warming.
Last updated: 3/5/2026
Overview
SveltyCMS implements a dual-layer caching system combining Redis (L1: in-memory) with a supported persistent database (L2: persistent storage) to deliver millisecond-level data access. The system features sophisticated TTL (Time-To-Live) management, predictive prefetching, and real-time performance monitoring.
Architecture
Dual-Layer Strategy
Cache Hierarchy
- L1: Redis (Volatile): Ultra-fast storage for session data and hot content. Automatic eviction via LRU.
- L2: Database (Persistent): Cross-restart persistence using the system’s primary database (MongoDB, MariaDB, etc.).
- Source DB: The authoritative source of truth, accessed only on full cache misses.
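The lookup order above can be sketched as a simple read-through helper. This is a minimal sketch, assuming hypothetical `Layer` interfaces; it is not the actual SveltyCMS implementation:

```typescript
// Minimal read-through sketch of the L1 → L2 → source hierarchy.
// The Layer interface is illustrative, not a real SveltyCMS type.
interface Layer<T> {
  get(key: string): Promise<T | undefined>;
  set(key: string, value: T, ttlSeconds: number): Promise<void>;
}

async function readThrough<T>(
  key: string,
  l1: Layer<T>,                       // Redis: volatile, fastest
  l2: Layer<T>,                       // persistent database cache
  loadFromSource: () => Promise<T>,   // authoritative DB, full-miss only
  ttlSeconds: number
): Promise<T> {
  const hot = await l1.get(key);
  if (hot !== undefined) return hot;          // L1 hit

  const warm = await l2.get(key);
  if (warm !== undefined) {
    await l1.set(key, warm, ttlSeconds);      // promote to L1
    return warm;                              // L2 hit
  }

  const fresh = await loadFromSource();       // full cache miss
  await Promise.all([
    l1.set(key, fresh, ttlSeconds),
    l2.set(key, fresh, ttlSeconds)
  ]);
  return fresh;
}
```

Note that an L2 hit backfills L1, so a restarted Redis instance re-warms itself from the persistent layer rather than hammering the source database.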
8 Cache Categories
The system categorizes cache entries to apply specialized TTL strategies:
| Category | Default TTL | Description |
|---|---|---|
| SCHEMA | 10 min | Collection definitions and field schemas |
| WIDGET | 10 min | Widget configurations and settings |
| THEME | 5 min | Design system tokens and UI themes |
| CONTENT | 3 min | Frequently updated collection entries |
| MEDIA | 5 min | File metadata and thumbnail paths |
| SESSION | 24 hours | User authentication and preferences |
| USER | 1 min | User profile and permission data |
| API | 5 min | External service responses and expensive queries |
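The defaults in the table translate to a simple seconds map. The shape below is illustrative; the real settings object may differ:

```typescript
// Default TTLs per category, in seconds (mirrors the table above).
const DEFAULT_TTL_SECONDS: Record<string, number> = {
  schema: 10 * 60,       // 10 min
  widget: 10 * 60,       // 10 min
  theme: 5 * 60,         // 5 min
  content: 3 * 60,       // 3 min — frequently updated, expires fastest
  media: 5 * 60,         // 5 min
  session: 24 * 60 * 60, // 24 hours — long-lived auth state
  user: 60,              // 1 min — permissions must stay fresh
  api: 5 * 60            // 5 min
};
```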
Core Infrastructure
CacheService (cache-service.ts)
The CacheService orchestrates the dual-layer logic and integrates with the system settings for dynamic TTL updates.
export enum CacheCategory {
  SCHEMA = 'schema',
  WIDGET = 'widget',
  THEME = 'theme',
  CONTENT = 'content',
  MEDIA = 'media',
  SESSION = 'session',
  USER = 'user',
  API = 'api'
}

// Example usage
await cacheService.setWithCategory('user:123', userData, CacheCategory.USER, tenantId);
Cache Metrics (CacheMetrics.ts)
Real-time monitoring tracks hit rates and latency across all categories and tenants.
- Hits/Misses: Tracked per category and tenant.
- Hit Rate: Overall efficiency target > 90%.
- Response Time: Continuous monitoring of L1/L2 overhead.
- Prometheus Export: Native support for Grafana dashboards.
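Per-category, per-tenant hit-rate tracking can be sketched as a small counter. This is an assumption about the bookkeeping, not the real `CacheMetrics` API:

```typescript
// Illustrative hit/miss counter keyed by (category, tenant).
class HitRateTracker {
  private hits = new Map<string, number>();
  private misses = new Map<string, number>();

  private key(category: string, tenantId: string): string {
    return `${category}:${tenantId}`;
  }

  recordHit(category: string, tenantId: string): void {
    const k = this.key(category, tenantId);
    this.hits.set(k, (this.hits.get(k) ?? 0) + 1);
  }

  recordMiss(category: string, tenantId: string): void {
    const k = this.key(category, tenantId);
    this.misses.set(k, (this.misses.get(k) ?? 0) + 1);
  }

  // Hit rate in [0, 1]; the overall efficiency target is > 0.9.
  hitRate(category: string, tenantId: string): number {
    const k = this.key(category, tenantId);
    const h = this.hits.get(k) ?? 0;
    const m = this.misses.get(k) ?? 0;
    return h + m === 0 ? 0 : h / (h + m);
  }
}
```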
Predictive Warming (CacheWarmingService.ts)
Proactive caching reduces “cold start” latency:
- Startup Warming: Pre-loads schemas and system settings.
- Predictive Prefetching: When a user is fetched, the system automatically prefetches their permissions and roles.
- Background Execution: Warming tasks run in parallel without blocking user requests.
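Predictive prefetching can be sketched as a fire-and-forget warm-up attached to the user read path. The loader parameters below are hypothetical, not the actual service signatures:

```typescript
// Sketch: reading a user triggers background warming of the caches
// that are statistically likely to be read next (permissions, roles).
async function getUserWithPrefetch(
  userId: string,
  loadUser: (id: string) => Promise<unknown>,
  loadPermissions: (id: string) => Promise<unknown>,
  loadRoles: (id: string) => Promise<unknown>
): Promise<unknown> {
  const user = await loadUser(userId);
  // Fire-and-forget: warming runs in parallel and must never
  // block or fail the user-facing request.
  void Promise.allSettled([loadPermissions(userId), loadRoles(userId)]);
  return user;
}
```

`Promise.allSettled` is used rather than `Promise.all` so a failed warming task cannot surface as an unhandled rejection.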
Implementation Reference
Cache Store Methods
import { cacheService } from '@src/databases/cache-service';
// Set cache value with category (recommended)
await cacheService.setWithCategory('key', data, CacheCategory.API);
// Set with explicit TTL (seconds)
await cacheService.set('key', data, 3600);
// Pattern invalidation
await cacheService.deletePattern('query:collections:*');
// Category invalidation
await cacheService.invalidateCategory(CacheCategory.SCHEMA);
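The glob-style key matching behind `deletePattern` can be illustrated with a small in-memory stand-in (a sketch, not the actual implementation, which runs against Redis and the database layer):

```typescript
// Minimal '*'-glob pattern invalidation over an in-memory store,
// mirroring deletePattern('query:collections:*'). Illustrative only.
function deletePattern(store: Map<string, unknown>, pattern: string): number {
  // Escape regex metacharacters, then turn '*' into a '.*' wildcard.
  const re = new RegExp(
    '^' +
      pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*') +
      '$'
  );
  let removed = 0;
  for (const key of [...store.keys()]) {
    if (re.test(key)) {
      store.delete(key);
      removed++;
    }
  }
  return removed;
}
```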
Monitoring & Debugging
Admin users can monitor cache performance via System Settings > Cache:
- Cache Stats: View real-time hit/miss ratios.
- TTL Settings: Adjust category lifespans without restarting.
- Clear Cache: Manual flush for maintenance.
Performance Benchmarks
Typical production metrics (steady state):
| Operation | Cached Latency (L1/L2) |
|---|---|
| Get User | < 10 ms |
| List Collections | ~1.2 ms |
| Search Content | ~2.5 ms |
| Dashboard Load | ~5 ms |
Related Documentation
- Authentication System - Session caching details
- Database Methods - Low-level DB interactions
- Multi-Tenancy - Tenant isolation in cache keys