# Caching
TensorDB maintains two LRU caches to minimize disk I/O.
## Block Cache
Caches decompressed SSTable data blocks in memory.
- Eviction: LRU (Least Recently Used)
- Default size: 32MB (`block_cache_bytes`)
- Scope: Shared across all shards
When a data block is read from disk, it’s stored in the block cache. Subsequent reads of the same block are served from memory.
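The read-through behavior above can be sketched as a byte-budgeted LRU map. This is an illustrative model, not TensorDB's actual implementation; the class and method names are assumptions:

```python
from collections import OrderedDict

class BlockCache:
    """Sketch of a byte-budgeted LRU block cache (illustrative only)."""

    def __init__(self, capacity_bytes=32 * 1024 * 1024):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self._blocks = OrderedDict()  # (sstable_id, offset) -> decompressed bytes

    def get(self, key):
        block = self._blocks.get(key)
        if block is not None:
            self._blocks.move_to_end(key)  # mark most recently used
        return block

    def put(self, key, block):
        if key in self._blocks:
            self.used_bytes -= len(self._blocks.pop(key))
        self._blocks[key] = block
        self.used_bytes += len(block)
        # Evict least-recently-used blocks until back under budget.
        while self.used_bytes > self.capacity_bytes and self._blocks:
            _, evicted = self._blocks.popitem(last=False)
            self.used_bytes -= len(evicted)
```

A reader would call `get` first and, on a miss, read and decompress the block from disk, then `put` it so later reads hit memory.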
## Index Cache
Caches SSTable block indexes for fast key-to-block mapping.
- Eviction: LRU
- Default entries: 1024 (`index_cache_entries`)
- Scope: Shared across all shards
Each SSTable has a block index that maps key ranges to block offsets. Caching these indexes avoids re-reading them from disk.
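A block index like the one described can be modeled as a sorted list of each block's first key, searched with binary search. This sketch is illustrative and does not reflect TensorDB's on-disk index format:

```python
import bisect

class BlockIndex:
    """Sketch of an SSTable block index: maps a lookup key to the offset of
    the block that may contain it (illustrative only)."""

    def __init__(self, entries):
        # entries: list of (first_key, block_offset), sorted by first_key
        self.first_keys = [key for key, _ in entries]
        self.offsets = [offset for _, offset in entries]

    def locate(self, key):
        # Find the last block whose first key is <= the lookup key.
        i = bisect.bisect_right(self.first_keys, key) - 1
        return self.offsets[i] if i >= 0 else None
```

Caching a `BlockIndex` per SSTable means each point lookup costs one in-memory binary search instead of a disk read of the index.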
## Cache Hit Monitoring
The block cache tracks hit and miss counts with atomic counters. Use SHOW STATS to see the overall cache hit rate:
```sql
SHOW STATS;
-- Returns rows including:
-- {"metric": "cache_hit_rate", "value": "0.8517"}
-- {"metric": "cache_hits", "value": "12345"}
-- {"metric": "cache_misses", "value": "2150"}
-- {"metric": "cache_bytes", "value": "16777216"}
-- {"metric": "cache_entries", "value": "1024"}
```

You can also see per-query cache statistics via EXPLAIN ANALYZE:
```sql
EXPLAIN ANALYZE SELECT * FROM users WHERE id = 'u1';
-- cache_hits: 2
```

Per-query counts make it easy to see how much of an individual query was served from memory rather than disk.
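The hit-rate metric above is derived from the hit and miss counters. A minimal sketch of such thread-safe counters, using a lock where TensorDB uses atomic counters (names are illustrative):

```python
import threading

class CacheStats:
    """Sketch of thread-safe hit/miss counters behind cache_hit_rate
    (illustrative; the real system uses atomic counters)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        with self._lock:
            if hit:
                self.hits += 1
            else:
                self.misses += 1

    def hit_rate(self):
        with self._lock:
            total = self.hits + self.misses
            return self.hits / total if total else 0.0
```

The rate is simply `hits / (hits + misses)`; a rate near 1.0 means the working set fits in cache.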
## Configuration
| Parameter | Default | Description |
|---|---|---|
| `block_cache_bytes` | 32MB | Total block cache budget |
| `index_cache_entries` | 1024 | Maximum cached index blocks |
## Tuning
- Read-heavy workloads: Increase `block_cache_bytes` to fit your working set
- Many SSTables: Increase `index_cache_entries` to avoid re-reading indexes
- Memory-constrained: Reduce cache sizes; bloom filters will compensate
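One way to apply the read-heavy guidance is to size the cache from an estimate of the hot working set. This helper is hypothetical (not a TensorDB API), and the 1.2x headroom factor is an assumption:

```python
def suggested_block_cache_bytes(working_set_bytes, headroom=1.2):
    """Hypothetical sizing helper: budget the block cache to hold the hot
    working set plus some headroom, never going below the 32MB default."""
    default_bytes = 32 * 1024 * 1024
    return max(default_bytes, int(working_set_bytes * headroom))
```

For example, a 100MB hot set would suggest a budget of roughly 120MB, while a tiny working set falls back to the 32MB default.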