Zenoh Journal Backend Builds Persistent Sequence
A new Zenoh backend writes every change as a permanent, sequenced entry rather than overwriting the latest value. No more lost history. This crate lays the groundwork: it adds a workspace member that wraps RocksDB with durable history persistence, using a global sequence counter that survives restarts so every entry remains retrievable. Each mutation gets its own slot in a time-ordered archive. The structure is deliberately lean: no heavy storage logic yet, just scaffolding that hooks into the backend trait system and RocksDB's column families with big-endian sequence keys. For developers, this means richer audit trails without rewriting core storage: plug in and trust that the sequence survives crashes. The sequence is tracked globally, not per key, so subscribing to a key prefix yields all matching entries in order. The goal is not just storing data but preserving context, one chronological step at a time.
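To make the big-endian sequence-key idea concrete, here is a minimal std-only sketch. A `BTreeMap` stands in for a RocksDB column family (an assumption for illustration; both iterate keys in lexicographic byte order), and `seq_key` is a hypothetical helper showing why big-endian encoding makes byte order match numeric order.

```rust
use std::collections::BTreeMap;

/// Encode a sequence number as an 8-byte big-endian key, so that
/// lexicographic byte order equals numeric order.
fn seq_key(seq: u64) -> [u8; 8] {
    seq.to_be_bytes()
}

fn main() {
    // BTreeMap as a stand-in for a RocksDB column family: both
    // iterate entries in lexicographic key-byte order.
    let mut journal: BTreeMap<[u8; 8], String> = BTreeMap::new();

    // Insert out of numeric order on purpose.
    for seq in [300u64, 2, 1000, 7] {
        journal.insert(seq_key(seq), format!("entry #{seq}"));
    }

    // Iteration comes back in numeric order thanks to big-endian keys.
    let seqs: Vec<u64> = journal.keys().map(|k| u64::from_be_bytes(*k)).collect();
    assert_eq!(seqs, vec![2, 7, 300, 1000]);
    println!("{seqs:?}"); // → [2, 7, 300, 1000]
}
```

Little-endian keys would break this property (e.g. 256 would sort before 1), which is why the crate standardizes on big-endian.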
This backend sets a clear contract: durable, sequential history backed by RocksDB. It returns a volume with Capability { persistence: Durable, history: All }, signaling durable storage with full history retention. The sequence counter itself lives in RocksDB, stored in its own column family as big-endian bytes so that byte order matches numeric order. This design choice keeps the code simple, the tests reliable, and the behavior predictable for downstream consumers who expect append-only logs. There is no real storage logic here yet, just the skeleton that enables durability and sequencing.
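The capability contract can be sketched as follows. Note these enums and the `Capability` struct are simplified stand-ins mirroring the names mentioned above; the real definitions come from Zenoh's backend trait crate, and `journal_capability` is a hypothetical helper, not the crate's actual API.

```rust
// Simplified stand-ins for the capability types named in the text.
// The real types live in Zenoh's backend-traits crate; this only
// illustrates what the journal volume advertises.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Persistence { Volatile, Durable }

#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum History { Latest, All }

#[derive(Debug, PartialEq)]
struct Capability { persistence: Persistence, history: History }

/// What a journal volume would advertise: durable storage, full history.
fn journal_capability() -> Capability {
    Capability { persistence: Persistence::Durable, history: History::All }
}

fn main() {
    let cap = journal_capability();
    assert_eq!(cap.persistence, Persistence::Durable);
    assert_eq!(cap.history, History::All);
    println!("capability: {cap:?}");
}
```

Advertising `history: All` is what distinguishes a journal from a latest-value store: consumers can rely on every mutation being queryable, not just the most recent one.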
This mirrors a broader trend toward systems built to withstand failure, where every action leaves a trace. Like a timestamped journal, this backend lets applications see not just what changed, but when, and in what order. The sequence counter survives process crashes because it is written to disk, not held only in memory. That is resilience in a box.
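The crash-survival claim can be demonstrated with a small sketch. Here a flat file stands in for the RocksDB column family the backend would actually use (an assumption for illustration), and `SeqCounter` is a hypothetical type: the key property is that the counter is persisted before a sequence number is handed out, so a restart resumes without reusing numbers.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// A sequence counter that survives restarts: every allocation is
/// flushed to disk before the number is handed out.
struct SeqCounter { path: PathBuf, next: u64 }

impl SeqCounter {
    /// Open the counter, resuming from the persisted value if present.
    fn open(path: &Path) -> std::io::Result<Self> {
        let next = match fs::read(path) {
            Ok(bytes) if bytes.len() == 8 => u64::from_be_bytes(bytes.try_into().unwrap()),
            _ => 0, // fresh store: start at sequence 0
        };
        Ok(SeqCounter { path: path.to_path_buf(), next })
    }

    /// Allocate the next sequence number, persisting before returning.
    fn next_seq(&mut self) -> std::io::Result<u64> {
        let seq = self.next;
        self.next += 1;
        fs::write(&self.path, self.next.to_be_bytes())?; // durability point
        Ok(seq)
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("journal_seq_demo");
    let _ = fs::remove_file(&path); // start from a clean state

    let mut c = SeqCounter::open(&path)?;
    assert_eq!(c.next_seq()?, 0);
    assert_eq!(c.next_seq()?, 1);
    drop(c); // simulate a process crash / restart

    let mut c = SeqCounter::open(&path)?;
    assert_eq!(c.next_seq()?, 2); // resumed from disk, no reuse
    fs::remove_file(&path)?;
    println!("counter resumed after restart");
    Ok(())
}
```

In the real backend, RocksDB's write-ahead log plays the role of this flush, but the contract is the same: a sequence number is never observable before it is durable.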
One assumption is worth stating explicitly: sequence numbers grow globally, not per key. Entries for different keys interleave cleanly in a single total order, with no clashes. The declare_plugin! macro registers the crate with the plugin system, hooking it into the workspace as a backend. The stubbed Storage impl compiles against StorageConfig, showing the trait contract holds. Unit tests verify that sequential writes produce distinct entries and that the counter resumes after a simulated restart, demonstrating that persistence works. CI checks ensure the build compiles, and workspace integration locks it in place. This isn't just code; it's a promise that your data stays intact, one entry at a time.
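The global-counter behavior described above can be sketched like this. `Journal`, `put`, and `replay_prefix` are hypothetical names for illustration, and a `BTreeMap` again stands in for the RocksDB journal column family; the point is that all keys draw from one counter, so a prefix subscriber replays everything in a single total order.

```rust
use std::collections::BTreeMap;

/// Toy journal: one global counter sequencing entries for all keys.
struct Journal {
    next_seq: u64,
    entries: BTreeMap<[u8; 8], (String, String)>, // seq key -> (key, value)
}

impl Journal {
    fn new() -> Self { Journal { next_seq: 0, entries: BTreeMap::new() } }

    /// Append a mutation; every key draws from the same global counter.
    fn put(&mut self, key: &str, value: &str) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.entries.insert(seq.to_be_bytes(), (key.into(), value.into()));
        seq
    }

    /// Replay all entries whose key starts with `prefix`, in sequence order.
    fn replay_prefix(&self, prefix: &str) -> Vec<(u64, String)> {
        self.entries.iter()
            .filter(|(_, (k, _))| k.starts_with(prefix))
            .map(|(s, (_, v))| (u64::from_be_bytes(*s), v.clone()))
            .collect()
    }
}

fn main() {
    let mut j = Journal::new();
    j.put("demo/a", "1");  // seq 0
    j.put("demo/b", "2");  // seq 1
    j.put("other/x", "3"); // seq 2
    j.put("demo/a", "4");  // seq 3

    // Keys interleave, but the prefix replay preserves one total order:
    // the "demo/" subscriber sees sequences 0, 1, 3.
    let replay = j.replay_prefix("demo/");
    let seqs: Vec<u64> = replay.iter().map(|(s, _)| *s).collect();
    assert_eq!(seqs, vec![0, 1, 3]);
    println!("{replay:?}");
}
```

A per-key counter would instead produce independent sequences (0, 1 for `demo/a` and 0 for `demo/b`), losing the cross-key ordering that prefix subscribers rely on.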
Is your backend ready to remember every change? When will your system treat history not as an afterthought, but as core architecture?