
Commit 5cf86b9

Secondary WAL failover store must be durable
Fixes DOC-14018
1 parent c021c80 commit 5cf86b9

File tree

1 file changed: +6 -0 lines changed


src/current/v25.3/wal-failover.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -470,6 +470,12 @@ Store _A_ will failover to store _B_, store _B_ will failover to store _C_, and
 
 However, the WAL failback operation will not cascade back until **all drives are available** - that is, if store _A_'s disk unstalls while store _B_ is still stalled, store _C_ will not failback to store _A_ until _B_ also becomes available again. In other words, _C_ must failback to _B_, which must then failback to _A_.
 
+### 13. Can I use an ephemeral disk for the secondary storage device?
+
+No, the secondary (failover) disk **must be durable and retain its data across VM or instance restarts**. Using an ephemeral volume (for example, the root volume of a cloud VM that is recreated on reboot) risks permanent data loss: if CockroachDB has failed over recent WAL entries to that disk and the disk is subsequently wiped, the node will start up with an incomplete [Raft log]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) and will refuse to join the cluster. In this scenario the node must be treated as lost and replaced.
+
+Always provision the failover disk with the same persistence guarantees as the primary store.
+
 ## Video demo: WAL failover
 
 For a demo of WAL Failover in CockroachDB and what happens when you enable or disable it, play the following video:
```
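
A minimal sketch of what this guidance looks like in practice, assuming a single-store node and the `--wal-failover=path=<durable-path>` form documented on this page (multi-store nodes use `--wal-failover=among-stores`); the device name, mount point, and placeholder addresses below are hypothetical:

```shell
# Hypothetical: format and mount a durable (persistent) block device for the
# failover target -- never an ephemeral or root volume that is wiped on reboot.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/cockroach-wal-failover
sudo mount -o defaults /dev/sdb /mnt/cockroach-wal-failover

# Start the node pointing WAL failover at the durable mount (single-store form;
# a multi-store node would instead pass --wal-failover=among-stores).
cockroach start \
  --store=/mnt/cockroach-data \
  --wal-failover=path=/mnt/cockroach-wal-failover \
  --join=<node1>,<node2>,<node3> \
  --advertise-addr=<this-node-addr> \
  --certs-dir=certs
```

Because both the primary store and the failover path sit on persistent volumes here, any WAL entries written to the failover path survive a VM or instance restart, which is the persistence guarantee the new FAQ entry requires.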
