8 min read | Saved February 14, 2026
Do you care about this?
This article discusses Alpenglow, a proposed consensus protocol for Solana that aims to improve decision latency and simplify the consensus process. It contrasts Alpenglow with the existing TowerBFT protocol, highlighting its potential for faster finality and enhanced Byzantine fault tolerance.
If you do, here's more
Consensus protocols are essential for state machine replication (SMR) in distributed systems: they let multiple parties agree on a single, consistent log of entries. In permissionless ledgers, these protocols must tolerate Byzantine behavior while maintaining performance. Solana uses TowerBFT for consensus, alongside Proof of History (PoH) for timekeeping and Turbine for block dissemination. While this setup maximizes throughput via pipelining, it also introduces significant decision latency, averaging around 13.2 seconds, because its depth-based commitment rule requires a block to be confirmed at a depth of 32.
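As a rough sanity check on that figure, the depth-based rule can be turned into arithmetic. The 400 ms slot time below is an assumption for illustration (Solana's nominal slot duration), not a number stated in the article:

```python
# Back-of-the-envelope check on the quoted ~13.2 s decision latency.
# Assumption (not stated in the article): a nominal slot time of 400 ms.
SLOT_TIME_MS = 400

def commitment_latency_ms(depth: int, slot_ms: int = SLOT_TIME_MS) -> int:
    """Milliseconds of slot production before a block reaches `depth`."""
    return depth * slot_ms

# A confirmed block extended by 31 more blocks sits at depth 32:
print(commitment_latency_ms(32) / 1000)  # 12.8 s of slot production
# Counting the block's own slot as well (33 slots) gives 13.2 s,
# in line with the average the article reports.
print(commitment_latency_ms(33) / 1000)  # 13.2 s
```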
Alpenglow is a new consensus protocol proposed for Solana. It aims to reduce decision latency while keeping the advantages of continuous pipelining and balanced proposal dissemination. Alpenglow combines an optimistic fast path, used under favorable conditions, with a concurrent slow path for when conditions are not ideal. It targets a resilience bound of 3f + 2p + 1 processes, where f is the maximum number of Byzantine processes tolerated and p bounds the additional faulty, non-Byzantine processes. Under optimal conditions, Alpenglow can reach a decision in a single voting round given at least 3f + p + 1 correct processes.
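The two bounds quoted above can be made concrete with a small sketch. The fault parameters f = p = 20 below are illustrative choices, not figures from the article, and the decision helper is a simplification of the fast path only:

```python
# Illustrative thresholds from the bounds quoted in the article:
#   n = 3f + 2p + 1 total processes, where f bounds Byzantine faults and
#   p bounds additional non-Byzantine faults.
# The fast path finalizes in one round given 3f + p + 1 votes.
def system_size(f: int, p: int) -> int:
    return 3 * f + 2 * p + 1

def fast_quorum(f: int, p: int) -> int:
    return 3 * f + p + 1

def try_fast_finalize(votes: int, f: int, p: int) -> bool:
    """True if one round of voting already suffices to finalize;
    otherwise the concurrent slow path must complete instead."""
    return votes >= fast_quorum(f, p)

# Hypothetical example: tolerate f = 20 Byzantine and p = 20 other faults.
f, p = 20, 20
print(system_size(f, p))   # 101 processes total
print(fast_quorum(f, p))   # 81 votes needed on the fast path (~80%)
```

Note how the fast quorum works out to roughly 80% of the system, while the full membership absorbs the extra 2p term; this is what lets the fast path decide in one round despite both fault classes.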
The article explains the current state of Solana's consensus mechanisms, detailing how TowerBFT operates without clearly stated resilience bounds and how it interacts with Turbine and PoH. TowerBFT's confirmation process relies on accumulating votes, but commitment requires extending a confirmed block by 31 additional blocks, which produces the latency noted above. Turbine propagates data by breaking blocks into shreds, which are sent through a layered structure to ensure efficient communication among validators. This decoupling of data dissemination from consensus aims to maintain high throughput even when consensus slows down.
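The reason Turbine's layered structure scales is that each layer multiplies the number of validators reached, so depth grows logarithmically. The fanout of 200 below is an assumed value for illustration, not a figure taken from the article:

```python
# Sketch of why layered fanout keeps shred propagation shallow.
# Assumption (illustrative): each node that holds a shred forwards it
# to `fanout` peers in the next layer.
def layers_to_reach(validators: int, fanout: int) -> int:
    """Number of forwarding layers needed to cover all validators."""
    layers, reached = 0, 1  # the leader holds the shred initially
    while reached < validators:
        reached *= fanout   # every covered node recruits `fanout` more
        layers += 1
    return layers

# Hypothetical network: 10,000 validators, fanout of 200.
print(layers_to_reach(10_000, 200))  # 2 layers suffice
```

With logarithmic depth, even a large validator set is covered in a couple of hops, which is what allows dissemination to continue at full rate while the consensus layer lags behind.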