In the following post I discuss transient storage as one type of resource that is capped per block. The proposed method applies to other resources as well, e.g., persistent_storage_mass and compute_mass.
Prepaid Transient Storage: A Proposal
Fullnodes on commodity hardware require strict transient storage limits. Currently, a 200GB cap per pruning epoch (~42 hours) is enforced. The enforcement mechanism uniformly distributes this cap across blocks, allocating an equal fraction of storage per block. This approach limits flexibility and prevents occasional large transactions from utilizing excess available storage, even when the global transient storage cap is not exceeded.
Problem with the current enforcement mechanism
The current mechanism enforces a fixed per-block limit:
transient_storage_mass_epoch_cap / total_blocks_per_epoch
This method prevents miners from accommodating natural fluctuations in transaction size. If some blocks use less than their allocated storage, the excess cannot be transferred to other blocks. As a result, transactions requiring more storage than a single block’s allocation are not feasible, even when total transient storage remains within the global cap.
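To make the limitation concrete, here is a rough back-of-the-envelope calculation. The 200 GB cap and ~42-hour epoch are from above; the block rates (1 bps today, an upcoming 10 bps) are illustrative assumptions:

```python
# Rough per-block allocation under the current uniform enforcement.
EPOCH_CAP_BYTES = 200 * 10**9  # 200 GB per pruning epoch
EPOCH_SECONDS = 42 * 3600      # ~42 hours

def per_block_cap(blocks_per_second: float) -> float:
    """transient_storage_mass_epoch_cap / total_blocks_per_epoch"""
    total_blocks = EPOCH_SECONDS * blocks_per_second
    return EPOCH_CAP_BYTES / total_blocks

print(per_block_cap(1))   # ≈ 1.32e6 bytes (~1.32 MB per block at 1 bps)
print(per_block_cap(10))  # ≈ 1.32e5 bytes (~132 KB per block at 10 bps)
```

Note that at these orders of magnitude, a transaction of a few hundred KB already exceeds a single block's uniform allocation.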
Example use case 1: supporting high peak txn demand (aka elastic throughput)
If a miner expects high transaction demand within an epoch, such as during peak hours, or needs to guarantee block space for users who have prepaid for transaction approval, they can mine underutilized blocks at the beginning of an epoch. This reserves transient storage for future blocks within the same epoch, ensuring that miners can handle anticipated transaction surges efficiently and allocate block space as needed. (Implicitly, I’m assuming here the method is applied to compute mass as well.)
The idea to support elastic throughput came to me from Gregory Maxwell and Meni Rosenfeld’s elastic block cap ideas (around the big block debates), https://bitcointalk.org/index.php?topic=1078521.msg11517847#msg11517847. The gap between max capacity and peak capacity is of greater importance in Bitcoin vs Kaspa, since the former (supposedly) employs no pruning. Practically though, Kaspa’s pruning epoch of ~42 hours corresponds to more than 4 months’ worth of data growth in Bitcoin. In short, the peak-vs-average gap is relevant to Kaspa within a pruning epoch as well.
Example use case 2: supporting native STARK rollups
A zk rollup entity may seek to implement native STARK verification using arithmetic field operations. Unlike SNARKs, STARKs do not require a trusted setup and offer quantum resistance, making them especially attractive for zk rollup infrastructure. However, STARK proofs and verifier scripts are significantly larger than their SNARK counterparts, potentially exceeding a few hundred KB. Under the current enforcement mechanism, such proofs may not fit within a single block, making STARK-based rollups cumbersome or requiring them to go through a SNARK reduction, which is a legitimate construction, though it slightly compromises the trustless property (it’s not too bad, since the PLONK SNARK setup is universal and updatable).
Proposed Solution: prepaid transient storage
To enable occasional publication of large blocks while maintaining the global transient storage cap, miners should be able to accumulate transient storage credits by underutilizing previous blocks. The term credit is used metaphorically here, as an accounting notion within the consensus/fullnode logic.
When a miner produces a block B, the transient storage consumed is recorded as transient_storage_mass(B). If transient_storage_mass(B) < transient_storage_mass_cap, the difference is stored as credit. In a future block X, the miner may prove via digital signature that it is the miner of block B, and utilize:

transient_storage_mass_cap + (transient_storage_mass_cap - transient_storage_mass(B))

Generalizing this across multiple previously mined blocks B_1, ..., B_n, the total allowable transient storage in block X is:

C = (n+1) * transient_storage_mass_cap - Σ transient_storage_mass(B_i)
The full node then charges the usage in excess of X’s own cap proportionally against the previously mined blocks, consuming their credits:

transient_storage_mass(B_i) += (transient_storage_mass(X) - transient_storage_mass_cap) / n
This mechanism enables miners to accumulate storage credits and later use them for transactions requiring more storage in a single block, ensuring better resource allocation while adhering to the global cap.
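The accounting above can be sketched as follows. The names (CreditLedger, record_block) are hypothetical, and a real fullnode would also verify the miner’s signature over the credited blocks; this is only a minimal model of the bookkeeping:

```python
# Sketch of the prepaid-credit accounting, in arbitrary mass units.
CAP = 1_000  # per-block transient storage cap (transient_storage_mass_cap)

class CreditLedger:
    def __init__(self):
        self.mass = {}  # recorded transient_storage_mass per block id

    def record_block(self, block_id, used, credit_blocks=()):
        """Accept a block consuming `used` mass, optionally drawing
        credit from previously mined blocks `credit_blocks`."""
        n = len(credit_blocks)
        # C = (n+1)*cap - sum of the prior blocks' recorded masses
        allowance = (n + 1) * CAP - sum(self.mass[b] for b in credit_blocks)
        if used > allowance:
            raise ValueError("block exceeds its prepaid allowance")
        # the block itself is recorded at no more than the per-block cap...
        self.mass[block_id] = min(used, CAP)
        # ...and the excess is charged proportionally against the credits
        excess = max(0, used - CAP)
        for b in credit_blocks:
            self.mass[b] += excess / n

    def reset_epoch(self):
        # post-pruning, all credits are zeroed (see Notes)
        self.mass.clear()

ledger = CreditLedger()
ledger.record_block("B1", used=200)  # leaves 800 units of credit
ledger.record_block("B2", used=300)  # leaves 700 units of credit
# block X may use up to 3*1000 - (200 + 300) = 2500 units
ledger.record_block("X", used=2_400, credit_blocks=("B1", "B2"))
```

After block X, the recorded masses of B1 and B2 rise toward the cap, so the same credit cannot be spent twice.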
Elastic throughput and DAGKNIGHT (DK)
Recall that larger blocks propagate slower, which widens the DAG. Fortunately, the DK protocol can handle dynamic DAG widths by readjusting the parameter k in real time. Even with DK, some hard cap on individual block sizes must be applied, e.g., each block shouldn’t exceed 2 MB.
Solo miners and the prepaid approach
One not accustomed to Kaspa’s high block creation rate – an upcoming 10 blocks per second – might find this whole approach of prepaid block space awkward. However, at 10 bps and beyond, the mining market is likely to change and adjust. In particular, some service providers – e.g., wallets, rollup/prover teams – may find it profitable either to mine only (or primarily) their own users’ txns, or to enter agreements with existing miners. This gives rise to the notion that mined blocks – and the economics of mining txns – will sometimes reflect the needs of specific entities and sectors, alongside ordinary generic mining nodes. (All of the above describes offchain economics; no entity receives privileged treatment from consensus’ POV.)
Notes
- Emphasizing again that this approach can be applied to other resource constraints as well, such as persistent storage and compute mass limits. Though, it seems particularly relevant for transient storage.
- One caveat of this approach is that, by providing proof of mining of previous blocks, miners link their mined blocks, thereby reducing their anonymity. However, most miners seem to not actively conceal their identity, so this is unlikely to be a significant issue.
- After pruning, all blocks’ credits must be zeroed, because the global transient storage cap is defined and enforced per pruning epoch (thank you @coderofstuff for this comment and general proofreading).