When a block in a DAG is mined on top of a very old state, by some faulty or malicious miner, validating it might consume too many resources. Basically, validating a block amounts to rolling back the current state, undoing all updates that happened outside its context (:= the set of blocks in its past), and checking whether its own txns are consistent with the UTXO set of its context. If the block is mined on an old part of the DAG, this process is prohibitively expensive. This problem came to be known among the core devs as the twelve minutes problem, a much better and more descriptive name than the opaque "CPU attack".
To be clear, this is not an attack on consensus, as this outlier block wouldn't cause a reorganization of the ordering of events (assuming you use a secure DAG ordering algorithm such as GHOSTDAG). But our insistence on checking that every block in the DAG is valid implies that we must roll back and effectively simulate a reorg in order to calculate the UTXO set per the context of each and every block, even though we are only interested in the UTXO set of the entire DAG (aka of the virtual block of the DAG); this is both inefficient and opens up a CPU attack vector, even for attackers with a very small hash rate.
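To make the cost concrete, here is a toy sketch of the naive rollback-based validation described above. Everything in it is an illustrative assumption, not the actual node implementation: the simplified UTXO model, the helper names (`past`, `undo_block`, `utxo_of_context`), and the use of a full topological order. The point is only that computing the UTXO set of an old context means undoing nearly the whole DAG.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    txid: str
    inputs: tuple   # outpoints spent, each a (txid, index) pair
    outputs: int    # number of outputs created

@dataclass(frozen=True)
class Block:
    name: str
    parents: tuple  # names of parent blocks
    txs: tuple = ()

def past(name, blocks):
    """The block's context: every block reachable from its parents."""
    seen, stack = set(), list(blocks[name].parents)
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(blocks[b].parents)
    return seen

def undo_block(utxo, block):
    """Reverse the UTXO updates a block made (toy model: assumes it applied cleanly)."""
    for tx in block.txs:
        for i in range(tx.outputs):
            utxo.discard((tx.txid, i))
        utxo.update(tx.inputs)

def utxo_of_context(name, blocks, topo_order, virtual_utxo):
    """Roll the current (virtual) UTXO set back to the state of `name`'s past."""
    ctx = past(name, blocks)
    utxo = set(virtual_utxo)
    # Every block outside the context must be undone; for a block mined on a
    # very old part of the DAG, this loop touches almost the entire DAG.
    for other in reversed(topo_order):
        if other not in ctx:
            undo_block(utxo, blocks[other])
    return utxo

def is_valid(name, blocks, topo_order, virtual_utxo):
    """A block is valid iff its txns only spend outputs present in its context's UTXO set."""
    utxo = utxo_of_context(name, blocks, topo_order, virtual_utxo)
    return all(set(tx.inputs) <= utxo for tx in blocks[name].txs)
```

In this toy model the work of `utxo_of_context` grows with the number of blocks outside the validated block's past, which is exactly the cost a deep outlier block forces on every honest node.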
This is a problem unique to blockDAGs, because in blockchains you only care about the state of a block if it is part of the canonical chain, which usually implies that the calculation of its state is straightforward.
This problem admits three possible solutions:
- Use a [small] finality window. This is the simplest solution, and the "dirtiest" one: it caps the depth of the reorg. However, we want the finality window size to depend on assumptions about the network and on social consensus, rather than on our algorithmic ability to reorg. More on this here [link to be added]. The same goes for a pruning window. So this is not the preferred option.
- Allow invalid blocks. I.e., disable the check of whether a block is valid and, instead, just compute the UTXO set of the virtual block of the DAG. This is essentially going with the data availability (DA) model, where blocks' data are not validated in the base consensus [other post expanding on this coming soon].
- Allow invalid Red blocks. This is my preferred option. I propose a new rule: block X is invalid if its transactions are inconsistent with the UTXO set of its past, or if it points at an invalid block Y such that Y is in X's blue set (i.e., the blue set of the DAG that consists of X.past). Specifically, we allow block X to point at block Y and skip checking its validity in case Y is not Blue according to X. In other words, you only need to validate all of your own block's Blue blocks. The cost savings come from the fact that the diff between the state of the virtual block and the state of a new Blue block is small (read: constant) by construction. Note that it could be the case that X1 and X2 both point at an invalid block Y, but only X2 is invalidated by that, in case Y is in the blue set of X2 but not in that of X1. Note further that it's okay for block Z to point at X2 and to not check whether Y (and, by implication, whether X2) is valid or not, unless X2 is in the blue set of Z (in which case the state of the DAG Z.past would have a small diff from the states of the DAGs Y.past and X2.past). A minimal sketch of this rule appears right after this list.
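The sketch below is only for concreteness; the callables `parents`, `blue_set`, `utxo_of_past`, and `txs` are placeholders assumed to be supplied by the node (e.g., by GHOSTDAG and whatever UTXO-diff machinery it keeps), not an existing API, and `txs(x)` is assumed to return, per transaction, the set of outpoints it spends.

```python
def block_is_valid(x, parents, blue_set, utxo_of_past, txs, cache=None):
    """X is invalid iff its txns conflict with the UTXO set of X.past, or X
    points at an invalid block Y that is Blue from X's point of view."""
    if cache is None:
        cache = {}
    if x in cache:
        return cache[x]

    valid = True
    for y in parents(x):
        # A Red parent (Y not in X's blue set) is pointed at but never
        # validated; only a Blue parent can invalidate X.
        if y in blue_set(x) and not block_is_valid(
                y, parents, blue_set, utxo_of_past, txs, cache):
            valid = False
            break

    if valid:
        # Cheap by construction: the diff between the virtual UTXO set and
        # the UTXO set of a fresh Blue block's past is small.
        utxo = utxo_of_past(x)
        valid = all(inputs <= utxo for inputs in txs(x))

    cache[x] = valid
    return valid
```

Note how X1 and X2 from the example above would get different results from this procedure even though both point at the invalid Y: only the block that sees Y as Blue recurses into it.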
Intuitively, there are two extremes. One is the DA model (option 2), where you do not validate blocks at all, which is equivalent to saying that you calculate only the UTXO set of the virtual block; I'll expand on the challenges of this in a separate post, but for now note that it seems incompatible with SPV mode support. The other extreme is to validate all blocks, i.e., calculate the UTXO set of every block, as in our current design, and then you are open to CPU attacks on the same order of magnitude as the finality window (option 1). The middle ground I propose (option 3) is validating the UTXO set of every Blue block; it supports SPV nodes (which calculate the set of Blue blocks via headers) and avoids CPU attacks.