Switching to a BAW-based DAA

We have recently decided to change PHANTOM so that blocks are ranked not by blue score but by accumulated blue work. The work of a block is simply the expected number of hashes required to find a valid nonce for it (that is, if the hash function has a domain of size N and the difficulty target requires a hash of at most p, the work is N/p). The accumulated blue work of a block is the sum of the works of all blocks in its blue past.
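The definitions above can be sketched in a few lines. This is an illustrative sketch, not the actual PHANTOM implementation: the names `block_work`, `accumulated_blue_work`, and the `blue_past` callback are assumptions, and whether a block's own work is counted in its accumulated blue work is a convention left open here.

```python
HASH_DOMAIN = 2**256  # size N of the hash function's domain (typical for a 256-bit hash)

def block_work(target: int) -> int:
    # A block whose hash must be at most `target` requires on average
    # N / target hash attempts, so its work is N / target.
    return HASH_DOMAIN // target

def accumulated_blue_work(block, blue_past) -> int:
    # Sum the work of all blocks in the blue past; here we also count the
    # block's own work (a convention, not taken from the post).
    return block_work(block.target) + sum(block_work(b.target) for b in blue_past(block))
```

For example, a block whose target is one tenth of the domain has work 10, and a block with work 20 whose blue past consists of that block accumulates 30.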

This decision is meant to secure the network against attacks that use timestamp manipulation to create parallel chains of the same length while minimizing the required amount of work.

To demonstrate this point, consider the following scenario: assume the network had a sudden boost in hash rate. Say the desired rate is one block per second, but a spike in popularity caused 100 blocks to be created within 10 seconds. The difficulty adjustment then increased the difficulty by a factor of 10, and the network stabilized again at the desired rate of one block per second, which it retained for a period of, say, one day. Now imagine an attacker who wants to trick the network into reorging to a new chain that splits from the current chain before the spike in hash rate occurred. She could achieve this by making blocks whose timestamps are one second apart. The difficulty of these blocks would not increase, so the attacker needs less than 10% of the computational power to create a longer chain this way. Only after catching up with the network would the attacker condense 100 more blocks into 10 seconds, increasing the difficulty but creating a longer chain whose last block has a timestamp that doesn't run into the future. Note that the attacker's chain now has a higher blue score than the honest chain, yet much less accumulated work. Thus, using accumulated work to rank blocks defeats such attacks.
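The arithmetic in this scenario can be spelled out. The numbers below are my own worked example of the scenario (work units normalized so a pre-spike block has work 1), not taken verbatim from any implementation:

```python
# Hypothetical worked numbers for the scenario above.
pre_spike_work = 1     # work of a block at the original difficulty
post_spike_work = 10   # work after the 10x difficulty adjustment

spike_blocks = 100     # blocks condensed into 10 seconds
day_blocks = 86_400    # one day at one block per second

# Honest chain after the split point: the spike blocks plus a day of
# post-adjustment blocks.
honest_score = spike_blocks + day_blocks
honest_work = spike_blocks * pre_spike_work + day_blocks * post_spike_work

# Attacker: one block more, but with timestamps one second apart, so the
# difficulty never rises and every block costs only pre_spike_work.
attacker_score = honest_score + 1
attacker_work = attacker_score * pre_spike_work

assert attacker_score > honest_score   # the attacker wins on blue score...
assert attacker_work * 9 < honest_work # ...with roughly a tenth of the work
```

So a blue-score rule accepts the attacker's chain, while an accumulated-work rule rejects it by nearly an order of magnitude.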

Another nice implication of accumulated blue work is that it affords a natural solution to the problem of selecting a difficulty window: the difficulty of a block is calculated from the set of N “latest” blocks in its past. In a chain structure, “latest” has a very clear meaning. In a DAG structure? Not so much. Most reasonable metrics for “latestness” are attackable: timestamps cannot be trusted, and past size can be cheaply manipulated. Since the difficulty window is selected from the entire past (and not just the blue past), manipulating the blue score at decreasing difficulty is also a viable attack vector: an attacker could create a long, cheap chain whose blocks have very high blue scores and publish it; the next honest block would point to it and choose its blocks as “latest”. The upshot is a cheap attack that artificially decreases the difficulty of the entire network.

A good measure of the “latestness” of a block is its accumulated work, that is, the total work of all (not just blue) blocks in its past. In a well-connected network, if the length of time between the creation of two blocks is larger than the propagation delay, then the former block is assured to be in the past of the latter, whereby the latter has more accumulated work. An attacker who wishes to tamper with the difficulty will have to create a side chain whose accumulated work can compete with that of the honest network.
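Under this metric, window selection becomes a one-liner. The sketch below is a minimal illustration under my own naming assumptions (`difficulty_window`, an `accumulated_work` lookup); it is not the actual selection code:

```python
def difficulty_window(past_blocks, accumulated_work, n):
    # The n "latest" blocks in the past are taken to be those with the
    # highest accumulated work: a block created more than a propagation
    # delay after another necessarily has more work in its past.
    return sorted(past_blocks, key=accumulated_work, reverse=True)[:n]
```

For instance, given past blocks with accumulated works {a: 1, b: 5, c: 3, d: 4}, a window of size 2 selects b and d.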

The argument for using accumulated blue work rather than accumulated work is a bit more delicate. We first note that since an attacker has to beat the honest network in terms of work, it is still impossible for them to carry out a difficulty-decreasing attack. A difficulty-increasing attack is still a possibility though: in case the network has red blocks, and the attacker arranges their DAG so that all of their blocks are blue, they accumulate blue work slightly faster than their portion of the computational power dictates. This attack has two huge limitations. The first is that it is only possible for an attacker whose computational power is very close to that of the honest network (by how much exactly depends on the parameters, but even for permissible parameters this comes out above 40% of the total rate). The second is that such an attack is very gradual, and the increase in difficulty is very limited: as the difficulty increases, the rate of red blocks created by the honest network decreases, and once it goes below a certain threshold (which goes to 0 as the computational power of the attacker approaches 50%), the attacker can no longer manipulate the difficulty. In other words, the attacker can only increase the difficulty in a scenario where there are red blocks, and can never increase it to the extent that the creation rate of blue blocks by the honest network is slowed down.

It is now a good point to ask what the ramifications of this change are for the way we understand the security of our protocol. The stochastic process in the analysis of the GHOSTDAG protocol remains, for the most part, the same. In scenarios where no freeloading occurs, at any point in time the attacker chain and the honest chain accumulate score at a rate proportional to their computational power. Freeloading was originally mitigated by the freeloader bound, which put a constant limit on the advantage an attacker can gain by using honest blocks to increase their score. The bound itself is stated in terms of blue score. Since the anticone condition is still discrete (in the sense that the size of the blue anticone is limited by the number of blocks, regardless of their work), the freeloader bound can be reinterpreted as a bound on the number of blocks whose work is included in the accumulated blue work of the freeloading block. The work gained by including these blocks can be bounded by making very mild assumptions on the divergence of the global hash rate, or it could be mitigated by introducing a limit on the sum of the work of blue anticone blocks (e.g. to be at most k times the work of the selected parent).
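The suggested mitigation at the end of the paragraph amounts to a simple validity check. The following is a sketch of that rule under assumed names (`anticone_work_ok`, the list of blue anticone works, and the parameter `k`), not a specification of the actual rule:

```python
def anticone_work_ok(blue_anticone_works, selected_parent_work, k):
    # Reject a block whose blue anticone carries more than k times the
    # work of its selected parent; this caps the amount of honest work a
    # freeloader can absorb into their accumulated blue work.
    return sum(blue_anticone_works) <= k * selected_parent_work
```

For example, with k = 3 and a selected parent of work 2, an anticone of total work 6 passes while one of work 10 is rejected.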

We finally note that after introducing the accumulated blue work rule, the difficulty adjustment still targets a number of blocks per second, so the blue score (as opposed to accumulated blue work) still remains a measure of time. Thus, using blue score to determine finality and pruning remains justified as before.

Nice post! Re your final note, I’m slightly confused. If the DA is targeting blocks per second, why do you say that blue score (rather than past size) measures time?

Yes, this should have been elaborated on more.

In the honest, well-connected setting, there are almost no red blocks, so the ratio of blue score to past size is (1-delta); in this scenario blue past size makes sense as a measure of time.

Now consider a split (e.g. a genuine network split, or an adversary withholding blocks). By analogy, imagine that you have a clock on each side, and once the split ends, you look at both clocks and determine how much time has passed. Which of the two makes more sense: 1. saying that the time that has passed is the time measured by the clock which seems more accurate, or 2. saying that the time that has passed is the sum of the times measured by both clocks?

Clearly, the first choice is very reasonable while the second is patently absurd. But going by the blue score of the side of the split with more accumulated work (which I likened to the “more accurate clock”) is exactly the first option, whereas going by past size is exactly the second!

Followup to this discussion here: Difficulty adjustment and time measurement in DAGs