Difficulty adjustment and time measurement in DAGs

Q1: What quantity acts as a (rough) time measurement in a POW-powered DAG?
A1: The number of blocks whose timestamps were considered by the difficulty adjustment algorithm (DAA) when adjusting the difficulty.

Q2: Why is time measurement important?
A2: It is (and has to be) used for regulating coin minting and for enforcing relative time-lock scripts.

======

A1 is quite tautological, but I still want to point out why neither the blue score nor the pastSize (roughly, the number of blocks in the DAG) acts as such a measurement.

The problem with blueScore is that an attacker can withhold its blocks and publish them late, so that they are not part of the blue set, and thus manipulate the time measurement. This would lead to us underestimating the time that has elapsed.

The problem with pastSize is that following a network split (one that did not last longer than the pruning window), both branches will have already adjusted their block rates (i.e., they mine with lower difficulty), which would lead to us overestimating the time that has elapsed once the branches merge, as pointed out by @Deshe.
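To put rough numbers on these two failure modes (the symbols λ, α, b and the approximations below are mine, for illustration only): if the honest network mines at rate λ blocks per second and an attacker holding a hashrate fraction α < 1/2 withholds its blocks, the blue set grows only at rate (1 − α)λ, so a blueScore-based clock reads

$$ T_{\text{measured}} \approx \frac{\text{blueScore}}{\lambda} \;\ge\; (1-\alpha)\,T_{\text{real}} \;>\; \tfrac{1}{2}\,T_{\text{real}}, $$

an under-estimate by a factor of at most 1/(1 − α) < 2. By contrast, after a split of duration T_split during which each of the b branches has retargeted back to rate λ, the reunited DAG contains about b·λ·T_split blocks for that period, so a pastSize-based clock reads

$$ T_{\text{measured}} \approx \frac{\text{pastSize}}{\lambda} \;\approx\; b\,T_{\text{split}} \quad \text{(for that period)}, $$

an over-estimate that is not tied to any hashrate expenditure (and, with deliberately cheap low-difficulty branches, is limited only by the merge bound discussed further below).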

======

Notes and reminders:

  • Recall that the Kaspa DAA uses a sliding window, in line with the algorithms developed and investigated by Zawy12 in https://github.com/zawy12/difficulty-algorithms. Roughly, the idea is to consider the last N blocks (in Kaspa: the N blocks with the highest blue score in the DAG) – aka the DAA window – and use their average, minimum, and maximum timestamps to adjust the block rate, in a manner similar to Bitcoin’s retargeting (see the sketch following this list).

  • Technically, A1 translates to maintaining, for each block, the number of blocks that are members of its merged set and that were used in the DAA (i.e., that were part of its DAA window). I emphasize “blocks that are members of the merged set” because, as @elichai2 points out, we select the highest-blue-score blocks in a block’s past when calculating the DAA, and every block inherits the blue-red colouring of its selected parent; this in turn implies that a block will not consider in its DAA window blocks in its selected parent’s past that were not already considered by its selected parent.

  • While every block will have its own time measurement – i.e., its own count of the number of blocks that participated in the DAA windows throughout its past set – the dictating one is of course that of the block that won the consensus battlefield, i.e., the selected tip of the virtual.
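To tie the first two notes together, here is a minimal, self-contained Python sketch. This is not kaspad code; `Block`, `compute_daa`, `retarget`, `N = 2641` and `TARGET_SPACING = 1` are illustrative assumptions of mine, intended only to show the shape of the sliding DAA window and of the per-block count described in A1:

```python
from dataclasses import dataclass, field
from typing import Optional

N = 2641            # assumed DAA window size; the real constant may differ
TARGET_SPACING = 1  # assumed target block spacing, in seconds


@dataclass(eq=False)  # identity-based hashing so blocks can live in sets
class Block:
    blue_score: int
    timestamp: int                      # block timestamp, in seconds
    selected_parent: Optional["Block"]  # None only for genesis
    parents: list["Block"]              # all direct parents, incl. the selected one
    daa_window: set = field(default_factory=set)
    daa_score: int = 0


def past(block: Block) -> set:
    """Full past set of a block (materialized only because this is a sketch)."""
    result, stack = set(), list(block.parents)
    while stack:
        b = stack.pop()
        if b not in result:
            result.add(b)
            stack.extend(b.parents)
    return result


def compute_daa(block: Block) -> None:
    """Assumes compute_daa was already called on the selected parent."""
    p = past(block)

    # Note 1: the DAA window is (roughly) the N highest-blue-score blocks in the past.
    block.daa_window = set(sorted(p, key=lambda b: b.blue_score, reverse=True)[:N])

    # Note 2: the block's time measurement ("DAA score") is its selected parent's
    # score plus the number of merged-set blocks that entered this block's window.
    sp = block.selected_parent
    if sp is None:  # genesis
        block.daa_score = 0
        return
    merge_set = p - past(sp)  # blocks merged by this block (selected parent included)
    block.daa_score = sp.daa_score + sum(1 for b in merge_set if b in block.daa_window)


def retarget(window: set, prev_target: int) -> int:
    """Bitcoin-style retargeting over the window's min/max timestamps (Note 1)."""
    if len(window) < 2:
        return prev_target
    timestamps = [b.timestamp for b in window]
    actual_span = max(timestamps) - min(timestamps)
    expected_span = TARGET_SPACING * (len(window) - 1)
    # Blocks arrived slower than expected -> raise the target (lower the difficulty).
    return max(1, prev_target * actual_span // expected_span)


def measured_elapsed_seconds(virtual_selected_tip: Block) -> int:
    """A1 in code: the DAA score of the virtual's selected tip is the DAG's rough clock."""
    return virtual_selected_tip.daa_score * TARGET_SPACING
```

Per the third note, every block carries its own `daa_score`, but the clock the network acts on is the one read off the virtual's selected tip, which is why `measured_elapsed_seconds` takes exactly that block.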

I want to emphasize a big difference between the under-estimation in blue score and the over-estimation in past size.

But before that, some context: one of the reasons that we use a clock is to implement statements such as “the finality block should be a week old”. Currently we use blue score to measure this. That is, we define the difference in blue score between the finality block and the virtual to be the expected increase in blue score during a week of operation. When we discuss a different metric of time, we also implicitly discuss the consequent changes to how the finality block (and, consequently, the pruning block) is chosen.
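For concreteness (the one-block-per-second rate here is an assumed example value, not a quoted parameter): with a target rate of λ = 1 block per second, “a week old” translates to a blue-score gap of

$$ \text{finalityDepth} \;\approx\; \lambda \cdot T_{\text{week}} \;=\; 1~\tfrac{\text{block}}{\text{s}} \times 604{,}800~\text{s} \;\approx\; 6\times 10^{5}~\text{blocks}. $$

Switching to a different time metric means this gap would be expressed in that metric's units instead of blue-score units.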

The under-estimation in the former case is bounded by the computational power of the adversary, and in particular can be manipulated by at most a factor of two (that is, an almost-50% adversary can convince us that time passes at almost half its real rate).

On the other hand, the over-estimation in the latter case is unbounded*. In particular, if we use past size to measure time, it is very cheap to convince us that time passes 1,000 times faster than it actually does.

Another point is the ramifications of the attack: by increasing our clock speed, an attacker could force us to choose finality (and, consequently, pruning) blocks which were created very late. Technically, they could make it so that the finality block is the selected parent of the selected tip! On the other hand, I fail to see any dire consequence of decreasing our clock speed, especially not when it can be slowed by a factor of at most two, and at a very considerable PoW expense.

*That’s not completely true, due to the bounded merge rule. Recall that there is a parameter L which bounds the maximal merge set size of a valid block. This implies that the past size cannot increase by more than L per block. The implication for the attack is that the attacker can’t actually make our clock arbitrarily faster, but they can still speed it up by a factor of almost L (a more exact expression is [L - average merge set size in the rest of the network], but since L >> k this is pretty much the same). So maybe an attacker can’t force the finality block to be the selected parent of the selected tip, but for, say, L = 1000, it can convince us that a block is a week old when it is actually ten minutes old.
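Just to sanity-check the arithmetic behind that last example: the speed-up factor is at most about L, and indeed

$$ \frac{1~\text{week}}{L} \;=\; \frac{10{,}080~\text{min}}{1{,}000} \;\approx\; 10~\text{min}, $$

so roughly ten real minutes of near-full merge sets suffice to present a past that looks a week deep.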

Good point. Do you then suggest we modify the pruning/finality choices according to the time measurement proposed herein?

BTW, a bad (though not dire) consequence of decreasing the clock speed is deviation from the minting/inflation schedule.

I always kind of assumed this is the case. When we outlined finality/pruning we always used real-world time terminology, e.g., “the finality window is a week long”.

The way pruning/finality are implemented is under the implicit assumption that blue score is a viable approximation of time. If we decide that other metrics are more accurate and less attackable, then I think we should adjust pruning accordingly.

On second thought, I am not sure the benefit outweighs the implementation complexity. Are you bothered by the fact that a 33% attacker can make finality blocks 1.5 times older than they should be? To an extent which justifies altering the finality/pruning block selection mechanism? I am not fluent enough in the engineering implications to make the call.

AFAIR, in finality/pruning blueScore is used for time approximation only as a side effect; the main reason is the pruning proof, which heavily leans on k, a bound that only holds under blueScore.

IMHO, as long as blueScore gives us a lower bound on time, it’s good enough for finality/pruning, but that’s not true for the monetary policy and lock times.

A lower bound is sufficient. If I understand correctly, an attacker can manipulate these quantities by a factor of 2 at most (assuming the attacker has < 50% of the hashrate).

It is a bit more nuanced than that. The proof uses k in several different ways. Let w’(B) be the “accumulated difficulty window size” of B, that is, the size of the union of the difficulty windows of B and of the blocks in its selected chain (equivalent recursive definition: w’(B) = w’(B.SelectedParent) + |B.DifficultyWindow \ B.SelectedParent.Past|, with w’(Genesis) = 0). It is no longer true that w’(B) - w’(B.SelectedParent) <= k, but this is not a real issue because we can still say that w’(B) - w’(B.SelectedParent) <= diffWindowSize.
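A tiny sketch of that recursive definition, reusing the hypothetical `Block` and `past` helpers from the earlier sketch (illustrative only, not the actual pruning-proof code):

```python
def accumulated_window_size(block: Block) -> int:
    """w'(B) = w'(B.SelectedParent) + |B.DifficultyWindow \\ B.SelectedParent.Past|,
    with w'(Genesis) = 0."""
    sp = block.selected_parent
    if sp is None:  # genesis
        return 0
    newly_covered = block.daa_window - past(sp)  # window blocks not already in sp's past
    return accumulated_window_size(sp) + len(newly_covered)
```

The per-step increment `len(newly_covered)` is exactly the quantity that can no longer be bounded by k, but is trivially bounded by diffWindowSize.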

The other assumption that we make, that there would not be a split below a certain depth, can still be stated in terms of k.

The upshot, I think, is that the security proof should carry over but some of the k factors in the pruning block depth would have to be changed to diffWindowSize.

@msutton what do you think?

I am fairly convinced that this is indeed the case. An alpha attacker can slow the clock down to at most a (1 - alpha) fraction of its real rate, i.e., by a factor of at most 1/(1 - alpha).