The DAA relies on timestamps for retargeting, and this is a problem: timestamps are not reliable, both for adversarial reasons and due to natural clock drift across large networks.
This poses an inherent tension: being too lax means giving more power to difficulty attacks, whereas being too strict would cause honest blocks to be dropped.
There should be a mechanism for determining and enforcing a leeway for timestamp deviation. I describe one here.
The mechanism relies on the ability to decide, given two blocks, which one is “older”. This decision should not be based on timestamps, as that leads to circular reasoning. From our point of view, we can assume that the design of the difficulty adjustment algorithm already contains a way to decide block chronology, and we only require that the DAA and the timestamp verification use the same method. (At the time this post was written, the selected metric was blue accumulated work; you can read about the reasoning behind this decision in this post.)
Say we decide our deviation tolerance should be \mathcal{F}\lambda, where \lambda is the expected block delay and \mathcal{F} is a parameter we get to choose. The mechanism acts differently for checking whether a block is too far in the past and for checking whether it is too far in the future, and there is also a difference in how we choose to handle each case.
To verify that a block is not too much into the past, we check that its time signature is at least as late as the median timestamp of the last 2\mathcal{F}-1 blocks in its past. If it is, then the block is considered invalid and is rejected indefinitely. The reason we can be confidant about rejecting the block is that the information required to avoid this situation is completely known to the miner of the block.
To verify that a block is not too far in the future, we have to rely on the system clock. If the timestamp on the block is later than t + \mathcal{F}\lambda, where t is the local clock time, the block is dropped. In this case, we do not consider the block rejected, and we trust that if it is an honest block it will be transmitted again (another approach would be to hold on to the block, but delay its inclusion into the DAG until the condition is satisfied).
The reason this is crucial is that we rely on the system clock, which is not in consensus. In particular, if we invalidated blocks that are too far in the future, an adversary could use well-timed blocks to split the network. It would also cause nodes with too much negative clock drift to reject honest blocks.
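A similar sketch of the future-direction check against the local clock (again with illustrative names; retransmission or delayed processing of dropped blocks is omitted):

```go
package main

import (
	"fmt"
	"time"
)

// checkNotTooFarInTheFuture drops (but does not permanently reject) a block
// whose timestamp exceeds the local clock by more than F*lambda. Since the
// local clock is not part of consensus, the block is merely delayed: an honest
// block is expected to be seen again once the condition holds.
func checkNotTooFarInTheFuture(blockTimestamp, localClock time.Time, f int, lambda time.Duration) bool {
	maxAllowed := localClock.Add(time.Duration(f) * lambda)
	return !blockTimestamp.After(maxAllowed)
}

func main() {
	now := time.Now()
	lambda := time.Second // expected block delay
	f := 132

	fmt.Println(checkNotTooFarInTheFuture(now.Add(30*time.Second), now, f, lambda))  // true: within t + F*lambda
	fmt.Println(checkNotTooFarInTheFuture(now.Add(200*time.Second), now, f, lambda)) // false: dropped for now, not rejected
}
```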
How \mathcal{F} should be chosen depends on the particular DAA algorithm. The current algorithm allows a simple bound on the factor by which timestamp manipulation could affect the difficulty adjustment.
We quantify this with regards to the particular DAA algorithm at the time this post was written, which is described here and here.
In this algorithm, we have a difficulty window of size N, and the correction factor to the difficulty is the ratio between the following two quantities:
- The expected time it should take to create N blocks, N\lambda, and
- the approximate time it took to create the blocks in the window, which is the difference between the minimal and maximal timestamps in the window; call this quantity R.
The adjustment factor is thus \alpha = N\lambda/R.
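A minimal sketch of this correction-factor computation, assuming timestamps in seconds and a non-degenerate window (illustrative names, not the production DAA code):

```go
package main

import "fmt"

// adjustmentFactor computes alpha = N*lambda / R, where N is the window size,
// lambda the expected block delay and R the difference between the maximal and
// minimal timestamps in the window (all in seconds).
func adjustmentFactor(windowTimestamps []int64, lambda float64) float64 {
	minTS, maxTS := windowTimestamps[0], windowTimestamps[0]
	for _, ts := range windowTimestamps {
		if ts < minTS {
			minTS = ts
		}
		if ts > maxTS {
			maxTS = ts
		}
	}
	n := float64(len(windowTimestamps))
	r := float64(maxTS - minTS)
	return n * lambda / r
}

func main() {
	// Four blocks one second apart: N = 4, R = 3, lambda = 1, so alpha = 4/3 and
	// the difficulty would be nudged up slightly.
	fmt.Println(adjustmentFactor([]int64{0, 1, 2, 3}, 1.0))
}
```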
The choice of \mathcal{F} also relies on a bound e on the drift between clocks in the network (taking a hard bound is a very good approximation to the assumption that clock drift is distributed like \mathcal{N}(0,e^2)).
Assuming that in days of peace R\approx N\lambda, the worst an attacker can do is push the earliest timestamp about \lambda\mathcal{F} into the past and the latest timestamp exactly \lambda\mathcal{F} into the future, obtaining R\approx N\lambda + 2\lambda\mathcal{F}=\lambda(N+2\mathcal{F}).
In this case we get that \alpha \approx N/(N+2\mathcal{F}) or \frac{1}{\alpha} \approx 1 + 2\frac{\mathcal{F}}{N}.
This means we can’t make \mathcal{F} irresponsibly large: e.g., if \mathcal{F} = N/2 then an attacker can potentially make the difficulty half of what it should be by placing just two blocks. (Of course, this attack will only be this effective for a single block, and will have a declining effect for the next \mathcal{F} blocks.)
If we want to limit timestamp attacks to creating a multiplicative error of at most (1+\delta), we need to have 2\frac{\mathcal{F}}{N}\le \delta or, equivalently, 2\mathcal{F}/\delta\le N.
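A tiny sketch evaluating this worst-case bound for concrete choices of N and \mathcal{F} (purely illustrative):

```go
package main

import "fmt"

// worstCaseError returns the worst-case multiplicative difficulty error
// 1/alpha = (N + 2F)/N = 1 + 2F/N that an attacker can cause by stretching
// the window with two manipulated timestamps, as derived above.
func worstCaseError(n, f int) float64 {
	return 1.0 + 2.0*float64(f)/float64(n)
}

func main() {
	fmt.Println(worstCaseError(2640, 132)) // 1.1: at most a 10% error
	fmt.Println(worstCaseError(264, 132))  // 2: F = N/2 lets an attacker halve the difficulty
}
```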
It can be crudely stated that a large \frac{\mathcal{F}}{N} protects us from synchronization problems at the expense of allowing some difficulty manipulation, while a small \frac{\mathcal{F}}{N} protects us from such manipulation at the cost of making the algorithm less responsive. It might be possible to ameliorate this lack of responsiveness by means other than decreasing N, such as choosing different averaging mechanisms.
Let \delta be the maximal multiplicative error we allow, as above. Assuming the DAA works properly, then (during times in which the global hash rate does not change too frantically) the block delay will be at least \lambda/(1+\delta), so the median timestamp of the last 2\mathcal{F}-1 blocks in a block's past can be as little as about \mathcal{F}\lambda/(1+\delta) behind it. If we want to prevent honest nodes from dropping honest blocks whose miners' clocks lag by up to e, we should require that \mathcal{F}\lambda/(1+\delta) \ge e, that is, \mathcal{F} \ge (1+\delta)e/\lambda.
In particular, if we allow \delta=10\%, and assume e=120 seconds and \lambda = 1 second, we get that we need \mathcal{F} \ge (1+\delta)e/\lambda=132 and N\ge 2\mathcal{F}/\delta = 2640.
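The same arithmetic as a short sketch that computes the lower bounds on \mathcal{F} and N from \delta, e and \lambda (illustrative only; in practice the bounds would be rounded up to integers):

```go
package main

import "fmt"

// parameterBounds returns the lower bounds derived above:
// F >= (1+delta)*e/lambda, so that honest blocks are never dropped, and
// N >= 2F/delta, so that timestamp manipulation causes at most a (1+delta)
// multiplicative error. delta is the allowed error, e the clock-drift bound
// and lambda the expected block delay, both in seconds.
func parameterBounds(delta, e, lambda float64) (fBound, nBound float64) {
	fBound = (1 + delta) * e / lambda
	nBound = 2 * fBound / delta
	return fBound, nBound
}

func main() {
	fBound, nBound := parameterBounds(0.1, 120, 1)
	fmt.Printf("F >= %.0f, N >= %.0f\n", fBound, nBound) // F >= 132, N >= 2640
}
```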