An atlas of the various constants and dependencies thereof

The purpose of this post is to list and describe some of the constants which parametrize the system and the relations between them, to provide sources for more in-depth discussion where available, and to fill the gaps where it is not.

Some of the constants described below appear in the code, while others are defined for the sake of theoretical discussion. In the former case I will state how the variable under discussion is used in the codebase.

The maximal network delay d_\max

A bound on the time it takes the entire network to learn of a block after it was broadcast by its miner.

The tolerance parameter \delta

Many of the structural properties of the DAG are only guaranteed with high probability. Adjusting some parameters allows us to control that probability so that desired properties hold “most of the time”. When we say “most of the time” we actually mean a fraction 1-\delta of the time.

In the entire system the only such hard assumption we make is on the anticone size (cf. k below), and the rest is derived. Hence, \delta designates a bound on the expected fraction of blocks whose anticone is larger than k.

When discussing parameters which exist as variables in the codebase, I will state the name of the variable.

All of the descriptions and properties above are assumed to hold in the honest scenario unless stated otherwise. This does not mean we assume that the network is honest, only that some of the variables have a meaningful interpretation in the honest setting. All security guarantees are proven in the honest-majority setting.

The block rate \lambda
Name in codebase: for implementation reasons, it is more convenient to use the reciprocal 1/\lambda, which represents block delay rather than block rate. This appears in the code as defaultTargetTimePerBlock.

Conceptual meaning: the expected number of blocks created in a period of time of length D is \lambda D.

Operational meaning: the difficulty adjustment algorithm is parametrized to adjust the difficulty such that if H is the global hash rate then the expected number of hashes required to mine a block is H/\lambda.
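As a sanity check, here is a minimal sketch (the variable names are mine, not the codebase's) recovering the expected work per block from the hash rate and the target time per block:

```python
# Sketch: relation between hash rate, block rate and expected work per block.
# All names here are illustrative, not taken from the codebase.

def expected_hashes_per_block(hash_rate: float, block_rate: float) -> float:
    """Expected number of hashes to mine one block.

    hash_rate:  global hash rate H, in hashes per second.
    block_rate: block rate lambda, in blocks per second; the codebase
                stores its reciprocal, the target time per block.
    """
    target_time_per_block = 1.0 / block_rate  # the block delay, as stored in code
    return hash_rate * target_time_per_block  # = H / lambda


# Example: one block per second at 10^12 hashes per second.
print(expected_hashes_per_block(1e12, 1.0))  # 1e12 hashes per block
```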

The parent cap C
Name in codebase: defaultMaxBlockParents

Operational meaning: a cap on the number of blocks a given block can point to.

This parameter was introduced as a means to control the size of a block. An unlimited number of parents implies that an attacker could inflate the block size by posting many parallel blocks, thus causing the network delay of a block to exceed d_\max and violating the assumptions on which the security of PHANTOM relies.
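To illustrate the rule itself, a minimal sketch of capping the parent list (all names are hypothetical, this is not the codebase's parent-selection logic):

```python
# Sketch: capping the number of parents of a new block at C.
# Names are illustrative.

def choose_parents(tips: list, selected_parent, parent_cap: int) -> list:
    """Pick at most parent_cap parents, always including the selected parent."""
    parents = [selected_parent]
    for tip in tips:
        if len(parents) == parent_cap:
            break  # the remaining tips stay unmerged for now
        if tip != selected_parent:
            parents.append(tip)
    return parents


tips = ["A", "B", "C", "D", "E"]
print(choose_parents(tips, "C", 3))  # ['C', 'A', 'B']
```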

The effective merge delay d_\mbox{eff}

The effective merge delay is a period of time d_\mbox{eff} such that if block B' was created at least d_\mbox{eff} after the block B was reported to the network then B\in B'.Past with probability at least 1-\delta.

Dependence on other variables: if C\ge \lambda d_\max then d_\mbox{eff} = d_\max. If C = \alpha \lambda d_\max for some \alpha < 1 then, in the worst case in which there are \lambda d_\max tips, it would take at least 1/\alpha blocks to merge them, making d_\mbox{eff} \ge d_\max + 1/(\alpha\lambda).

It is impossible to upper bound d_\mbox{eff} without further assumptions. Generally speaking, if there are more than C tips it could be that one of them is starved indefinitely. This could be mitigated in a variety of ways: by refining the definition of d_\mbox{eff} to allow starvation of some blocks, by incentivizing miners to point at slightly older blocks by awarding some or all of the block rewards of red blocks to the merging block, by forcing miners to choose the pointed tips at random, and so on. This is beyond the scope of this post. We assume for now that d_\mbox{eff} \le d_\max + \eta/(\alpha\lambda), leaving the specification of \eta to whoever specifies the chosen approach.
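The worst-case lower bound can be sketched numerically: with \lambda d_\max tips and at most C parents per block, each merge step absorbs at most C tips, and blocks arrive every 1/\lambda seconds in expectation (all names here are mine, not the codebase's):

```python
import math

# Sketch: worst-case extra merge time implied by the parent cap.
# With lambda*d_max tips and C parents per block, merging all tips
# takes at least ceil(lambda*d_max / C) blocks, each arriving every
# 1/lambda seconds on average. Names are illustrative.

def worst_case_merge_bound(parent_cap: int, block_rate: float, d_max: float) -> float:
    """Lower bound on d_eff implied by the parent cap C."""
    worst_case_tips = block_rate * d_max               # lambda * d_max
    if parent_cap >= worst_case_tips:
        return d_max                                   # cap never binds: d_eff = d_max
    blocks_to_merge = math.ceil(worst_case_tips / parent_cap)
    return d_max + blocks_to_merge / block_rate


print(worst_case_merge_bound(parent_cap=10, block_rate=1.0, d_max=5.0))   # 5.0
print(worst_case_merge_bound(parent_cap=10, block_rate=1.0, d_max=50.0))  # 55.0
```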

The blue anticone size limit k
Name in codebase: defaultGHOSTDAGK

Conceptual meaning: most of the time, the anticone size of a block should be at most k.

Operational meaning: let B be a block and let B'\in B.BlueSet; then |B'.AntiCone \cap B.BlueSet| \le k.

Dependence on other constants: Once d_\max, \lambda and \delta above are decided upon, k is chosen such that, in expectation, a fraction of at least 1-\delta of the blocks admit anticones of size at most k. For d_\max = 50, \lambda = 1/sec and \delta = 0.05 this evaluates to k=18.

The calculation is based on the formula which appears in the PHANTOM paper, subsection 4.2. The analysis therein assumes d_\mbox{eff} = d_\max. If the system is parametrized such that the equality does not hold, d_\mbox{eff} should be used rather than d_\max.
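As a rough illustration only (this is my approximation, not the formula from the paper, and it will not reproduce the exact constants above): the anticone of a typical block consists of blocks mined up to d_\max before or after it, so its size is approximately Poisson with mean 2\lambda d_\max, and k can be taken as the smallest value whose tail probability is below \delta:

```python
import math

# Rough illustration (not the exact formula from the PHANTOM paper):
# approximate the anticone size of a block as Poisson with mean
# 2 * lambda * d_max, and pick the smallest k whose tail is below delta.

def approx_k(block_rate: float, d_max: float, delta: float) -> int:
    mean = 2.0 * block_rate * d_max
    cumulative = 0.0
    k = 0
    while True:
        cumulative += math.exp(-mean) * mean ** k / math.factorial(k)
        if 1.0 - cumulative <= delta:  # P(anticone > k) <= delta
            return k
        k += 1


# Illustrative values, not the parametrization discussed above.
print(approx_k(block_rate=1.0, d_max=5.0, delta=0.05))
```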

The merge set size limit L
Name in codebase: defaultMergeSetSizeLimit

Conceptual meaning: a limit on the number of blocks a new block can introduce to the state.

Operational meaning: any block B must satisfy |B.Past \setminus B.SelectedParent.Past| \le L + 1 (the +1 is because the set on the LHS contains B.SelectedParent).
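A minimal sketch of this check on toy sets (all names hypothetical):

```python
# Sketch: the merge-set size rule, on plain Python sets.
# Names are illustrative, not the codebase's.

def merge_set_within_limit(block_past: set, selected_parent_past: set, limit: int) -> bool:
    """Check |B.Past \\ B.SelectedParent.Past| <= limit + 1."""
    merge_set_plus_parent = block_past - selected_parent_past
    return len(merge_set_plus_parent) <= limit + 1


# B's past contains its selected parent S plus two blocks unknown to S.
block_past = {"G", "X", "Y", "S", "U", "V"}
selected_parent_past = {"G", "X", "Y"}
print(merge_set_within_limit(block_past, selected_parent_past, limit=2))  # True
print(merge_set_within_limit(block_past, selected_parent_past, limit=1))  # False
```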

The finality depth \phi and pruning depth \pi
Name in codebase: \phi is called defaultFinalityDuration; \pi is not defined but rather calculated in the function PruningDepth().

Conceptual meaning of \phi: represents a period of time after which the selected chain may not be changed. That is, if a block in the selected chain is old enough, it is guaranteed to remain in the selected chain.

Conceptual meaning of \pi: represents a period of time such that blocks which were created in the anticone of a selected chain block older than this period of time will never affect the state of the UTXO set.

Operational meaning of \phi: \phi is used to define the finality block F as the newest block in Virtual.SelectedChain whose blue score is not larger than Virtual.BlueScore - \phi. In the event that including a new block would cause the selected chain to reorganize such that it no longer includes the current F, we alert the community that the network is split. The split is then manually resolved.

Operational meaning of \pi: \pi is used to define the pruning block P as the newest block in Virtual.SelectedChain whose blue score is not larger than Virtual.BlueScore - \pi. The validation rules, thoroughly discussed here, are designed such that if at the time B was discovered it was in the anticone of P, it would never be in the past of the virtual block (almost certainly, barring a more-than-half attack).
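Both F and P are located the same way: scan the selected chain for the newest block whose blue score is at most Virtual.BlueScore minus the relevant depth. A sketch on a toy chain (all names hypothetical):

```python
# Sketch: locating the newest selected-chain block at a given depth
# below the virtual's blue score. Names are illustrative.

def block_at_depth(selected_chain: list, virtual_blue_score: int, depth: int):
    """selected_chain: (name, blue_score) pairs, oldest to newest.

    Returns the newest block whose blue score is not larger than
    virtual_blue_score - depth: the finality block for depth=phi,
    the pruning block for depth=pi.
    """
    threshold = virtual_blue_score - depth
    candidate = None
    for name, blue_score in selected_chain:
        if blue_score <= threshold:
            candidate = name  # keep the newest qualifying block seen so far
    return candidate


chain = [("G", 0), ("A", 10), ("B", 20), ("C", 30), ("D", 40)]
print(block_at_depth(chain, virtual_blue_score=45, depth=20))  # 'B'
```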

Dependence on each other and on other variables: \phi is chosen to represent an arbitrary real time length deemed reasonable by external considerations, hence it does not strongly depend on other variables. The only form of weak dependence is that it must be orders of magnitude larger than the expected number of blocks created during the effective merge delay, i.e. \phi \gg \lambda d_\mbox{eff}.

The discussion in the linked post describes validation rules, relative to which it was formally proven that the pruning mechanism is secure given that \pi = 2\phi + 4Lk + 2k + 2. The dependence of \pi on k also implies dependence on d_\mbox{eff} and \lambda; however, this dependence is already accounted for in the calculation of k.
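The relation itself is straightforward to compute; a sketch with illustrative values (the numbers below are mine, not the network's parametrization):

```python
# Sketch: pruning depth from finality depth, merge-set limit and k,
# per the relation pi = 2*phi + 4*L*k + 2*k + 2. Values are illustrative.

def pruning_depth(phi: int, merge_set_limit: int, k: int) -> int:
    return 2 * phi + 4 * merge_set_limit * k + 2 * k + 2


print(pruning_depth(phi=100, merge_set_limit=10, k=18))  # 958
```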

The difficulty window size \mathcal{N}, the timestamp deviation tolerance \mathcal{F}
Name in codebase: \mathcal{N} is called defaultDifficultyAdjustmentWindowSize, \mathcal{F} is called defaultTimestampDeviationTolerance.

Conceptual meaning: \mathcal{N} describes how deep into the past of a block we should look when calculating its difficulty target. \mathcal{F} describes how tolerant we are of timestamps deviating from the expected.

The operational meaning is thoroughly discussed here. The only dependence on other variables lies in the fact that \mathcal{F} relies on measuring a real time period (currently two minutes) by block count, which depends on \lambda.
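That conversion from a real-time tolerance to a block count is the only place \lambda enters; a minimal sketch (names are mine, not the codebase's):

```python
import math

# Sketch: measuring a real-time tolerance window in blocks.
# Names are illustrative, not the codebase's.

def deviation_tolerance_blocks(tolerance_seconds: float, block_rate: float) -> int:
    """Number of blocks expected in the tolerance window at rate lambda."""
    return math.ceil(tolerance_seconds * block_rate)


# Example: a two-minute tolerance at one block per second.
print(deviation_tolerance_blocks(tolerance_seconds=120, block_rate=1.0))  # 120
```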