Concrete proposal for a synchronously composable verifiable programs architecture

Cowritten with @michaelsutton.

High level concepts, terminology:

  1. Terminology, Solana-inspired: Accounts hold state data, verifiable Programs/vProgs own accounts and define their state transition logic, transactions declare in advance their read/write accounts.

  2. Each vProg is practically a mini zkVM which commits and progresses its state to L1 through its own sovereign covenant.

  3. vProgs have complete sovereignty over their throughput and state size regulation; each vProg defines its own corresponding constants and scale, and in particular regulates its own state growth. A txn requiring permanent storage from vp1 must pay for this according to the gas scale and STORM constants of vp1.

  4. vProgs are mutually trustless: vp2 never relies on correct execution or state availability of vp1.
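The account model above can be given a minimal sketch; `Account`, `Transaction`, and `conflict` are hypothetical illustration names, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    address: str
    owner_vprog: str  # the vProg that owns this account and its transition logic

@dataclass(frozen=True)
class Transaction:
    # Solana-style up-front declaration of account access
    reads: frozenset
    writes: frozenset

def conflict(tx1: Transaction, tx2: Transaction) -> bool:
    """Two txns conflict (cannot run in parallel) iff one writes an
    account that the other reads or writes."""
    return bool(tx1.writes & (tx2.reads | tx2.writes)) or \
           bool(tx2.writes & (tx1.reads | tx1.writes))
```

This up-front declaration is what later allows syncompo dependencies to be scoped to individual accounts rather than to entire vProgs.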

Composability:

  1. The sync composability feature enables txns that create a dependency between accounts belonging to different vProgs (read the state of one account, use it as input to write to another) while each vProg maintains sole ownership over its state; in particular, this includes the ability to enforce cross-vProg atomicity.

  2. Reminder why it is crucial to optimize for sync composability: without native syncompo, users and liquidity will gradually flow to rollup entities, which offer syncompo and unified execution environments, yet whose inherent incentive is to win it all and remain the single parasitic entity; rollups have no incentive to interop and defragment. A native syncompo vProg design remedies this by optimizing for deployment of vProgs directly on L1, with no intermediaries. By replacing Solana’s account-centric Programs with vProgs, we inherit its coherent standards and unified liquidity without bloating L1 state, while keeping full-node HW requirements minimal. See this thread https://x.com/divine_economy/status/1884243869136740361

  3. vProgs are synchronously composable w/o compromising their sovereignty; thanks to the account-centric design, the dependency created by a syncompo transaction is limited to the relevant account (and its scope, see below) rather than to the entire vProg, eg an SPL token account transfer doesn’t create a dependency on the rest of the SPL accounts. Rule of thumb: roughly speaking, if a txn is parallelizable in a Programs setup, it creates no syncompo dependency in a vProgs setup.

  4. The design intends to maximize inclusiveness of vProgs, zkVMs, and proof systems. Still, receiving a dependency from an arbitrary vProg is unsafe; some prerequisites are needed, eg vProg source code availability (see below), VM familiarity (for on-site execution), gas scale conversion (point 6), etc. Under research: (i) a precise characterization of the features requiring vetting; (ii) automation of the filtering process through some standard + a proof that the covenant locks a vProg adhering to the standard (in applied jargon this is referred to as zk-SBOM, which practically seems like rather ordinary usage of Merkle trees to prove some properties and structure of the program; still a useful term to distinguish proofs of program properties from proofs of correct execution).

  5. Async composability is enabled too, but introduces execution uncertainty for the transactor, since asyncompo implies either lack of atomicity or execution latency on the order of num_of_dependencies*proof_latency. Contrast this with the following design goal under syncompo: keep confirmation times on the order of L1 sequencing latency rather than proof latency (without this requirement the architecture challenge is an order of magnitude easier). Note that as long as we adhere to this goal there’s no need for the timebound proof settlement mentioned here.
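The latency contrast above can be made concrete with a tiny arithmetic sketch; the numeric constants here are hypothetical placeholders, not proposed parameters:

```python
def async_confirmation_latency(num_dependencies: int, proof_latency: float) -> float:
    """Async composability: each cross-vProg dependency waits for a proof,
    so worst-case confirmation grows as num_of_dependencies * proof_latency."""
    return num_dependencies * proof_latency

# Hypothetical numbers, for illustration only:
SEQUENCING_LATENCY_S = 1.0   # syncompo design goal: confirmation ~ this
PROOF_LATENCY_S = 60.0

# A txn with 3 cross-vProg dependencies waits ~3 minutes asynchronously,
# while the syncompo goal keeps it at sequencing speed.
print(async_confirmation_latency(3, PROOF_LATENCY_S), SEQUENCING_LATENCY_S)
```

This is why the design goal of confirmation times on the order of sequencing latency is what makes the architecture challenge hard.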

Details on sync composability:

  1. Trustless cross-vProg communication requires that a syncompo txn that reads from vp1 and writes to an account owned by vp2 must provide all relevant witness data (that hasn’t been previously provided to vp2!), and must gas-pay vp2 for the remaining resources its scope consumes. scope := the state transitions from this txn backwards to the latest witness that was already zkp-anchored. The latest submitted witness, and hence the scope, depends on the pov of the callee vProg, which is mostly determinable from the structure of the computation DAG (unrelated to the blockDAG), but note:

  2. A naive approach to computing the scope would infer it solely from the topology of the computation DAG, but this fails to account for cases where a txn declared a read of an account yet failed to implement that read. A read-fail can be treated by (i) requiring txns to begin their execution by reading all declared-read accounts, and (ii) using gas commitments inside txns to reason, without executing the txn, about whether the read instructions are sufficiently gas-funded. It is crucial to additionally be convinced that failure to write to declared accounts has no negative consequences; this topic is delicate, and requires careful attention, analysis, and a separate post.

  3. Storage of witnesses: As noted in article 10 in parentheses, witness data is assumed to be stored by the receiving vProg. This storage can be defined by convention to be transient and therefore pruned after the pruning epoch. The alternative, a permanent-storage convention, is doable too, but seems practically unnecessary.
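The scope definition in item 1 can be sketched as a backward walk over an account's transition history; `Transition` and `compute_scope` are hypothetical names, and a real implementation would walk the computation DAG rather than a flat per-account list:

```python
from dataclasses import dataclass

@dataclass
class Transition:
    txn_id: str
    anchored: bool  # True if already covered by a zkp submitted to L1

def compute_scope(history: list) -> list:
    """Walk backwards from the head of an account's transition history to
    the most recent zkp-anchored transition; everything after it is the
    scope a syncompo txn must supply witness data for and gas-pay."""
    scope = []
    for t in reversed(history):
        if t.anchored:
            break  # anchor reached: earlier transitions need no re-proving
        scope.append(t)
    return list(reversed(scope))
```

Note how lower proof latency moves the anchor closer to the head, shrinking every scope, which is exactly the second role of validity proofs discussed in the next section.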

Validity proofs:

  1. Validity proofs, aka zkp, have two vital roles in this system: communicating the state to L1, and preventing the explosion of txn scopes due to cascading dependencies. The lower the proof latency, the smaller the scopes, and the cheaper and more feasible sync composable txns become.

  2. Each vProg has its own, ideally permissionless, set of provers which advance its state through its L1 covenant. In principle, its provers are able to prove the entire execution of syncompo txns, including segments which belong to other vProgs. A vProg thus controls its own liveness.

  3. To also enable the optimistic case where provers are responsive and collaborative, each prover should be able to submit to L1 conditional proofs regarding its own segment of the execution; once conditional proofs for all of the txn’s components are submitted, these are stitched together into one proof that can advance each party’s covenant. conditional proof := a proof whose input is the state commitment of the (potentially yet-unproven) output of the previous segment of the txn, and which becomes actionable only once the previous segment was proven. See relevant post.
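Stitching conditional proofs amounts to checking that each segment's declared input commitment chains to the previous segment's output commitment; a minimal sketch with hypothetical names (real stitching would of course verify the proofs themselves, not just the commitment chain):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class ConditionalProof:
    vprog: str
    input_commitment: str   # state commitment of the previous segment's output
    output_commitment: str  # state commitment this segment produces

def stitch(proofs: list) -> Optional[Tuple[str, str]]:
    """Stitch per-segment conditional proofs into one end-to-end claim:
    each segment's declared input must equal the previous segment's output.
    Returns the (initial, final) commitments, or None if the chain is broken
    (eg some intermediate segment is missing or mismatched)."""
    for prev, cur in zip(proofs, proofs[1:]):
        if cur.input_commitment != prev.output_commitment:
            return None
    return (proofs[0].input_commitment, proofs[-1].output_commitment)
```

Each party's covenant can then be advanced from the stitched result, without any single prover having to execute foreign segments.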

Economics:

  1. Syncompo txns create two types of externalities on the callee vProg: one is the witness and scope computation, the other is the computation of the new txn, which creates a dependency and hence introduces sequentiality into the computation. In terms of a stateful node of the callee vProg, the former externality is the added CPU cycles in a separate core that the node runs in parallel; the latter is the added computation depth due to dependency/sequentiality. vProgs are advised to define parallelism-aware gas functions, eg the Weighted Area function from this paper.

  2. vProgs which initiate steady, frequent sync composable txns with other vProgs (concretely, syncompo_frequency > 1/proof_latency) might find it beneficial to initiate and fund a continuous account dependency (CAD). When vp1 initiates a CAD from an account A1 it owns to vp2, vp2 continuously monitors and computes the state of A1, an externality that is funded and gas-paid by the CAD issuer. Validity proofs might still be a better approach for the vProg, since they reduce its users’ costs (= size of txn scopes) of interacting with all other vProgs, in one shot.

  3. Online cost-sharing mechanisms can be applied to smooth and share gas costs among syncompo transactors and/or with the CAD issuer. Assuming such a mechanism, the CAD issuer doesn’t need to fully fund the continuous dependency from A1 (a vp1 account) to vp2; rather, it merely places an initial deposit which gets continuously refilled by future transactors.
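As a toy stand-in for a parallelism-aware gas function (not the Weighted Area definition from the referenced paper), one can price total work and added sequential depth separately, so that syncompo txns pay for the sequentiality they introduce; all names and constants here are hypothetical:

```python
def parallelism_aware_gas(total_work: int, added_depth: int,
                          work_price: int, depth_price: int) -> int:
    """Charge for total CPU work (parallelizable across cores) plus a
    separate premium for the sequential depth a dependency adds.
    Prices are set per vProg, per its sovereign gas scale."""
    return total_work * work_price + added_depth * depth_price
```

Under such pricing, a txn that is parallelizable (added_depth = 0) pays only for raw work, while a syncompo txn that lengthens the critical path of the callee's computation pays the depth premium on top.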

Miscellaneous:

  1. The above design is agnostic to the question of how vProgs ensure their state availability. Instead it focuses on the narrower challenge of syncing the (caller vProg) substates relevant to each cross-vProg execution, which are by construction reconstructable from on-chain data, regardless of the availability of the entire state. Observe that loss of full state availability affects no other vProg, per our design (article 10, trustlessness).

  2. In contrast to state availability, vProgs do need a guarantee on the source-code availability of the vProgs with which they are syncomposable. One way to enforce this is through cryptographic proofs of replication, which can attest eg that some fraction of miners hold all relevant vProgs. Such attestation would be used and enforced in the same manner discussed in article 8.

  3. Broadcasting witness data offchain would remove lots of inefficiency but add lots of complexity; if proof latency is too high and witness data too large for feasible high throughput, the complexity might be worth it. One path is to require on-chain witnesses only in case the callee vProg’s proofs are delayed by more than a predetermined threshold. Main caveat: this compromises the design goal of conf times ~ sequencing latency (article 9 above), unless the user of the callee vProg is willing to assume or trust that witness data of the caller vProg will never go missing.

  4. Enshrined covenant: To ease coordination of standards across composable vProgs, it makes sense to develop a canonical covenant which compliant vProgs will instantiate (to enforce eg SBOM, see article 8), and/or to deploy a canonical meta vProg that will handle computation of txn scopes. The details are under research.
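The fallback policy from item 3 is a simple threshold rule; `witness_posting_required` is a hypothetical name and the threshold is a placeholder, per-vProg parameter:

```python
def witness_posting_required(proof_delay_s: float, threshold_s: float) -> bool:
    """Hybrid witness policy: witnesses are broadcast offchain by default,
    and must be posted on-chain only when the callee vProg's proofs lag
    beyond a predetermined threshold."""
    return proof_delay_s > threshold_s
```

A caller interacting with a healthy callee (proofs well within the threshold) thus avoids on-chain witness costs, at the price of the caveat noted above.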

Timeline:

  1. Timeline for yellow paper draft: end of month.

  2. Timeline for production-ready testnet: call for developers to enter the L1<>L2 Telegram channel (@kasparnd) and mention their availability and potential contribution.


Acknowledgment and thanks to @FreshAir08 and @Hans_Moog for extensive discussions, contributions, and constructive criticism on the architecture.


Can’t write in @kaspamd, so replying here:

Igra architecture is fully aligned from the beginning with these ideas, and we will be happy to contribute to the development in any capacity needed.

Can’t wait to get our hands dirty!


Hey.
Love how this design keeps vProgs fully sovereign while enabling native sync composability — a beautiful way to avoid rollup lock-in while unifying L1 liquidity. Which do you see as the bigger challenge in practice: low proof latency, controlling witness/scope size, or standardizing cross-vProg rules — and how will you tackle it?
BR


This proposal offers a strong foundation for synchronous composability in L1, but it needs clarification on risk mitigation mechanisms for lost or corrupted witness data, as well as cross-vProg interoperability standards that guarantee security across different VMs. Furthermore, the gas fee-sharing model should be tested through real-world economic simulations to avoid creating negative incentives like spam transactions or state bloat.