USL Architecture Analysis

An in-depth look at the architecture of π²’s USL


USL serves a wide variety of users within the Computations Layer. As the user base grows, the rate of computation transactions received by USL is expected to increase. Because of this, Sequencers must be capable of efficiently managing many computation transactions.

The proof generation pipeline, run by provers, is expected to be computationally demanding. To process a substantial number of transactions in a reasonable timeframe, proof generation must be as parallelized as possible, both within a single prover and across multiple provers. Additionally, users should be encouraged to contribute to this process to maintain reasonable performance.
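The intra-prover side of this parallelism can be sketched as follows. This is a minimal illustration only: `prove` is a placeholder for the real, computationally heavy proof-generation step, and a production prover would likely use separate processes or machines rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def prove(tx: dict) -> dict:
    # Placeholder for the real proof-generation pipeline; in practice
    # this is a computationally demanding ZK-proving step.
    return {"tx_id": tx["id"], "proof": f"proof-of-{tx['id']}"}

def prove_batch(txs: list, workers: int = 4) -> list:
    # Independent computation transactions can be proved in parallel,
    # within one prover (worker pool) or sharded across provers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(prove, txs))

batch = [{"id": i} for i in range(8)]
proofs = prove_batch(batch)
```

Parallelizing across provers follows the same shape: the batch is partitioned, and each partition is dispatched to a different prover node.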

Regular performance evaluation and monitoring are crucial for identifying potential bottlenecks and implementing performance optimizations.

Blockchain Design

Pi Squared’s USL Network is decentralized, necessitating the design of its underlying protocols. This includes determining the consensus protocol, the block format, and the structure of transactions, among other things.

For instance, a computation transaction may specify certain parameters, such as the semantics to be used and whether concrete or symbolic execution is desired. If these parameters are invalid or unsupported, a π² node may be unable to validate the transaction and, therefore, may have to ignore it. Additionally, we need to develop tools for node management, transaction processing, and performance analysis.
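A minimal sketch of such a parameterized transaction and its validation might look as follows. The field names, the semantics registry, and the execution modes here are hypothetical placeholders, not part of any finalized transaction format.

```python
from dataclasses import dataclass

# Hypothetical registry of semantics a node knows how to execute against.
SUPPORTED_SEMANTICS = {"evm", "wasm"}
EXECUTION_MODES = {"concrete", "symbolic"}

@dataclass
class ComputationTransaction:
    semantics: str   # which formal semantics to use
    mode: str        # concrete or symbolic execution
    payload: bytes   # the computation itself

def validate(tx: ComputationTransaction) -> bool:
    # A node that cannot resolve these parameters has to ignore
    # the transaction rather than process it incorrectly.
    return tx.semantics in SUPPORTED_SEMANTICS and tx.mode in EXECUTION_MODES
```

A transaction naming an unknown semantics would simply fail this check and be ignored by the node.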

Trust base

USL strives to minimize the trust base to only two components:

  1. The formal semantics of the language or system used for computation.

  2. The Zero-Knowledge (ZK) verifier(s) of the ZK systems that generate the ZK proofs.

To ensure their correctness, it is crucial to validate these two components thoroughly, preferably through formal verification. A bug in either component could corrupt executions or allow false proofs to be accepted. Once the Matching Logic proof-checking program is rigorously checked for correctness, its implementation is not anticipated to change. However, the formal specification of a system in K is expected to change as the system evolves over time. Therefore, we need a process that governs the acceptance of a new Matching Logic proof checker implementation, a new formal system specification, or an update to an existing system’s specification into USL.

The π² Network is both decentralized and permissionless. It operates using a consensus protocol, and its security is maintained as long as a supermajority of nodes adheres to the protocol honestly. It is important to note that initial versions of USL, where the π² Network is centralized, will require the operation of trusted nodes.

Provers in the Prover Pool are permissionless, as they generate ZKPs that affirm the mathematical correctness of executions. The π² Network then verifies these. However, it is worth noting that initial USL versions, which use signatures instead of ZKPs, would require trusted provers.

ZK backends

The current architecture assumes a fixed ZK backend to generate the ZK prover and verifier. This backend can either be an off-the-shelf solution, like RISC-Zero or Cairo, that implements the Matching Logic proof checker or a custom ZK circuit. However, the choice of ZK system influences the design and implementation of aggregation (by the aggregator nodes) and verification (by the L1 verifier contract).

Different ZK-based systems use their own ZK technologies, which may not be compatible with each other. As a result, a ZKP certificate generated using the Cairo toolchain might not be verifiable using a RISC-Zero verifier. Generally, when producing certificates, a ZK engine generates a tightly linked prover-verifier pair, which might be incompatible with pairs from other ZK systems or from different instantiations of the same ZK system.

Supporting different ZK systems simultaneously will require further investigation. The following factors need to be considered:

  1. ZKP transactions will likely need to specify which ZK system should be used.

  2. The L1 verifier contract will likely require a different verifier function for each supported ZK type. This could involve a separate verifier contract for each ZK type, among other requirements.

  3. When a node encounters a ZKP certificate, it should identify the correct verifier to use (e.g., by reading the information from the certificate's message) and run it on the certificate.

  4. Certificates that rely on verifiers unknown to USL will fail validation. Since ZKP Certificate verifiers are part of the Sequencers’ node software, adding new verifiers to USL (or upgrading or dropping existing ones) will require node updates.
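The dispatch described in points 3 and 4 can be sketched as a registry of verifiers keyed by ZK system identifier. Everything here is illustrative: the system identifiers, the certificate layout, and the stand-in verifier functions are assumptions, not part of any specified node software.

```python
from typing import Callable, Dict

def verify_cairo(cert: dict) -> bool:
    # Stand-in for a real Cairo verifier.
    return cert.get("proof", "").startswith("cairo:")

def verify_risc_zero(cert: dict) -> bool:
    # Stand-in for a real RISC-Zero verifier.
    return cert.get("proof", "").startswith("risc0:")

# Verifiers bundled with this node's software; adding, upgrading, or
# dropping one requires a node update (and community approval).
VERIFIERS: Dict[str, Callable[[dict], bool]] = {
    "cairo": verify_cairo,
    "risc0": verify_risc_zero,
}

def validate_certificate(cert: dict) -> bool:
    # The certificate's message names its ZK system; certificates whose
    # verifier is unknown to this node fail validation.
    verifier = VERIFIERS.get(cert.get("zk_system", ""))
    return verifier is not None and verifier(cert)
```

A certificate naming an unregistered ZK system is rejected outright, which is exactly the point-4 behavior: the trust base only grows via explicit node updates.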

It is worth noting that introducing a new verifier means accepting a new ZKP system into USL’s trust base. This may best be accomplished in such an open system through a community-managed approval process (like a decentralized autonomous organization or DAO).

ZKP Aggregation

We can include an optional Aggregator Network managed by aggregator nodes. These nodes aggregate a list of ZKPs into a new aggregate ZKP and calculate the overall state delta by combining all block deltas corresponding to this list. They also compute the new state Merkle root based on the aggregated state delta. ZKP aggregation is an optimization technique that significantly helps increase transaction throughput by reducing verification overhead and data requirements in the lower layers.
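The state-side bookkeeping of aggregation (combining block deltas and recomputing the state root) can be sketched as follows. This deliberately elides the cryptographic aggregation of the ZKPs themselves, and the delta representation and toy Merkle construction are assumptions made for illustration.

```python
import hashlib

def combine_deltas(deltas: list) -> dict:
    # Deltas are applied in block order; later writes to the same
    # key overwrite earlier ones.
    combined = {}
    for d in deltas:
        combined.update(d)
    return combined

def merkle_root(state: dict) -> str:
    # Toy Merkle root over sorted key/value leaves (illustrative only).
    leaves = [hashlib.sha256(f"{k}={v}".encode()).digest()
              for k, v in sorted(state.items())]
    if not leaves:
        return hashlib.sha256(b"").hexdigest()
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])  # duplicate the odd leaf out
        leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                  for i in range(0, len(leaves), 2)]
    return leaves[0].hex()

def aggregate(state: dict, block_deltas: list):
    # Overall delta across the aggregated blocks, plus the new state root.
    delta = combine_deltas(block_deltas)
    new_state = {**state, **delta}
    return delta, merkle_root(new_state)
```

The throughput benefit comes from the lower layers verifying one aggregate proof and one combined delta instead of one per block.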

Like the Sequencer Network, the Aggregator Network is decentralized and permissionless, meaning any user can participate. This requires the network to run its own validation and consensus protocol for aggregation.

Reusing Proofs

USL does not currently reuse proofs of previously proved computations or lemmas. This limitation warrants investigation for two primary reasons:

  1. Mathematical proofs are naturally decomposed into subproofs of smaller lemmas.

  2. Performance could improve by reusing existing subproofs instead of re-computing them.

Provers can cache intermediate results and their proofs in local databases. These databases are consulted when the prover attempts to generate a proof for a given computation. Alternatively, we could maintain a shared database of ZK proofs that is accessible to all provers. Another option is for provers to maintain these ZK proofs within the USL blockchain state. There are several design choices to consider and many more details to explore in this regard.
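The local-database option can be sketched as a content-addressed cache consulted before proof generation. The cache keying and the `prove_with_cache` flow are assumptions for illustration; real proof generation replaces the placeholder.

```python
import hashlib
from typing import Dict, Optional

class ProofCache:
    """Local store of previously generated (sub)proofs, keyed by a hash
    of the computation (or lemma) they prove."""

    def __init__(self):
        self._store: Dict[str, str] = {}

    @staticmethod
    def key(computation: str) -> str:
        return hashlib.sha256(computation.encode()).hexdigest()

    def get(self, computation: str) -> Optional[str]:
        return self._store.get(self.key(computation))

    def put(self, computation: str, proof: str) -> None:
        self._store[self.key(computation)] = proof

def prove_with_cache(computation: str, cache: ProofCache) -> str:
    cached = cache.get(computation)
    if cached is not None:
        return cached  # reuse the existing subproof instead of re-proving
    proof = f"proof({computation})"  # placeholder for real proof generation
    cache.put(computation, proof)
    return proof
```

The shared-database and on-chain variants mentioned above would keep the same lookup-before-prove flow but move the store out of the individual prover.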

Alternative Designs

The previous sections described a generic and universal design. It attaches correctness guarantees to all valid blocks of transactions, regardless of their origin. All rollups and execution engines on top can seamlessly utilize the services provided by USL as long as they conform to the Execution Layer Interface.

This interface defines the structure of transactions and their submission process for validation. Existing sequencing layers and any app or rollup directly connecting to USL must be adapted to communicate with USL through the Execution Layer Interface. Regarding the Sequencing Layer, there are at least two less appealing alternatives.

USL as a Rollup on the Sequencing Layer

In this design, the Execution Layer submits transactions to USL, which would then immediately forward them to the Sequencing Layer for sequencing. Once sequenced, the blocks return to USL for validation. Here, they may optionally pass through the prover pipeline to generate ZKPs. USL maintains its own rollup state via the L1 contracts and a Data Availability Layer.

An advantage of this approach is its simpler design, as it does not require changes to the Sequencing Layer. However, a disadvantage is that the pipeline will only apply to our rollup. Other rollups won’t utilize this pipeline.

USL Embedded in Sequencers

In this design, transactions are fully validated during sequencing, which may invoke the prover pipeline. This pipeline is executed using a Prover Pool.

An advantage is that this pipeline is executed for every transaction, regardless of its source. A disadvantage is that it can slow down sequencing. It might even be infeasible for more complex computations.
