Rolldown Solidity contract

The rolldown contract is the main L1 contract.

It provides entry points allowing users to initiate a deposit of either the L1 native currency or an ERC-20 asset. When a deposit is made, user funds are transferred to the rolldown contract, which holds them in custody until a withdrawal is made. Upon successful completion of a deposit, the rolldown contract emits an event containing the deposit information (including amount, asset, and recipient). The event is detected by sequencers, which relay it to the L2 network so that the recipient can be credited.
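To make the flow concrete, the following is a minimal Rust sketch (not Gasp code; the struct, field names, and ledger representation are assumptions for illustration) of the information a sequencer could extract from a deposit event and relay to the L2, where the recipient is credited.

```rust
use std::collections::HashMap;

/// Illustrative shape of a relayed deposit; field names are assumptions.
#[derive(Debug, Clone)]
struct Deposit {
    recipient: [u8; 20], // L1 address to be credited on L2
    token: [u8; 20],     // ERC-20 address (or a sentinel value for the native currency)
    amount: u128,
}

/// Credit a relayed deposit on a toy L2 ledger keyed by (recipient, token).
fn credit(ledger: &mut HashMap<([u8; 20], [u8; 20]), u128>, d: &Deposit) {
    let balance = ledger.entry((d.recipient, d.token)).or_insert(0);
    *balance = balance.saturating_add(d.amount);
}

fn main() {
    let mut ledger = HashMap::new();
    credit(&mut ledger, &Deposit { recipient: [0xaa; 20], token: [0xee; 20], amount: 1_000 });
    assert_eq!(ledger[&([0xaa; 20], [0xee; 20])], 1_000);
}
```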

The contract also receives and stores Merkle tree root hashes from the updater component. The Merkle tree summarizes events happening on Gasp L2 and contains two types of leaves: withdrawal requests to be completed and cancellation requests. Once the Merkle root hash for a set of requests is provided to the rolldown contract, the withdrawal or cancellation requests are finalized individually by providing the request data together with a proof showing its inclusion in the tree. In the case of withdrawal requests, finalization also entails transferring the funds to the withdrawal recipient.
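For illustration, the following Rust sketch shows the general shape of such an inclusion check: the request data is hashed into a leaf, repeatedly combined with the supplied sibling hashes, and the result is compared against the stored root. The hash function is a toy stand-in for the cryptographic hash used on-chain, and the leaf and node encodings are assumptions rather than the contract's actual format.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for the cryptographic hash used on-chain (e.g., keccak256).
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// Hash an internal node from its two children.
fn node(left: u64, right: u64) -> u64 {
    let mut buf = left.to_be_bytes().to_vec();
    buf.extend_from_slice(&right.to_be_bytes());
    h(&buf)
}

/// Recompute the root from a leaf and its sibling path; is_left[i] says
/// whether the i-th sibling sits to the left of the running hash.
fn verify_inclusion(root: u64, leaf: &[u8], siblings: &[u64], is_left: &[bool]) -> bool {
    let mut acc = h(leaf);
    for (sib, left) in siblings.iter().zip(is_left) {
        acc = if *left { node(*sib, acc) } else { node(acc, *sib) };
    }
    acc == root
}

fn main() {
    // Two-leaf tree: root = node(h(request_a), h(request_b)).
    let (a, b) = (b"withdrawal_a".as_slice(), b"cancellation_b".as_slice());
    let root = node(h(a), h(b));
    assert!(verify_inclusion(root, a, &[h(b)], &[false])); // sibling b is on the right
    assert!(verify_inclusion(root, b, &[h(a)], &[true])); // sibling a is on the left
}
```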

Note: In the code revision under review, a Merkle tree root hash submitted by the updater is not verified in any way; as such, the updater is a centralized and trusted system component. This is not reported as a finding since the Gasp team was aware of this limitation beforehand and discussed the work-in-progress state of the components during early stages of the engagement. Their awareness is also made apparent by comments in the rolldown contract update_l1_from_l2 function and by other documentation shared with us, which document the intent to verify the submitted Merkle tree root against the consensus determined by EigenLayer.

Sequencer

The sequencer component is responsible for relaying L1 events to Gasp L2. The Gasp architecture assumes that multiple parties will independently operate their own sequencers.

Sequencers are registered on Gasp L2 by staking assets that can be slashed in case of misbehavior. One sequencer at a time is selected and allowed to propose an update to the L2. This is referred to as giving the selected sequencer a "read right", because the sequencer reads events pending submission from the L1 rolldown contract and relays them to the L2. The events submitted by the sequencer with read rights are put in a queue for a number of blocks. During this time, other sequencers verify the submitted information.

The other sequencers are granted cancel rights that allow them to dispute an event submitted to the L2 by the sequencer with read rights. Sequencers with cancel rights monitor the L1 updates sent to the L2 and verify them against their own view of the L1 state; if the sequencer with read rights is found to be misbehaving, the other sequencers submit a cancellation request within the dispute period, which prevents the incorrect update from taking effect and slashes the misbehaving operator's stake.
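As a sketch of the verification performed by a sequencer holding cancel rights (not Gasp code; the update representation and comparison rule are simplifying assumptions), the proposed update is compared against the sequencer's own view of the L1, and a cancellation is produced on mismatch.

```rust
/// Simplified representation of an L1 update relayed to the L2.
#[derive(Debug, Clone, PartialEq)]
struct L1Update {
    deposits: Vec<(u64, u128)>, // (request id, amount) pairs, heavily simplified
}

enum Verdict {
    Agree,            // the proposed update matches the local L1 view
    Cancel(L1Update), // dispute it, providing the locally derived update
}

/// A sequencer with cancel rights checks the proposed update against its own
/// view of the L1 state derived from the rolldown contract.
fn verify_update(proposed: &L1Update, local_view: &L1Update) -> Verdict {
    if proposed == local_view {
        Verdict::Agree
    } else {
        // A successful cancellation prevents the update from taking effect
        // and leads to slashing of the misbehaving sequencer's stake.
        Verdict::Cancel(local_view.clone())
    }
}

fn main() {
    let local = L1Update { deposits: vec![(1, 100)] };
    let forged = L1Update { deposits: vec![(1, 1_000_000)] };
    assert!(matches!(verify_update(&local, &local), Verdict::Agree));
    assert!(matches!(verify_update(&forged, &local), Verdict::Cancel(_)));
}
```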

Gasp collator node and rolldown pallet

Gasp L2 is built using the Substrate framework, including several off-the-shelf pallets. Our review focused on the bespoke rolldown pallet, which handles deposits and withdrawals.

The update_l2_from_l1 extrinsic can be invoked by the sequencer with read rights to provide an L1 update (including information about a deposit made on an L1 chain). The update is kept pending for a dispute period, during which other sequencers can invoke the cancel_requests_from_l1 extrinsic to cancel the update if they detect that the update does not correctly reflect the L1 state.

After the dispute period has passed, pending updates are processed, eventually leading to the minting of the tokens representing the deposit to the intended recipient. The tokens can then be traded using the AMM functionality exposed by the standard xyk pallet.
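The following toy model (not the pallet's actual storage or logic; the dispute-period value and update shape are assumptions) illustrates how a pending update can sit in a queue for a dispute period and, if it was not cancelled, be executed by minting the deposited amount.

```rust
/// Simplified pending L1 update awaiting the end of its dispute period.
struct PendingUpdate {
    submitted_at: u64, // L2 block number at which update_l2_from_l1 was accepted
    cancelled: bool,   // set if a cancel_requests_from_l1 dispute succeeded
    mint: (u64, u128), // (recipient id, amount) to mint, heavily simplified
}

const DISPUTE_PERIOD: u64 = 10; // in L2 blocks; the real value is a runtime parameter

/// Execute every matured, non-cancelled update and return the minted amounts.
fn process_pending(now: u64, queue: &mut Vec<PendingUpdate>) -> Vec<(u64, u128)> {
    let mut minted = Vec::new();
    queue.retain(|u| {
        let matured = now >= u.submitted_at + DISPUTE_PERIOD;
        if matured && !u.cancelled {
            minted.push(u.mint); // in the pallet, this credits the recipient's L2 tokens
        }
        !matured // keep only updates still inside their dispute period
    });
    minted
}

fn main() {
    let mut queue = vec![
        PendingUpdate { submitted_at: 0, cancelled: false, mint: (1, 500) },
        PendingUpdate { submitted_at: 5, cancelled: false, mint: (2, 700) },
    ];
    assert_eq!(process_pending(10, &mut queue), vec![(1, 500)]); // only the first has matured
    assert_eq!(queue.len(), 1); // the second remains pending
}
```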

The rolldown pallet exposes a withdraw extrinsic that can be used by token owners to initiate a withdrawal of the L2 tokens back to the L1. Withdrawal requests are queued up and normally processed in batches. A batch of withdrawals is created automatically after a configurable number of L2 blocks have elapsed or manually, at the explicit request of an L2 user, via the create_batch extrinsic. A cryptographic commitment to the L2 updates contained in each batch is relayed back to the L1 in the form of a Merkle tree root hash.
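As a rough sketch of the batching logic (assumptions: the trigger rule, the leaf encoding, and a toy hash in place of the cryptographic hash actually used for the commitment), a batch is closed either after a block interval or on explicit request, and a Merkle root is computed over its withdrawal leaves.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for the cryptographic hash used for the on-chain commitment.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// Compute a toy Merkle root over the (non-empty) withdrawal leaves of a batch,
/// duplicating the last node whenever a level has odd length.
fn merkle_root(leaves: &[Vec<u8>]) -> u64 {
    let mut level: Vec<u64> = leaves.iter().map(|l| h(l)).collect();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let right = *pair.get(1).unwrap_or(&pair[0]);
                let mut buf = pair[0].to_be_bytes().to_vec();
                buf.extend_from_slice(&right.to_be_bytes());
                h(&buf)
            })
            .collect();
    }
    level[0]
}

/// A new batch is due after a configurable number of blocks or on explicit
/// request (mirroring the create_batch extrinsic).
fn batch_due(blocks_since_last: u64, interval: u64, explicit_request: bool) -> bool {
    explicit_request || blocks_since_last >= interval
}

fn main() {
    let withdrawals = vec![b"w1".to_vec(), b"w2".to_vec(), b"w3".to_vec()];
    if batch_due(32, 32, false) {
        println!("batch commitment: {:#x}", merkle_root(&withdrawals));
    }
}
```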

Administrative extrinsics

The pallet exposes several extrinsics for administrative purposes, which can only be invoked by the root origin. These extrinsics make it possible to forcefully submit a deposit, cancel an L1 update, force the creation of a batch of updates to be sent to the L1, and trigger a refund of a failed deposit (e.g., one that failed due to an overflow or another error while minting the L2 tokens).

The existence of these administrative extrinsics further increases the importance of ensuring that access to the root origin cannot be abused, as it effectively allows sidestepping the distributed validation otherwise ensured by the multiple independent sequencers. In the Substrate configuration under review, the root origin is accessible to the members of a council via the sudo pallet. Both the council and the sudo functionality are implemented by default pallets. Since the project is not deployed yet, we did not evaluate how the council is managed and who its members are.

Aggregator

The aggregator component detects when a batch of L2 updates is pending and invokes the AVS Task Manager contract to create a task asking operators to provide a signed witness attesting to the validity of the batch. Note that the aggregator is in charge of determining the number of operators needed to reach the quorum. This design choice entrusts the aggregator with great power, as the number of operators needed to reach the quorum is fundamental to the security of the system.

EigenLayer operators participating in Gasp consensus will pick up the task, independently derive the L2 state by executing the relevant block, and provide a signed attestation of the state they derived.

The signed attestation is submitted by the operators to an endpoint exposed by the aggregator. The aggregator service accumulates the received signatures (delegating cryptographic operations to the off-the-shelf EigenLayer BLS signatures aggregator). When enough signatures attesting the same state have been collected and a quorum is reached, the signatures (in the form of a single aggregate BLS signature) are submitted to the L1 Task Manager contract, which verifies the aggregate signature and marks the task as finalized.
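A minimal sketch of this accumulation step (assuming a simple per-state weight map; in the real system the signatures are BLS signatures aggregated by the off-the-shelf EigenLayer component) is shown below.

```rust
use std::collections::HashMap;

/// Tracks how much operator weight has attested to each candidate state.
struct QuorumTracker {
    threshold: u128,                     // total stake weight needed for quorum
    weight_by_state: HashMap<u64, u128>, // attested state hash -> accumulated weight
}

impl QuorumTracker {
    fn new(threshold: u128) -> Self {
        Self { threshold, weight_by_state: HashMap::new() }
    }

    /// Record one operator's attestation; returns the finalized state hash once
    /// enough weight agrees on the same state.
    fn record(&mut self, state_hash: u64, operator_weight: u128) -> Option<u64> {
        let w = self.weight_by_state.entry(state_hash).or_insert(0);
        *w += operator_weight;
        (*w >= self.threshold).then_some(state_hash)
    }
}

fn main() {
    let mut tracker = QuorumTracker::new(67);
    assert_eq!(tracker.record(0xabc, 40), None); // not enough weight yet
    assert_eq!(tracker.record(0xdef, 10), None); // a disagreeing attestation
    assert_eq!(tracker.record(0xabc, 30), Some(0xabc)); // quorum reached on 0xabc
}
```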

Kicker thread

The aggregator is also responsible for holding EigenLayer operators accountable for inactivity. A thread periodically checks which operators have not participated in any of the last N (configurable) blocks. Any inactive operator found is ejected from the list of operators associated with the Gasp AVS.
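A simplified version of that check (the participation representation is an assumption; the real thread queries operator state via the EigenLayer contracts) could look as follows.

```rust
use std::collections::HashSet;

/// Return the registered operators that did not participate in any of the
/// last N blocks and are therefore candidates for ejection.
fn find_inactive(
    registered: &HashSet<&str>,
    participation_per_block: &[HashSet<&str>], // one entry per recent block, length N
) -> Vec<String> {
    // Union of every operator seen in at least one of the recent blocks.
    let seen: HashSet<&str> = participation_per_block.iter().flatten().copied().collect();
    let mut inactive: Vec<String> =
        registered.difference(&seen).map(|op| op.to_string()).collect();
    inactive.sort();
    inactive
}

fn main() {
    let registered: HashSet<&str> = ["op1", "op2", "op3"].into_iter().collect();
    let history = vec![
        ["op1"].into_iter().collect::<HashSet<_>>(),
        ["op1", "op3"].into_iter().collect(),
    ];
    // op2 never participated, so it would be ejected from the AVS operator set.
    assert_eq!(find_inactive(&registered, &history), vec!["op2".to_string()]);
}
```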

EigenLayer AVS implementation contracts

Gasp uses EigenLayer to establish a consensus about the L2 state on the L1. Integrating with EigenLayer requires implementing several contracts. Our review included the following:

  • FinalizerServiceManager. The AVS service manager handles operator registration and deregistration; this contract extends the default EigenLayer service manager to add a waiting period before an operator can register after updating their stake. It also implements operator ejection, which allows a privileged address (currently the aggregator service) to forcibly remove an operator.

  • FinalizerTaskManager. This contract manages the life cycle of a task. New tasks can be created by the aggregator (or forcefully created by the contract owner). A response can then be submitted by the aggregator, and the task is marked as successfully completed if the weight of the operators that have signed the response reaches the required quorum. The contract owner can also force a response. (A simplified model of this life cycle is sketched after this list.)

  • OperatorStateRetrieverExtended. This contract inherits from EigenLayer OperatorStateRetriever. It implements some utility methods used by the aggregator to get the state of an operator.
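The sketch below models the task life cycle in simplified form; the states, quorum check, and forced completion are an illustrative abstraction of the contract's behavior, not its actual storage layout.

```rust
/// Illustrative task states: created, responded below quorum, or completed.
#[derive(Debug, PartialEq)]
enum TaskState {
    Created,
    Responded { signed_weight: u128 },
    Completed { forced: bool },
}

struct Task {
    quorum_weight: u128,
    state: TaskState,
}

impl Task {
    fn new(quorum_weight: u128) -> Self {
        Self { quorum_weight, state: TaskState::Created }
    }

    /// Called when the aggregator submits a response backed by some signed weight.
    fn respond(&mut self, signed_weight: u128) {
        self.state = if signed_weight >= self.quorum_weight {
            TaskState::Completed { forced: false }
        } else {
            TaskState::Responded { signed_weight }
        };
    }

    /// The contract owner can force an outcome regardless of quorum.
    fn force_complete(&mut self) {
        self.state = TaskState::Completed { forced: true };
    }
}

fn main() {
    let mut task = Task::new(100);
    task.respond(60);
    assert_eq!(task.state, TaskState::Responded { signed_weight: 60 });
    task.respond(120);
    assert_eq!(task.state, TaskState::Completed { forced: false });

    let mut stuck = Task::new(100);
    stuck.force_complete();
    assert_eq!(stuck.state, TaskState::Completed { forced: true });
}
```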

Finalizer

One instance of the finalizer component is run by each AVS operator. The finalizer retrieves AVS tasks from the task manager contract. It reproduces the L2 state by independently reexecuting the relevant block. The execution results (hashes representing the L2 state) are signed and submitted back to the aggregator service.
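Schematically (with a toy hash and a placeholder in place of the real BLS signature; the types and field names are assumptions), the response produced by a finalizer for one task might look like this.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for the hashes and BLS signature used by the real finalizer.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// What an operator sends back to the aggregator for one task: the state it
/// derived by re-executing the block, plus a signature over that state.
struct Attestation {
    task_id: u64,
    state_hash: u64,
    signature: u64, // placeholder for a BLS signature
}

/// Derive the state hash from re-execution outputs and "sign" it with a toy key.
fn attest(task_id: u64, execution_outputs: &[&[u8]], operator_key: u64) -> Attestation {
    let mut buf = Vec::new();
    for out in execution_outputs {
        buf.extend_from_slice(out);
    }
    let state_hash = h(&buf);
    let mut signed = state_hash.to_be_bytes().to_vec();
    signed.extend_from_slice(&operator_key.to_be_bytes()); // NOT real cryptography
    Attestation { task_id, state_hash, signature: h(&signed) }
}

fn main() {
    let a = attest(7, &[b"state_root", b"withdrawals_root"], 0x5eed);
    println!("task {}: state {:#x}, sig {:#x}", a.task_id, a.state_hash, a.signature);
}
```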

Note: The finalizer service does not currently include the Merkle tree root hash in the response it provides for a given task. As discussed in previous sections, this is not reported as a finding due to the clear awareness of the Gasp team about this issue and the work-in-progress state of the codebase.

Updater

The updater component is responsible for submitting Merkle root hashes of the trees that contain withdrawal and cancel requests to the L1 rolldown contract.

The updater subscribes to events emitted by the FinalizerTaskManager contract, signaling a task was completed (meaning a quorum of operators submitted a matching response).

Note: The updater is currently implemented as a centralized, trusted component, as the provided Merkle root is not validated in any way by the rolldown contract. This is not reported as a finding due to the work-in-progress state of the project. The Gasp team was aware of this limitation and mentioned it in early conversations. This is also made apparent by comments in the rolldown contract update_l1_from_l2 function, which document the intent to verify the submitted Merkle tree root.

Gasp plans to modify the finalizer, task manager, and rolldown contract components to include the Merkle tree root hash in the response provided by operators to AVS tasks. This would allow the rolldown contract to verify the Merkle tree root hash directly against the information submitted to the EigenLayer contracts, removing the reliance on the updater as a trusted component.
