Lack of duplicate rootId check in scheduleRemovals()

The scheduleRemovals() function of the PDPVerifier contract allows a proof-set owner to enqueue rootIds whose roots will be removed during the next nextProvingPeriod() call. However, the function does not check the supplied rootIds for uniqueness, so the same rootId can be enqueued multiple times.

If the proof-set owner accidentally provides duplicate rootIds, the correctness of removeRoots() is unaffected, since sumTreeRemove() does not modify state when processing an already removed root. However, the redundant entries still consume gas during nextProvingPeriod() execution, making removal processing needlessly expensive.

function scheduleRemovals(uint256 setId, uint256[] calldata rootIds, bytes calldata extraData) public {
    require(extraData.length <= EXTRA_DATA_MAX_SIZE, "Extra data too large");
    require(proofSetLive(setId), "Proof set not live");
    require(proofSetOwner[setId] == msg.sender, "Only the owner can schedule removal of roots");
    require(rootIds.length + scheduledRemovals[setId].length <= MAX_ENQUEUED_REMOVALS, "Too many removals wait for next proving period to schedule");

    for (uint256 i = 0; i < rootIds.length; i++){
        require(rootIds[i] < nextRootId[setId], "Can only schedule removal of existing roots");
        scheduledRemovals[setId].push(rootIds[i]);
    }
    [...]
}
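For illustration, one possible filtering approach — a sketch, not part of the audited contract — is a linear scan of the existing queue before each push. Note that each scan performs storage reads proportional to the queue length (bounded by MAX_ENQUEUED_REMOVALS), which is exactly the common-case cost the team's response below cites as the reason for not filtering.

```solidity
// Sketch only: deduplicating variant of the scheduling loop.
// The nested scan is O(rootIds.length * queue length) in storage reads,
// so the common (duplicate-free) case becomes more expensive.
for (uint256 i = 0; i < rootIds.length; i++) {
    require(rootIds[i] < nextRootId[setId], "Can only schedule removal of existing roots");
    bool alreadyScheduled = false;
    for (uint256 j = 0; j < scheduledRemovals[setId].length; j++) {
        if (scheduledRemovals[setId][j] == rootIds[i]) {
            alreadyScheduled = true;
            break;
        }
    }
    if (!alreadyScheduled) {
        scheduledRemovals[setId].push(rootIds[i]);
    }
}
```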

Remediation

Filecoin provided the following response:

Since duplicate removal messages is avoidable user error we’d prefer to make this case expensive for the tradeoff of not doing duplicate removal schedule filtering and making the common case slightly less expensive.

Zellic © 2025