Category: Coding Mistakes

Range check in unsafe_evaluate_multiply_add may be ineffective

Critical Severity
Critical Impact
High Likelihood

Description

This finding concerns the unsafe_evaluate_multiply_add and the unsafe_evaluate_multiple_multiply_add functions when HasPlookup<Builder> is true. We discuss the case of unsafe_evaluate_multiply_add; the situation for unsafe_evaluate_multiple_multiply_add is analogous.

The signature of this first function is

void bigfield<Builder, T>::unsafe_evaluate_multiply_add(const bigfield& input_left, const bigfield& input_to_mul, const std::vector<bigfield>& to_add, const bigfield& input_quotient, const std::vector<bigfield>& input_remainders)

and the function is intended to check that input_left*input_to_mul + (to_add[0] + .. + to_add[-1]) - input_quotient*modulus - (input_remainders[0] + .. + input_remainders[-1]) = 0 modulo the CRT modulus 2^(4·NUM_LIMB_BITS) · n, where n is the native circuit modulus. To carry out this check, congruence modulo 2^(4·NUM_LIMB_BITS) and modulo n is checked separately, and for the former check, evaluate_non_native_field_multiplication is used:

const auto [lo_idx, hi_idx] = ctx->evaluate_non_native_field_multiplication(witnesses, false);

As described in section ref↗, that function does not itself constrain the identity to hold, but it is required that the caller perform a range check to verify that the two return values are at most 251264705520246785319773662051664216, a 118-bit number.

The prime-basis identity and the subsequent range check are performed as follows:

bb::fr neg_prime = -bb::fr(uint256_t(target_basis.modulus));
field_t<Builder>::evaluate_polynomial_identity(left.prime_basis_limb, to_mul.prime_basis_limb, quotient.prime_basis_limb * neg_prime, -remainder_prime_limb);

field_t lo = field_t<Builder>::from_witness_index(ctx, lo_idx);
field_t hi = field_t<Builder>::from_witness_index(ctx, hi_idx);
const uint64_t carry_lo_msb = max_lo_bits - (2 * NUM_LIMB_BITS);
const uint64_t carry_hi_msb = max_hi_bits - (2 * NUM_LIMB_BITS);

// if both the hi and lo output limbs have less than 70 bits, we can use our custom
// limb accumulation gate (accumulates 2 field elements, each composed of 5 14-bit limbs, in 3 gates)
if (carry_lo_msb <= 70 && carry_hi_msb <= 70) {
    ctx->range_constrain_two_limbs(
        hi.witness_index, lo.witness_index, size_t(carry_lo_msb), size_t(carry_hi_msb));
} else {
    ctx->decompose_into_default_range(hi.normalize().witness_index, carry_hi_msb);
    ctx->decompose_into_default_range(lo.normalize().witness_index, carry_lo_msb);
}

As can be seen here, the values are range checked to max_lo_bits - (2 * NUM_LIMB_BITS) and max_hi_bits - (2 * NUM_LIMB_BITS) bits, respectively. Should those widths be 118 bits or larger, an overflow becomes possible, breaking the soundness of the identity check. Thus, for correct functioning, it is necessary that max_lo_bits and max_hi_bits be at most 253.

These are defined earlier in the code as follows:

const uint512_t max_lo = max_r0 + (max_r1 << NUM_LIMB_BITS) + max_a0;
const uint512_t max_hi = max_r2 + (max_r3 << NUM_LIMB_BITS) + max_a1;

uint64_t max_lo_bits = (max_lo.get_msb() + 1);
uint64_t max_hi_bits = max_hi.get_msb() + 1;
if ((max_lo_bits & 1ULL) == 1ULL) {
    ++max_lo_bits;
}
if ((max_hi_bits & 1ULL) == 1ULL) {
    ++max_hi_bits;
}

Before the parity increments here, the two values may be at most 252, so max_lo and max_hi may have a bit width of at most 252. This in turn implies that, for soundness, max_r1 and max_r3 must be at most 184 bits wide, since each is shifted up by NUM_LIMB_BITS = 68 bits.

These are defined as follows:

   uint512_t max_b0 = (left.binary_basis_limbs[1].maximum_value * to_mul.binary_basis_limbs[0].maximum_value);
   max_b0 += (neg_modulus_limbs_u256[1] * quotient.binary_basis_limbs[0].maximum_value);
   uint512_t max_b1 = (left.binary_basis_limbs[0].maximum_value * to_mul.binary_basis_limbs[1].maximum_value);
   max_b1 += (neg_modulus_limbs_u256[0] * quotient.binary_basis_limbs[1].maximum_value);
   uint512_t max_c0 = (left.binary_basis_limbs[1].maximum_value * to_mul.binary_basis_limbs[1].maximum_value);

   // ...

   uint512_t max_d0 = (left.binary_basis_limbs[3].maximum_value * to_mul.binary_basis_limbs[0].maximum_value);
   max_d0 += (neg_modulus_limbs_u256[3] * quotient.binary_basis_limbs[0].maximum_value);
   uint512_t max_d1 = (left.binary_basis_limbs[2].maximum_value * to_mul.binary_basis_limbs[1].maximum_value);
   max_d1 += (neg_modulus_limbs_u256[2] * quotient.binary_basis_limbs[1].maximum_value);
   uint512_t max_d2 = (left.binary_basis_limbs[1].maximum_value * to_mul.binary_basis_limbs[2].maximum_value);
   max_d2 += (neg_modulus_limbs_u256[1] * quotient.binary_basis_limbs[2].maximum_value);
   uint512_t max_d3 = (left.binary_basis_limbs[0].maximum_value * to_mul.binary_basis_limbs[3].maximum_value);
   max_d3 += (neg_modulus_limbs_u256[0] * quotient.binary_basis_limbs[3].maximum_value);

   // ...

   const uint512_t max_r1 = max_b0 + max_b1;
   const uint512_t max_r2 = max_c0 + max_c1 + max_c2;
   const uint512_t max_r3 = max_d0 + max_d1 + max_d2 + max_d3;

Now if left.binary_basis_limbs[1].maximum_value and to_mul.binary_basis_limbs[0].maximum_value are each 117 bits wide, then max_b0 and hence max_r1 can be 233 bits wide or more, well above 184. Similarly, if left.binary_basis_limbs[1].maximum_value and to_mul.binary_basis_limbs[2].maximum_value are each 117 bits wide, then max_d2 and hence max_r3 can be 233 bits wide or more.

It would appear that limbs of such width are intended to be supported by unsafe_evaluate_multiply_add, given that such limbs pass reduction_check. A call of unsafe_evaluate_multiply_add with inputs triggering this bug can, in particular, be reached through operator*.

Impact

It is possible to prove incorrect multiply-add identities for some bigfield elements. In particular, it is possible to prove incorrect results for the product of some bigfield elements that have sufficiently large limbs.

Recommendations

As a first measure, assert that max_lo_bits and max_hi_bits are at most 253. This converts the soundness issue into a completeness issue. To then restore completeness, it may be possible to keep the code for unsafe_evaluate_multiply_add and merely impose stricter assumptions on the inputs; otherwise, changes to the function's logic would be required.

Remediation

Zellic © 2025