Precision loss in reward rate calculation
Description
When calculating the effective reward rate, the effective_reward_rate
function uses an order of operations that loses precision: each integer division truncates its intermediate result, and the current code divides before the final multiplications. We recommend multiplying before dividing in cases where there is no risk of overflow, so that truncation occurs only once, at the end of the computation.
Impact
The computed effective reward rate may be slightly lower than intended, since each truncating division rounds the intermediate result down.
Recommendations
Change the order of the following operations:
fun effective_reward_rate(
    stats_config: &StatsConfig,
    rewards: u128,
    balance_at_last_update: u128,
    time_delta: u128,
): u128 {
-   (rewards * stats_config.rate_normalizer / balance_at_last_update) *
-       stats_config.time_normalizer / time_delta
+   (rewards * stats_config.rate_normalizer * stats_config.time_normalizer) /
+       (balance_at_last_update * time_delta)
}
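To make the truncation concrete, the following is a minimal Rust sketch (using `u128`, matching the Move types) comparing the two orderings. The function names and input values are ours, chosen for illustration, not taken from the audited code.

```rust
// Sketch (not the audited source): both orderings of the reward-rate
// computation. Parameters mirror those in the finding.

fn rate_divide_early(rewards: u128, rate_norm: u128, time_norm: u128,
                     balance: u128, time_delta: u128) -> u128 {
    // Original order: divides before the second multiplication,
    // so the result is truncated twice.
    (rewards * rate_norm / balance) * time_norm / time_delta
}

fn rate_multiply_first(rewards: u128, rate_norm: u128, time_norm: u128,
                       balance: u128, time_delta: u128) -> u128 {
    // Recommended order: all multiplications first, then a single
    // truncating division.
    (rewards * rate_norm * time_norm) / (balance * time_delta)
}

fn main() {
    // Illustrative values chosen so the divisions do not come out even.
    let (rewards, balance, time_delta) = (7u128, 3u128, 3u128);
    let (rate_norm, time_norm) = (1_000u128, 1_000u128);

    let early = rate_divide_early(rewards, rate_norm, time_norm, balance, time_delta);
    let late = rate_multiply_first(rewards, rate_norm, time_norm, balance, time_delta);

    // early = 777_666, late = 777_777: dividing early loses precision.
    println!("divide early: {early}, multiply first: {late}");
}
```

With these inputs the early-division order yields 777,666 while multiplying first yields 777,777, showing the downward bias described under Impact.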
Remediation
In response to this finding, Move Labs noted that:
We have two normalizers just so that we can have double control over precision. rate_normalizer will be as large as possible while still ensuring no overflow in the first mul_div. Then time_normalizer can be any other reasonable value for precision.
Multiplying the normalizers first, as in the recommendation, is the same as using just one normalizer. We hope to gain additional precision, if necessary, by using two normalizers.
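The trade-off Move Labs describes can be sketched in Rust with hypothetical magnitudes (the values below are ours, chosen only to make the bit arithmetic visible): when both normalizers are large, the combined product can exceed `u128`, while the staged form that divides between the two normalizers stays in range.

```rust
// Sketch of the overflow trade-off: combined multiplication of both
// normalizers overflows u128, while the staged order does not.
// All magnitudes are illustrative assumptions, not audited values.

fn main() {
    let rewards: u128 = 1 << 60;
    let balance: u128 = 1 << 40;
    let time_delta: u128 = 1 << 20;
    let rate_norm: u128 = 1 << 40; // sized so rewards * rate_norm fits in u128
    let time_norm: u128 = 1 << 40;

    // Combined product needs 60 + 40 + 40 = 140 bits, exceeding u128.
    let combined = rewards
        .checked_mul(rate_norm)
        .and_then(|x| x.checked_mul(time_norm));
    assert!(combined.is_none());

    // Staged order stays in range: the intermediate division shrinks the
    // value (to 2^60 here) before the second normalizer is applied.
    let staged = rewards * rate_norm / balance * time_norm / time_delta;
    println!("staged result: {staged}"); // 2^80
}
```

This is why the team prefers two separately tuned normalizers: rate_normalizer is pushed as high as the first mul_div allows without overflow, and time_normalizer adds further precision in the second stage.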