
Conversation

@dynco-nym dynco-nym commented Dec 11, 2025

New APIs

  • define measurement
  • retire measurement

Existing APIs

  • extended so that measurement kind needs to be specified
  • all performance metrics now need to be defined under some measurement kind

To do

  • more unit testing of edge cases
  • handle the staleness check before deploying (noted in a code comment)


Commits

  • try_load_performance returns per kind
  • load_measurement_kind & unit test
  • Naming
  • stale submission unit test & note
@dynco-nym dynco-nym requested a review from jstuczyn December 11, 2025 15:54
@github-actions commented

Thank you for making this first PR

```toml
"common/nym-common",
"common/config",
"common/cosmwasm-smart-contracts/coconut-dkg",
"contracts/performance",
```
Contributor:

Don't add it to the main workspace: all contracts have their own sub-workspace.

```rust
let mut tester = init_contract_tester();
let admin = tester.admin_msg();
let new_measurement = MeasurementKind::from("new-measurement");
```

Contributor:

Nit-ish: to actually validate that it requires admin privileges, you also have to check that it errors out when called from any other account.
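A minimal stdlib sketch of that check; the names (`define_measurement`, `ContractError`) are invented for illustration, not taken from the contract:

```rust
// Hypothetical admin-gated execute handler, reduced to the access check.
#[derive(Debug, PartialEq)]
enum ContractError {
    Unauthorized,
}

fn define_measurement(sender: &str, admin: &str) -> Result<(), ContractError> {
    if sender != admin {
        return Err(ContractError::Unauthorized);
    }
    Ok(())
}

fn main() {
    // The happy path alone doesn't prove the admin gate works...
    assert!(define_measurement("admin", "admin").is_ok());
    // ...so also assert that any other account is rejected.
    assert_eq!(
        define_measurement("random-account", "admin"),
        Err(ContractError::Unauthorized)
    );
}
```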

```rust
let mut non_existent_measurement_kind = Vec::new();
```

```rust
// 3. submit it
if self.node_bonded(deps.as_ref(), first.node_id)? {
```
Contributor:

This is just me thinking aloud: would it perhaps simplify things if a batch submission were always guaranteed to be for one kind at a time?
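One way that could look (a sketch; all type and field names are hypothetical): lifting the kind out of the per-node entries makes "one kind per batch" true by construction rather than something to validate:

```rust
// Hypothetical batch message with the measurement kind at the batch level.
struct NodePerformance {
    node_id: u32,
    score: u64,
}

struct BatchSubmitPerformance {
    // One kind for the whole batch: no per-entry kind field to clone or check.
    measurement_kind: String,
    data: Vec<NodePerformance>,
}

fn main() {
    let batch = BatchSubmitPerformance {
        measurement_kind: "uptime".to_string(),
        data: vec![
            NodePerformance { node_id: 1, score: 98 },
            NodePerformance { node_id: 2, score: 87 },
        ],
    };
    assert_eq!(batch.measurement_kind, "uptime");
    assert_eq!(batch.data.len(), 2);
}
```

The trade-off is that submitters with mixed-kind data have to send one message per kind.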


```diff
- let key = (epoch_id, data.node_id);
+ let key = (epoch_id, data.node_id, data.measurement_kind.clone());
  let updated = match self.results.may_load(storage, key)? {
```
Contributor:

You could remove all the kind cloning all over the place if you changed your map key to `pub(crate) results: Map<(EpochId, NodeId, &'static str), NodeResults>`.

Contributor Author:

I don't think you can do that: this string is user-defined at runtime, so you can't store it in a `&'static str`.
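A small stdlib illustration of that constraint: the only way to turn a runtime `String` into a `&'static str` is to leak its allocation, which a contract storing arbitrarily many user-defined kinds cannot afford (helper name `to_static` is invented):

```rust
// A runtime-provided String can only become &'static str by leaking:
// the allocation then lives for the rest of the process lifetime.
fn to_static(kind: String) -> &'static str {
    Box::leak(kind.into_boxed_str())
}

fn main() {
    let kind = String::from("new-measurement"); // user-defined at runtime
    let leaked: &'static str = to_static(kind);
    assert_eq!(leaked, "new-measurement");
    // Every call leaks one allocation, so this is not viable per submission.
}
```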

```diff
  .min(retrieval_limits::NODE_EPOCH_PERFORMANCE_MAX_LIMIT) as usize;

- let start = start_after.map(Bound::exclusive);
+ let start = start_after.map(|node_id| Bound::exclusive((node_id + 1, String::new())));
```
Contributor:

  1. Wait, why are we skipping one node?
  2. If you have to provide a rather opaque value to the bound, i.e. `String::new()`, perhaps the whole query signature should be changed? Look at the queries in other contracts: for pagination, the caller is responsible for providing all elements of the subkey.

Contributor Author:

`String::new()` is intended to start at `""` onwards for `MeasurementKind`, i.e. it's supposed to capture all measurement kinds under a given node & epoch.

In the contract's other pagination cases, you can "cut" anywhere across nodes & epochs. This signature guarantees that you cannot cut across the measurement kinds of a single node.

Contributor Author:

I don't see how this would be more ergonomic to paginate through:

page 1

```text
(epoch_1, node_1, measurement_1)
(epoch_1, node_1, measurement_2)
(epoch_1, node_1, measurement_3)
(epoch_1, node_2, measurement_1)
(epoch_1, node_2, measurement_2)
```

page 2

```text
(epoch_1, node_2, measurement_3)
(epoch_1, node_3, measurement_1)
(epoch_1, node_3, measurement_2)
(epoch_1, node_3, measurement_3)
(epoch_1, node_4, measurement_1)
```

But I'm open to it if you think it's more valuable.
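The `(node_id + 1, String::new())` bound can be modelled with a plain `BTreeMap`, which sorts composite keys the same way (a sketch; the key layout mirrors the contract map, the helper name `page_after` is invented). The empty string sorts before any real kind, so iteration resumes at the first kind of the next node and never cuts mid-node:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

// Flattened storage model: (node_id, measurement_kind) -> score.
fn page_after(
    results: &BTreeMap<(u32, String), u64>,
    start_after: u32,
) -> Vec<(u32, String)> {
    // Exclusive bound at (start_after + 1, ""): "" sorts before any real
    // kind, so every kind of node start_after + 1 is still included.
    let bound = Bound::Excluded((start_after + 1, String::new()));
    results
        .range((bound, Bound::Unbounded))
        .map(|((node, kind), _)| (*node, kind.clone()))
        .collect()
}

fn main() {
    let mut results = BTreeMap::new();
    for node in 1..=3u32 {
        for kind in ["measurement_1", "measurement_2"] {
            results.insert((node, kind.to_string()), 100);
        }
    }
    let page = page_after(&results, 1);
    // start_after = node 1: resumes at node 2's first measurement kind.
    assert_eq!(page[0], (2, "measurement_1".to_string()));
}
```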

```rust
// because the API aggregates per NodeId and the storage doesn't, we have to
// first collect all the different measurements for a node and use an
// intermediary struct to map from storage to the object returned by the API
let mut measurements_per_node: HashMap<NodeId, Vec<(MeasurementKind, NodeResults)>> =
```
Contributor:

I think we're just overcomplicating it here. We should just read the data as it is in storage and let the caller sort it themselves.

Contributor Author:

The storage layout is a bit clunky due to cw-storage constraints. Don't we care about the API being nicer?
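The aggregation step under discussion amounts to grouping flat storage rows per node; a stdlib sketch (the helper name `group_per_node` and the tuple shape are invented for illustration):

```rust
use std::collections::HashMap;

// Storage rows are flat (node_id, kind, score) tuples; the API groups
// them so each node appears once with all of its measurement kinds.
fn group_per_node(
    rows: Vec<(u32, String, u64)>,
) -> HashMap<u32, Vec<(String, u64)>> {
    let mut per_node: HashMap<u32, Vec<(String, u64)>> = HashMap::new();
    for (node_id, kind, score) in rows {
        per_node.entry(node_id).or_default().push((kind, score));
    }
    per_node
}

fn main() {
    let rows = vec![
        (1, "uptime".to_string(), 99),
        (1, "latency".to_string(), 80),
        (2, "uptime".to_string(), 97),
    ];
    let grouped = group_per_node(rows);
    assert_eq!(grouped[&1].len(), 2);
    assert_eq!(grouped[&2].len(), 1);
}
```

Whether this lives in the contract query or in the caller is exactly the trade-off being debated.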

```rust
.results
.range(deps.storage, start, None, Order::Ascending)
.take(limit)
// we can't cut a pagination limit here because we don't want to
```
Contributor:

Why not? Just change the query parameters and let the caller resume from where it's meant to resume.
