Summary
`pmsims` already supports several individual performance metrics, including calibration-based metrics, but the sample-size search currently optimises against one metric at a time. We do not yet support defining the minimum sample size as the smallest `n` that satisfies multiple metric-specific criteria simultaneously.
So the remaining feature gap is not adding calibration itself. It is adding a multi-metric decision rule.
Current state
- The wrappers accept a single `metric` for the active search.
- Internal parsing still only uses the first metric when a vector is supplied.
- `simulate_custom()` works with one `metric_function` at a time.
- Users who want multiple criteria currently have to run separate analyses and combine the results manually.
Desired behavior
Allow users to define a minimum sample size using more than one performance requirement, for example:
- discrimination threshold plus calibration threshold
- calibration slope plus R-squared
- C-index plus calibration for survival models
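To make the shape of such a request concrete, here is a hypothetical sketch of what a multi-criteria call could look like. `pmsims_multi()` and its arguments do not exist in the package; the stub below only illustrates passing one named target per metric.

```r
# Hypothetical user-facing call -- pmsims_multi() is NOT a real pmsims
# function; this stub just echoes the requested criteria back so the
# intended input shape is visible.
pmsims_multi <- function(metrics) {
  unlist(metrics)
}

request <- pmsims_multi(metrics = list(
  c_statistic       = 0.80,  # discrimination threshold
  calibration_slope = 0.90   # calibration threshold
))
```

In a real implementation the named list would map directly onto the per-metric searches described below.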
A straightforward first version would be:
- run the search for one metric
- continue or restart the search for the next metric over an updated range
- return the smallest `n` satisfying all specified criteria
- report the per-metric intermediate minima as part of the result
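The staged search above can be sketched as follows. The metric checks and thresholds here are made up for illustration and are not `pmsims` internals; they stand in for "the simulated metric meets its target at size n".

```r
# Toy staged search: each check returns TRUE when its (hypothetical)
# metric target is met at sample size n. Monotone-in-n behaviour is
# assumed for this sketch.
meets_discrimination <- function(n) n >= 250  # stand-in for a C-statistic criterion
meets_calibration    <- function(n) n >= 400  # stand-in for a calibration-slope criterion

# Scan candidate sizes upward and return the first n that passes.
find_min_n <- function(meets, from, to) {
  for (n in from:to) if (meets(n)) return(n)
  NA_integer_
}

# Stage 1: search the full range for the first metric.
n1 <- find_min_n(meets_discrimination, 100, 1000)
# Stage 2: restart the next metric's search from n1 upward, since the
# combined minimum can never be smaller than any per-metric minimum.
n2 <- find_min_n(meets_calibration, n1, 1000)

# Combined recommendation, plus the per-metric intermediate minima
# so they can be reported as part of the result.
result <- list(
  per_metric = c(discrimination = n1, calibration = n2),
  combined   = max(n1, n2)
)
```

The shrinking lower bound in stage 2 is what saves simulation effort relative to running each metric's search independently over the full range.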
Design questions
- Should the wrappers accept a vector of metric names, or should this only be exposed via `simulate_custom()` first?
- Should `simulate_custom()` accept a list of metric functions and target values?
- Should the final minimum be the maximum of separate per-metric minima, or should the search be staged sequentially over shrinking bounds?
- How should the returned `pmsims` object store and print per-metric results?
- Do we need plotting and summary methods for multi-metric outputs from the start, or can that come later?
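On the max-versus-staged question: if each metric's attainment is monotone in n, the two strategies give the same answer, and the difference is only simulation cost. A minimal illustration with made-up per-metric minima:

```r
# Illustrative per-metric minimum sample sizes (not real pmsims output).
# Under monotone attainment, the combined minimum is simply the largest
# per-metric minimum, whether found independently or via staged bounds.
per_metric_min <- c(c_statistic = 250, calibration_slope = 400, r_squared = 320)
combined_min <- max(per_metric_min)
```

So the design choice is mainly about efficiency (staged search reuses bounds) and about whether non-monotone metric behaviour needs to be handled.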
Minimum acceptance criteria
- users can specify more than one metric-specific criterion
- the result clearly reports the minimum sample size for each criterion and the final combined minimum
- documentation explains the search strategy and how combined criteria are resolved
- tests cover at least one multi-metric example for a binary, continuous, or survival outcome
Goal
Support sample-size recommendations based on multiple performance criteria, rather than a single active metric.