docs/reference/evaluate_pipeline.md (3 additions, 3 deletions)
@@ -63,12 +63,12 @@ def evaluate_pipeline(
#### Cost
- Along with the return of the `loss`, the `evaluate_pipeline=` function would optionally need to return a `cost` in certain cases. Specifically when the `max_cost_total` parameter is being utilized in the `neps.run` function.
+ Along with the return of the `loss`, the `evaluate_pipeline=` function may also need to return a `cost`, specifically when the `cost_to_spend` parameter is used in the `neps.run` function.
!!! note
- `max_cost_total` sums the cost from all returned configuration results and checks whether the maximum allowed cost has been reached (if so, the search will come to an end).
+ `cost_to_spend` sums the cost from all returned configuration results and checks whether the maximum allowed cost has been reached (if so, the search will come to an end).
```python
import neps
@@ -97,7 +97,7 @@ if __name__ == "__main__":
evaluate_pipeline=evaluate_pipeline,
pipeline_space=pipeline_space, # Assuming the pipeline space is defined
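The cost-budget behaviour described in this file can be illustrated without NePS itself. Below is a minimal toy sketch of the accounting, assuming each evaluation returns a dict carrying a `loss` and a `cost`; the function and variable names here are made up for illustration and are not NePS API:

```python
# Toy illustration of a cost budget: evaluate configurations until the
# summed cost of all returned results reaches the allowed total.
# (Hypothetical stand-ins for NePS internals; not the real implementation.)

def run_with_cost_budget(configs, evaluate, cost_to_spend):
    spent = 0.0
    results = []
    for config in configs:
        if spent >= cost_to_spend:  # budget exhausted: end the search
            break
        result = evaluate(config)   # must return both a loss and a cost
        spent += result["cost"]
        results.append(result)
    return results, spent

if __name__ == "__main__":
    def evaluate(config):
        # Pretend each evaluation costs 2.5 units (e.g. seconds of runtime).
        return {"loss": config["x"] ** 2, "cost": 2.5}

    configs = [{"x": x} for x in range(10)]
    results, spent = run_with_cost_budget(configs, evaluate, cost_to_spend=10.0)
    print(len(results), spent)  # 4 10.0
```

With a budget of 10.0 and a per-evaluation cost of 2.5, exactly four evaluations complete before the loop stops, matching the "sum and check" semantics described in the note above.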
docs/reference/neps_run.md (18 additions, 19 deletions)
@@ -45,9 +45,9 @@ See the following for more:
* What goes in and what goes out of [`evaluate_pipeline()`](../reference/evaluate_pipeline.md)?
## Budget, how long to run?
- To define a budget, provide `max_evaluations_total=` to [`neps.run()`][neps.api.run],
+ To define a budget, provide `evaluations_to_spend=` to [`neps.run()`][neps.api.run],
to specify the total number of evaluations to conduct before halting the optimization process,
- or `max_cost_total=` to specify a cost threshold for your own custom cost metric, such as time, energy, or monetary, as returned by each evaluation of the pipeline .
+ or `cost_to_spend=` to specify a cost threshold for your own custom cost metric (such as time, energy, or monetary cost) as returned by each evaluation of the pipeline.
# Increase the total number of trials from 10 as set previously to 50
-     max_evaluations_total=50,
+     evaluations_to_spend=50,
)
```
If the run previously stopped due to reaching a budget and you specify the same budget, the worker will immediately stop, as it will remember how much budget it already used.
## Overwriting a Run
- To overwrite a run, simply provide the same `root_directory=` to [`neps.run()`][neps.api.run] as before, with the `overwrite_working_directory=True` argument.
+ To overwrite a run, simply provide the same `root_directory=` to [`neps.run()`][neps.api.run] as before, with the `overwrite_root_directory=True` argument.
```python
neps.run(
...,
root_directory="path/to/previous_result_dir",
-     overwrite_working_directory=True,
+     overwrite_root_directory=True,
)
119
119
```
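The overwrite semantics amount to clearing all previous state before the run starts. A toy sketch of the idea in plain Python (the helper name `prepare_root` is invented for illustration and is not part of NePS):

```python
import shutil
import tempfile
from pathlib import Path

def prepare_root(root_directory, overwrite_root_directory=False):
    """Create the run directory, clearing any previous state if asked to."""
    root = Path(root_directory)
    if overwrite_root_directory and root.exists():
        shutil.rmtree(root)  # discard all previous trials and optimizer state
    root.mkdir(parents=True, exist_ok=True)
    return root

if __name__ == "__main__":
    root = Path(tempfile.mkdtemp()) / "run"
    prepare_root(root)                              # fresh directory
    (root / "old_trial.txt").write_text("stale")    # leftover from a past run
    prepare_root(root, overwrite_root_directory=True)
    print((root / "old_trial.txt").exists())  # False
```

This is also why the flag must stay `False` for parallel workers sharing one directory, as the warning further below stresses.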
120
120
@@ -125,9 +125,6 @@ neps.run(
## Getting the results
The results of the optimization process are stored in the `root_directory=`
provided to [`neps.run()`][neps.api.run].
- To obtain a summary of the optimization process, you can enable the
- `post_run_summary=True` argument in [`neps.run()`][neps.api.run],
- while will generate a summary csv after the run has finished.
=== "Result Directory"
@@ -143,17 +140,19 @@ while will generate a summary csv after the run has finished.
│ └── config_2
│ ├── config.yaml
│ └── metadata.json
- ├── summary # Only if post_run_summary=True
+ ├── summary
│ ├── full.csv
148
145
│ └── short.csv
+ │ ├── best_config_trajectory.txt
+ │ └── best_config.txt
├── optimizer_info.yaml # The optimizer's configuration
└── optimizer_state.pkl # The optimizer's state, shared between workers
```
=== "python"
```python
- neps.run(..., post_run_summary=True)
+ neps.run(..., write_summary_to_disk=True)
```
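Once the summary has been written, the CSV files in the result directory can be inspected with standard tooling. A self-contained sketch, assuming only that `summary/full.csv` is a regular CSV with one row per trial (the column names `config_id` and `loss` here are invented for the example):

```python
import csv
import tempfile
from pathlib import Path

def load_summary(root_directory):
    """Read summary/full.csv from a finished run into a list of dicts."""
    path = Path(root_directory) / "summary" / "full.csv"
    with path.open(newline="") as f:
        return list(csv.DictReader(f))

if __name__ == "__main__":
    # Fake a run directory so the sketch is runnable on its own.
    root = Path(tempfile.mkdtemp())
    (root / "summary").mkdir()
    (root / "summary" / "full.csv").write_text(
        "config_id,loss\nconfig_1,0.42\nconfig_2,0.17\n"
    )
    rows = load_summary(root)
    best = min(rows, key=lambda r: float(r["loss"]))
    print(best["config_id"])  # config_2
```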
158
157
159
158
To capture the results of the optimization process, you can use TensorBoard logging with various utilities to integrate
@@ -174,20 +173,20 @@ Any new workers that come online will automatically pick up work and work togeth
1. Limits the number of evaluations for this specific call of [`neps.run()`][neps.api.run].
- 2. Evaluations in-progress count towards max_evaluations_total, halting new ones when this limit is reached.
- Setting this to `True` enables continuous sampling of new evaluations until the total of completed ones meets max_evaluations_total, optimizing resource use in time-sensitive scenarios.
+ 2. Evaluations in-progress count towards evaluations_to_spend, halting new ones when this limit is reached.
+ Setting this to `True` enables continuous sampling of new evaluations until the total of completed ones meets evaluations_to_spend, optimizing resource use in time-sensitive scenarios.
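The difference between the two accounting modes above can be sketched as a single decision function over a shared budget: in one mode, in-progress evaluations already consume budget; in the other, only completed evaluations count, so workers keep sampling until enough have finished. This is a toy model only, not NePS internals:

```python
# Toy model of the two budget-accounting modes described above.
# (Illustrative only; the function name is invented for this sketch.)

def should_sample_new(in_progress, completed, evaluations_to_spend,
                      count_in_progress=True):
    """Decide whether a worker may start another evaluation."""
    if count_in_progress:
        # In-progress evaluations already consume budget.
        return in_progress + completed < evaluations_to_spend
    # Only completed evaluations count: keep sampling until enough finish.
    return completed < evaluations_to_spend

if __name__ == "__main__":
    # 3 evaluations running, 7 finished, budget of 10:
    print(should_sample_new(3, 7, 10))                           # False
    print(should_sample_new(3, 7, 10, count_in_progress=False))  # True
```

With 3 in-progress and 7 completed against a budget of 10, the default mode stops sampling (3 + 7 = 10), while the continuous mode starts another evaluation because only 7 have completed.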
!!! warning
- Ensure `overwrite_working_directory=False` to prevent newly spawned workers from deleting the shared directory!
+ Ensure `overwrite_root_directory=False` to prevent newly spawned workers from deleting the shared directory!
=== "Shell"
@@ -227,7 +226,7 @@ neps.run(
!!! note
- Any runs that error will still count towards the total `max_evaluations_total` or `max_evaluations_per_run`.
+ Any runs that error will still count towards the total `evaluations_to_spend` or `max_evaluations_per_run`.