MAINT: raise error if no partial_fit in hyperparameter search #840

Open
wants to merge 7 commits into base: main
8 changes: 8 additions & 0 deletions dask_ml/model_selection/_incremental.py
@@ -166,6 +166,14 @@ async def _fit(
models: Dict[int, Tuple[Model, Meta]] = {}
scores: Dict[int, Meta] = {}

if not hasattr(model, "partial_fit"):
raise ValueError(
f"model={model} does not implement `partial_fit`, a "
"requirement for doing incremental hyperparameter "
"optimization. For more detail, see\n\n"
" https://ml.dask.org/hyper-parameter-search.html#hyperparameter-scaling"
)
Member
Hooray for informative error messages!


logger.info("[CV%s] creating %d models", prefix, len(params))
for ident, param in enumerate(params):
model = client.submit(_create_model, original_model, ident, **param)
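The guard added in this diff only tests for the presence of a ``partial_fit`` attribute, so its behavior can be sketched independently of dask-ml. The ``check_partial_fit`` helper and the two toy model classes below are hypothetical, for illustration only:

```python
def check_partial_fit(model):
    # Mirrors the guard added to _fit: reject models lacking `partial_fit`.
    if not hasattr(model, "partial_fit"):
        raise ValueError(
            f"model={model} does not implement `partial_fit`, a "
            "requirement for doing incremental hyperparameter optimization."
        )


class IncrementalModel:
    """Toy model that supports incremental training."""

    def partial_fit(self, X, y=None):
        return self


class BatchOnlyModel:
    """Toy model with only batch fitting, like sklearn's KMeans."""

    def fit(self, X, y=None):
        return self


check_partial_fit(IncrementalModel())  # passes silently
try:
    check_partial_fit(BatchOnlyModel())
except ValueError as err:
    print("raised:", err)
```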
50 changes: 31 additions & 19 deletions docs/source/hyper-parameter-search.rst
@@ -3,9 +3,10 @@
Hyper Parameter Search
======================

*Tools to perform hyperparameter optimization of Scikit-Learn API-compatible
models using Dask, and to scale hyperparameter optimization to* **larger
data and/or larger searches.**
*Hyperparameter optimization becomes difficult with* **large data** *and/or*
**complicated searches,** *and choosing the right tool requires some
consideration. All of Dask-ML's hyperparameter optimization focuses on using
Scikit-Learn API-compatible models.*

Hyperparameter searches are a required process in machine learning. Briefly,
machine learning models require certain "hyperparameters", model parameters
@@ -19,10 +20,14 @@ performance is desired and/or with massive datasets, which is common when
preparing for production or a paper publication. The following section
clarifies the issues that can occur:

* ":ref:`hyperparameter.scaling`" mentions problems that often occur in
hyperparameter optimization searches.
* ":ref:`hyperparameter.issues`" mentions two key problems that often occur in
hyperparameter optimization searches: large data and "compute constrained."
* ":ref:`hyperparameter.scaling`" covers the tools to work around these two
issues, and the requirements for each tool.

Tools that address these problems are expanded upon in these sections:
This section will explain which tools Dask-ML has, and in what cases they
should be used. Tools that address these problems are expanded upon in these
sections:

1. ":ref:`hyperparameter.drop-in`" details classes that mirror the Scikit-learn
estimators but work nicely with Dask objects and can offer better
@@ -32,10 +37,10 @@
3. ":ref:`hyperparameter.adaptive`" details classes that avoid extra
computation and find high-performing hyperparameters more quickly.

.. _hyperparameter.scaling:
.. _hyperparameter.issues:

Scaling hyperparameter searches
-------------------------------
Issues in hyperparameter searches
---------------------------------

Dask-ML provides classes to avoid the two most common issues in hyperparameter
optimization, when the hyperparameter search is...
@@ -92,9 +97,19 @@ model may require specialized hardware like GPUs:
... "average": [True, False],
... }
>>>
>>> # Compute constrained; only 1 CUDA GPU available
>>> # model = model.cuda() # PyTorch syntax
>>>
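A parameter space like the one in the doctest above can be sampled directly with scikit-learn's ``ParameterSampler``; this sketch uses illustrative ``alpha`` values, not the ones from the (partially collapsed) example:

```python
from sklearn.model_selection import ParameterSampler

# Illustrative parameter space in the same shape as the docs' example.
params = {"alpha": [1e-4, 1e-3, 1e-2], "average": [True, False]}

# Draw 4 distinct hyperparameter combinations from the 6-point grid.
samples = list(ParameterSampler(params, n_iter=4, random_state=0))

# Each sample is a dict with one value per hyperparameter.
assert all(set(s) == {"alpha", "average"} for s in samples)
```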

These issues are independent, and both can happen at the same time.

These issues are independent and both can happen at the same time. Dask-ML has
tools to address all 4 combinations. Let's look at each case.
.. _hyperparameter.scaling:

Scaling hyperparameter searches
-------------------------------

Dask-ML has tools to address all 4 combinations of "compute constrained" and
"memory constrained." Let's look at each case.

Neither compute nor memory constrained
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -131,7 +146,7 @@ Memory constrained, but not compute constrained
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This case happens when the data doesn't fit in memory but there aren't many
hyperparameters to search over. The data doesn't fit in memory, so it makes
hyperparameters to search over. The data doesn't fit in memory, so it only makes
sense to call ``partial_fit`` on each chunk of a Dask Array/Dataframe. This
estimator does that:
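What "calling ``partial_fit`` on each chunk" means can be sketched without Dask at all, using NumPy splits to stand in for Dask Array chunks (a minimal sketch, independent of the dask-ml estimator this passage refers to):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)  # a linearly separable toy target

model = SGDClassifier(random_state=0)
classes = np.unique(y)

# Train incrementally: one `partial_fit` call per chunk, so only one
# chunk needs to be held in memory at a time.
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    model.partial_fit(X_chunk, y_chunk, classes=classes)
```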

@@ -162,10 +177,8 @@ hardware like GPUs. The best class for this case is
Briefly, this estimator is easy to use, has strong mathematical motivation and
performs remarkably well. For more detail, see
":ref:`hyperparameter.hyperband-params`" and
":ref:`hyperparameter.hyperband-perf`".

Two other adaptive hyperparameter optimization algorithms are implemented in these
classes:
":ref:`hyperparameter.hyperband-perf`". Two other adaptive hyperparameter
optimization algorithms are implemented:

.. autosummary::
dask_ml.model_selection.SuccessiveHalvingSearchCV
@@ -175,9 +188,8 @@ The input parameters for these classes are more difficult to configure.

All of these searches can reduce time to solution by (cleverly) deciding which
parameters to evaluate. That is, these searches *adapt* to history to decide
which parameters to continue evaluating. All of these estimators support
ignoring models with decreasing score via the ``patience`` and ``tol``
parameters.
which parameters to continue evaluating. With that, it's natural to require
that the models implement ``partial_fit``.
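The ``patience``/``tol`` stopping rule mentioned in the removed text can be sketched as: stop training a model when none of its last ``patience`` scores improves on the best earlier score by more than ``tol``. This is illustrative logic, not dask-ml's exact implementation:

```python
def should_stop(scores, patience=3, tol=0.001):
    """Return True when the last `patience` scores failed to beat the
    best earlier score by more than `tol` (i.e. the model plateaued)."""
    if len(scores) <= patience:
        return False  # not enough history yet
    best_before = max(scores[:-patience])
    return all(s <= best_before + tol for s in scores[-patience:])


assert not should_stop([0.1, 0.2, 0.3, 0.4])       # still improving
assert should_stop([0.5, 0.51, 0.51, 0.51, 0.51])  # plateaued
```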

Another way to limit computation is to avoid repeated work during the
searches. This is especially useful with expensive preprocessing, which is
23 changes: 22 additions & 1 deletion tests/model_selection/test_incremental.py
@@ -22,7 +22,7 @@
)
from scipy.stats import uniform
from sklearn.base import clone
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import ParameterGrid, ParameterSampler
from sklearn.utils import check_random_state
@@ -33,6 +33,7 @@
HyperbandSearchCV,
IncrementalSearchCV,
InverseDecaySearchCV,
RandomizedSearchCV,
)
from dask_ml.model_selection._incremental import _partial_fit, _score, fit
from dask_ml.model_selection.utils_test import LinearFunction, _MaybeLinearFunction
@@ -855,6 +856,26 @@ def test_warns_scores_per_fit(c, s, a, b):
yield search.fit(X, y)


@gen_cluster(client=True)
async def test_raises_if_no_partial_fit(c, s, a, b):
X, y = make_classification(n_samples=20, n_features=3, chunks=(10, -1))
X, y = await c.gather(c.compute([X, y]))
assert isinstance(X, np.ndarray)
assert isinstance(y, np.ndarray)

params = {"n_init": list(range(1, 10))}
model = KMeans(max_iter=5, verbose=1, algorithm="elkan")

search = IncrementalSearchCV(model, params)
with pytest.raises(ValueError, match="does not implement `partial_fit`"):
await search.fit(X, y)

# no partial_fit, but works with a passive search
search2 = RandomizedSearchCV(model, params, n_iter=2)
await search2.fit(X, y)
assert search2.best_score_


@gen_cluster(client=True)
async def test_model_future(c, s, a, b):
X, y = make_classification(n_samples=100, n_features=5, chunks=10)