diff --git a/.gitignore b/.gitignore index b7faf40..36c6042 100644 --- a/.gitignore +++ b/.gitignore @@ -205,3 +205,9 @@ cython_debug/ marimo/_static/ marimo/_lsp/ __marimo__/ + +# Outputs +outputs/ + +# .DS_Store +.DS_Store \ No newline at end of file diff --git a/README.md b/README.md index 74a6356..7c47635 100644 --- a/README.md +++ b/README.md @@ -17,16 +17,10 @@ curl -LsSf https://astral.sh/uv/install.sh | sh uv pip install -e . ``` -This installs `autosim` in editable mode along with its runtime dependencies: -- `numpy>=1.24` -- `scipy>=1.10` -- `tqdm>=4.65` -- `torch>=2.0` - ### Install development dependencies (includes pytest) ```bash -uv sync --group dev +uv sync --extra dev ``` ## Running tests @@ -36,3 +30,91 @@ Once dev dependencies are installed: ```bash uv run pytest ``` + +## Generate training data (Hydra CLI) + +Use the tiny CLI to generate train/valid/test splits from any simulator that +inherits `SpatioTemporalSimulator`: + +```bash +uv run autosim +``` + +List available simulator configs: + +```bash +uv run autosim list +``` + +Simulator defaults now live in package configs under +`src/autosim/configs/simulator` and can be selected via config groups. + +Available simulator config names: +`advection_diffusion`, `advection_diffusion_multichannel`, `compressible_fluid_2d`, +`conditioned_navier_stokes_2d`, `epidemic`, `flow_problem`, `gpe_high_complexity`, +`gpe_low_complexity`, `gray_scott`, `gross_pitaevskii_equation_2d`, `hydrodynamics_2d`, +`lattice_boltzmann`, `projectile`, `projectile_multioutput`, `reaction_diffusion`, +`seir_simulator`, `shallow_water2d`.
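Under the hood, the `list` subcommand simply collects the stems of the YAML files in the simulator config group (this mirrors `list_simulators` in `src/autosim/cli.py` from this diff; the helper name here is illustrative):

```python
from pathlib import Path


def list_simulator_configs(config_dir: Path) -> list[str]:
    # Each *.yaml stem in the config group directory is a selectable
    # `simulator=<name>` choice; non-YAML files (e.g. __init__.py) are ignored.
    if not config_dir.exists():
        return []
    return sorted(path.stem for path in config_dir.glob("*.yaml"))
```

So adding a new YAML file under `src/autosim/configs/simulator/` is all that is needed for it to show up in `uv run autosim list`.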
+ +Override simulator and dataset settings from the command line via Hydra: + +```bash +uv run autosim \ + simulator=shallow_water2d \ + simulator.nx=32 \ + simulator.ny=32 \ + simulator.T=10.0 \ + dataset.n_train=50 dataset.n_valid=10 dataset.n_test=10 \ + dataset.output_dir=examples/experimental/generated_datasets/shallow_water_small \ + seed=123 overwrite=true +``` + +Use a faster built-in simulator config: + +```bash +uv run autosim \ + simulator=advection_diffusion \ + simulator.n=16 simulator.T=0.2 simulator.dt=0.1 \ + dataset.n_train=1 dataset.n_valid=1 dataset.n_test=1 +``` + +Optionally save example rollout videos for selected batch indices after generation: + +```bash +uv run autosim \ + simulator=advection_diffusion_multichannel \ + dataset.n_train=4 dataset.n_valid=1 dataset.n_test=1 \ + visualize.enabled=true \ + visualize.split=train \ + visualize.batch_indices=[0,2] \ + visualize.file_ext=gif +``` + +By default, videos are written under +`<output_dir>/examples/<split>/batch_<index>.<ext>`. +Use `visualize.file_ext=mp4` if ffmpeg is available. + +Generate one combined dataset from ordered strata values (single sweep key): + +```bash +uv run autosim \ + simulator=gray_scott \ + stratify.enabled=true \ + stratify.key=simulator.pattern \ + stratify.values=[gliders,bubbles,maze,worms,spirals,spots] \ + dataset.n_train=240 dataset.n_valid=24 dataset.n_test=24 \ + dataset.output_dir=outputs/gray_scott_combined +``` + +When stratification is enabled, each split size is divided equally across strata, +and results are concatenated in the exact order of `stratify.values`. +If a split size is not divisible by the number of strata, an error is raised.
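The equal-division rule can be sketched as follows (this mirrors `get_per_strata_counts` in `src/autosim/cli.py` from this diff; the standalone function here is illustrative):

```python
def per_strata_counts(
    n_train: int, n_valid: int, n_test: int, n_strata: int
) -> tuple[int, int, int]:
    # Each split must divide evenly across the strata; otherwise generation
    # fails fast rather than silently producing unbalanced strata.
    if n_strata <= 0:
        raise ValueError("Number of strata must be positive.")
    for name, total in (("train", n_train), ("valid", n_valid), ("test", n_test)):
        if total % n_strata != 0:
            raise ValueError(
                f"dataset.n_{name}={total} must be divisible by {n_strata} strata."
            )
    return n_train // n_strata, n_valid // n_strata, n_test // n_strata
```

With the `gray_scott` example above, 240/24/24 samples over six patterns yield 40/4/4 samples per pattern, concatenated in the order given by `stratify.values`.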
+ +Bring your own simulator subclass (no registry needed): + +```bash +uv run autosim \ + simulator._target_=my_package.my_module.MySimulator \ + ++simulator.my_arg=42 \ + simulator.log_level=warning +``` diff --git a/examples/README.md b/examples/README.md index 665cc4d..397ec08 100644 --- a/examples/README.md +++ b/examples/README.md @@ -6,10 +6,12 @@ These simulation families are organized from foundational pattern dynamics to in ### Pattern-formation families +- [Reaction-Diffusion](experimental/00_00_reaction_diffusion.ipynb): A spectral (FFT-based) two-species reaction-diffusion generator that produces diverse spatiotemporal patterns — spirals, spots, and labyrinthine textures — across reaction and diffusion parameter regimes. - [Gray-Scott](experimental/00_01_gray_scott.ipynb): A spectral ETDRK4 reaction-diffusion generator that spans diverse morphologies (spots, spirals, worms, and maze-like regimes) via feed/kill parameters. ### Weather-like and transport families +- [Advection-Diffusion](experimental/01_00_advection_diffusion.ipynb): A 2D incompressible vorticity–streamfunction solver with spectral Poisson inversion that generates vorticity fields across a range of viscosities and forcing strengths. - [Shallow-Water 2D](experimental/01_01_shallow_water_equation.ipynb): A geophysical fluid model that evolves height and horizontal velocity fields to capture wave propagation, rotation effects, and balanced flow structure. 
### Classical fluid dynamics families diff --git a/examples/experimental/00_00_reaction_diffusion.ipynb b/examples/experimental/00_00_reaction_diffusion.ipynb new file mode 100644 index 0000000..7826baf --- /dev/null +++ b/examples/experimental/00_00_reaction_diffusion.ipynb @@ -0,0 +1,133 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0", + "metadata": {}, + "source": [ + "# Reaction-Diffusion Simulator\n", + "\n", + "This notebook runs the `ReactionDiffusion` simulator — a coupled PDE system modelling two interacting species $u$ and $v$ with diffusion and nonlinear reaction kinetics.\n", + "\n", + "- **State variables**: `u` and `v` (species concentrations).\n", + "- **Conditioning variables**: `beta` (reaction rate strength), `d` (ratio of diffusion coefficients).\n", + "- **Dynamics**: spectral (FFT-based) spatial differentiation combined with `scipy` RK45 time integration.\n", + "- **Boundary conditions**: periodic in both spatial directions.\n", + "\n", + "### Why this is useful\n", + "\n", + "Reaction-diffusion systems produce a rich variety of spatiotemporal patterns — stripes, spots, spirals, and labyrinthine textures — depending on the parameter regime. 
This makes them a natural benchmark for learning PDEs that exhibit diverse structure from low-dimensional parameters.\n" + ] + }, + { + "cell_type": "markdown", + "id": "1", + "metadata": {}, + "source": [ + "## Governing equations\n", + "\n", + "The two-species reaction-diffusion system takes the form:\n", + "\n", + "$$\n", + "\\frac{\\partial u}{\\partial t} = D_u \\nabla^2 u + R_u(u, v; \\beta),\n", + "\\qquad\n", + "\\frac{\\partial v}{\\partial t} = D_v \\nabla^2 v + R_v(u, v; \\beta),\n", + "$$\n", + "\n", + "where $D_v / D_u = d$ and $R_u$, $R_v$ are the nonlinear reaction terms parameterised by $\\beta$.\n", + "\n", + "### Boundary conditions\n", + "\n", + "Periodic in both spatial directions on the domain $[-L/2, L/2]^2$:\n", + "\n", + "$$\n", + "u(-L/2, y, t) = u(L/2, y, t), \\qquad u(x, -L/2, t) = u(x, L/2, t),\n", + "$$\n", + "\n", + "and likewise for $v$.\n", + "\n", + "### Parameters\n", + "\n", + "- **`beta`**: controls the strength of the reaction nonlinearity. Larger values push the system further from equilibrium.\n", + "- **`d`**: ratio of diffusion coefficients $D_v / D_u$. 
Small `d` means species $v$ diffuses much more slowly, favouring pattern formation.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2", + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import HTML\n", + "\n", + "from autosim.experimental.simulations import ReactionDiffusion\n", + "from autosim.utils import plot_spatiotemporal_video" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3", + "metadata": {}, + "outputs": [], + "source": [ + "sim = ReactionDiffusion(\n", + " return_timeseries=True,\n", + " log_level=\"progress_bar\",\n", + " n=64,\n", + " L=20,\n", + " T=32.2,\n", + " dt=0.1,\n", + " parameters_range={\n", + " \"beta\": (1.0, 2.0),\n", + " \"d\": (0.05, 0.3),\n", + " },\n", + ")\n", + "\n", + "batch = sim.forward_samples_spatiotemporal(n=2, random_seed=42)\n", + "\n", + "print(\"data shape:\", batch[\"data\"].shape, \"[batch, time, x, y, channels={u, v}]\")\n", + "print(\"constant_scalars shape:\", batch[\"constant_scalars\"].shape)\n", + "print(\"sampled params (beta, d):\", batch[\"constant_scalars\"])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4", + "metadata": {}, + "outputs": [], + "source": [ + "anim = plot_spatiotemporal_video(\n", + " batch[\"data\"],\n", + " batch_idx=0,\n", + " channel_names=[\"u\", \"v\"],\n", + " preserve_aspect=True,\n", + ")\n", + "HTML(anim.to_jshtml())" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "autosim (3.11.15)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.15" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/examples/experimental/00_01_gray_scott.ipynb b/examples/experimental/00_01_gray_scott.ipynb index 71b4156..53467af 100644 --- 
a/examples/experimental/00_01_gray_scott.ipynb +++ b/examples/experimental/00_01_gray_scott.ipynb @@ -104,7 +104,7 @@ " # T=300.0,\n", " # dt=1.0,\n", " # snapshot_dt=1.0,\n", - " T=1280.0,\n", + " T=1284.0,\n", " dt=1.0,\n", " snapshot_dt=4.0,\n", " initial_condition=\"gaussians\",\n", @@ -216,7 +216,7 @@ ], "metadata": { "kernelspec": { - "display_name": "autosim (3.11.14)", + "display_name": "autosim (3.11.15)", "language": "python", "name": "python3" }, @@ -230,7 +230,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.14" + "version": "3.11.15" } }, "nbformat": 4, diff --git a/examples/experimental/01_00_advection_diffusion.ipynb b/examples/experimental/01_00_advection_diffusion.ipynb new file mode 100644 index 0000000..c4ae598 --- /dev/null +++ b/examples/experimental/01_00_advection_diffusion.ipynb @@ -0,0 +1,148 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0", + "metadata": {}, + "source": [ + "# Advection-Diffusion (Multichannel) Simulator\n", + "\n", + "This notebook runs the `AdvectionDiffusionMultichannel` simulator — a 2D incompressible fluid solver driven by a vorticity initial condition.\n", + "\n", + "- **State variables**: `vorticity`, `u` (x-velocity), `v` (y-velocity), `streamfunction`.\n", + "- **Conditioning variables**: `nu` (kinematic viscosity), `mu` (advection / forcing strength).\n", + "- **Dynamics**: spectral Poisson solve (FFT) for streamfunction; central finite differences for spatial derivatives; `scipy` RK45 time integration.\n", + "- **Boundary conditions**: periodic in both spatial directions.\n", + "\n", + "### Why this is useful\n", + "\n", + "Exposing all four fluid channels simultaneously makes this a natural benchmark for multi-target PDE surrogate models. 
The rich vorticity dynamics across a range of viscosities and forcing strengths produce diverse trajectories — from laminar diffusion to turbulent-like swirl patterns.\n" + ] + }, + { + "cell_type": "markdown", + "id": "1", + "metadata": {}, + "source": [ + "## Governing equations\n", + "\n", + "The vorticity–streamfunction formulation of the 2D incompressible Navier-Stokes equations:\n", + "\n", + "$$\n", + "\\frac{\\partial \\omega}{\\partial t} + (\\mathbf{u} \\cdot \\nabla)\\omega = \\nu \\nabla^2 \\omega + \\mu f(\\omega),\n", + "$$\n", + "\n", + "with velocity recovered from the streamfunction $\\psi$ via\n", + "\n", + "$$\n", + "\\nabla^2 \\psi = -\\omega,\n", + "\\qquad\n", + "u = \\frac{\\partial \\psi}{\\partial y},\\ v = -\\frac{\\partial \\psi}{\\partial x}.\n", + "$$\n", + "\n", + "### Boundary conditions\n", + "\n", + "Periodic in both spatial directions on the domain $[0, L]^2$:\n", + "\n", + "$$\n", + "\\omega(0, y, t) = \\omega(L, y, t), \\qquad \\omega(x, 0, t) = \\omega(x, L, t).\n", + "$$\n", + "\n", + "### Parameters\n", + "\n", + "- **`nu`**: kinematic viscosity. 
Smaller values produce more active, longer-lived vortex structures.\n", + "- **`mu`**: advection / forcing strength controlling the intensity of nonlinear transport.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2", + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import HTML\n", + "\n", + "from autosim.simulations import AdvectionDiffusionMultichannel\n", + "from autosim.utils import plot_spatiotemporal_video" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3", + "metadata": {}, + "outputs": [], + "source": [ + "sim = AdvectionDiffusionMultichannel(\n", + " return_timeseries=True,\n", + " log_level=\"progress_bar\",\n", + " n=64,\n", + " L=10.0,\n", + " T=80.25,\n", + " dt=0.25,\n", + " output_indices=[0], # vorticity\n", + " parameters_range={\n", + " \"nu\": (0.0001, 0.01),\n", + " \"mu\": (0.5, 2.0),\n", + " },\n", + ")\n", + "\n", + "batch = sim.forward_samples_spatiotemporal(n=2, random_seed=42)\n", + "\n", + "print(\n", + " \"data shape:\",\n", + " batch[\"data\"].shape,\n", + " \"[batch, time, x, y, channels={vorticity}]\",\n", + ")\n", + "print(\"constant_scalars shape:\", batch[\"constant_scalars\"].shape)\n", + "print(\"sampled params (nu, mu):\", batch[\"constant_scalars\"])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4", + "metadata": {}, + "outputs": [], + "source": [ + "anim = plot_spatiotemporal_video(\n", + " batch[\"data\"],\n", + " batch_idx=0,\n", + " channel_names=[\"vorticity\"],\n", + " preserve_aspect=True,\n", + ")\n", + "HTML(anim.to_jshtml())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "autosim (3.11.15)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": 
"text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.15" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/examples/experimental/02_01_lattice_boltzmann.ipynb b/examples/experimental/02_01_lattice_boltzmann.ipynb index f5ad0f1..6509cb2 100644 --- a/examples/experimental/02_01_lattice_boltzmann.ipynb +++ b/examples/experimental/02_01_lattice_boltzmann.ipynb @@ -104,7 +104,7 @@ " return_timeseries=True,\n", " width=128,\n", " height=32,\n", - " T=12.8,\n", + " T=12.9,\n", " dt=12.8 / 400, # Buffer\n", " n_saved_frames=321,\n", " parameters_range={\n", @@ -249,7 +249,7 @@ ], "metadata": { "kernelspec": { - "display_name": "autosim (3.11.14)", + "display_name": "autosim (3.11.15)", "language": "python", "name": "python3" }, @@ -263,7 +263,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.14" + "version": "3.11.15" } }, "nbformat": 4, diff --git a/pyproject.toml b/pyproject.toml index 1f29e0f..38efc76 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -17,8 +17,12 @@ dependencies = [ "matplotlib", "scikit-learn>=1.7.2", "einops>=0.8.2", + "hydra-core>=1.3", ] +[project.scripts] +autosim = "autosim.cli:main" + [project.optional-dependencies] dev = [ "ipykernel>=7.1.0", @@ -99,3 +103,19 @@ convention = "numpy" [tool.ruff.lint.per-file-ignores] "tests/*.py" = ["D"] + +# https://docs.astral.sh/uv/guides/integration/pytorch/#using-a-pytorch-index +# CUDA wheels (used automatically on GH if available) +[[tool.uv.index]] +name = "pytorch-cu126" +url = "https://download.pytorch.org/whl/cu126" +explicit = true + +[tool.uv.sources] +autoemulate = { git = "https://github.com/alan-turing-institute/autoemulate.git" } +torch = [ + { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, +] +torchvision = [ + { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, +] diff --git 
a/src/autosim/cli.py b/src/autosim/cli.py new file mode 100644 index 0000000..d302581 --- /dev/null +++ b/src/autosim/cli.py @@ -0,0 +1,360 @@ +from __future__ import annotations + +import argparse +import sys +import uuid +from pathlib import Path +from typing import Any + +import hydra +import torch +from hydra.utils import get_original_cwd, instantiate +from omegaconf import OmegaConf + +from autosim.simulations.base import SpatioTemporalSimulator +from autosim.utils import plot_spatiotemporal_video + +if not OmegaConf.has_resolver("shortuuid"): + OmegaConf.register_new_resolver( + "shortuuid", lambda n=7: uuid.uuid4().hex[: int(n)], use_cache=True + ) + + +def build_simulator(simulator_cfg: Any) -> SpatioTemporalSimulator: + """Instantiate and validate a spatiotemporal simulator from Hydra config.""" + simulator = instantiate(simulator_cfg) + if not isinstance(simulator, SpatioTemporalSimulator): + msg = ( + "Configured simulator must inherit from SpatioTemporalSimulator for " + "dataset generation. For non-spatiotemporal simulators, use their " + "forward/forward_batch API directly." + ) + raise TypeError(msg) + return simulator + + +def generate_dataset_splits( + sim: SpatioTemporalSimulator, + n_train: int, + n_valid: int, + n_test: int, + base_seed: int | None = None, + ensure_exact_n: bool = False, +) -> dict[str, dict[str, Any]]: + """Generate train/valid/test splits from a simulator.""" + # Reserve disjoint seed ranges so retries in one split cannot collide with + # initial or retry seeds in another. 
+ split_seed_stride = ( + SpatioTemporalSimulator._retry_budget(max(n_train, n_valid, n_test)) + 1 + ) + + def get_seed(offset: int) -> int | None: + if base_seed is None: + return None + return base_seed + offset * split_seed_stride + + train = sim.forward_samples_spatiotemporal( + n=n_train, + random_seed=get_seed(0), + ensure_exact_n=ensure_exact_n, + ) + valid = sim.forward_samples_spatiotemporal( + n=n_valid, + random_seed=get_seed(1), + ensure_exact_n=ensure_exact_n, + ) + test = sim.forward_samples_spatiotemporal( + n=n_test, + random_seed=get_seed(2), + ensure_exact_n=ensure_exact_n, + ) + return {"train": train, "valid": valid, "test": test} + + +def save_dataset_splits( + splits: dict[str, dict[str, Any]], + output_dir: str | Path, + overwrite: bool = False, +) -> None: + """Persist split dictionaries to `output_dir/{split}/data.pt`.""" + output_path = Path(output_dir) + expected_files = [ + output_path / split / "data.pt" for split in ("train", "valid", "test") + ] + if not overwrite and any(path.exists() for path in expected_files): + msg = ( + f"Refusing to overwrite existing dataset files in '{output_path}'. " + "Set overwrite=true to replace them." 
+ ) + raise FileExistsError(msg) + + for split_name, payload in splits.items(): + split_dir = output_path / split_name + split_dir.mkdir(parents=True, exist_ok=True) + torch.save(payload, split_dir / "data.pt") + + +def save_resolved_config(cfg: Any, output_dir: str | Path) -> None: + """Persist the fully resolved Hydra config next to generated datasets.""" + output_path = Path(output_dir) + output_path.mkdir(parents=True, exist_ok=True) + resolved_cfg_path = output_path / "resolved_config.yaml" + resolved_yaml = OmegaConf.to_yaml(cfg, resolve=True) + resolved_cfg_path.write_text(resolved_yaml, encoding="utf-8") + + +def save_example_videos( + splits: dict[str, dict[str, Any]], + output_dir: str | Path, + visualize_cfg: Any | None, + channel_names: list[str] | None = None, +) -> None: + """Optionally render example videos for selected batch indices. + + Expected data shape is ``[batch, time, x, y, channels]``. + """ + if visualize_cfg is None or not bool(visualize_cfg.get("enabled", False)): + return + + split_name = str(visualize_cfg.get("split", "train")) + if split_name not in splits: + msg = f"visualize.split='{split_name}' not found in generated splits." + raise ValueError(msg) + + split_payload = splits[split_name] + data = split_payload.get("data") + if not isinstance(data, torch.Tensor) or data.ndim != 5: + msg = ( + "visualization expects split payload 'data' as a 5D torch.Tensor " + "with shape [batch,time,x,y,channels]." + ) + raise ValueError(msg) + + batch_indices_cfg = visualize_cfg.get("batch_indices", []) + batch_indices = [int(idx) for idx in batch_indices_cfg] + if not batch_indices: + return + for idx in batch_indices: + if idx < 0 or idx >= data.shape[0]: + msg = ( + f"visualize batch index {idx} is out of range for split " + f"'{split_name}' with batch size {data.shape[0]}." + ) + raise ValueError(msg) + + fps = int(visualize_cfg.get("fps", 5)) + if fps <= 0: + msg = "visualize.fps must be positive." 
+ raise ValueError(msg) + + file_ext = str(visualize_cfg.get("file_ext", "gif")).lstrip(".").lower() + if file_ext not in {"gif", "mp4"}: + msg = "visualize.file_ext must be one of ['gif', 'mp4']." + raise ValueError(msg) + + videos_dir = Path(output_dir) / "examples" / split_name + videos_dir.mkdir(parents=True, exist_ok=True) + + overwrite = bool(visualize_cfg.get("overwrite", True)) + preserve_aspect = bool(visualize_cfg.get("preserve_aspect", False)) + + configured_channel_names = visualize_cfg.get("channel_names", None) + resolved_channel_names = channel_names + if configured_channel_names is not None: + resolved_channel_names = [str(name) for name in configured_channel_names] + + for idx in batch_indices: + save_path = videos_dir / f"batch_{idx}.{file_ext}" + if save_path.exists() and not overwrite: + continue + plot_spatiotemporal_video( + true=data, + batch_idx=idx, + fps=fps, + save_path=str(save_path), + channel_names=resolved_channel_names, + preserve_aspect=preserve_aspect, + ) + + +def get_per_strata_counts( + n_train: int, + n_valid: int, + n_test: int, + n_strata: int, +) -> tuple[int, int, int]: + """Get per-strata split sizes, requiring exact divisibility.""" + if n_strata <= 0: + msg = "Number of strata must be positive." + raise ValueError(msg) + + for split_name, total in ( + ("train", n_train), + ("valid", n_valid), + ("test", n_test), + ): + if total % n_strata != 0: + msg = ( + f"dataset.n_{split_name}={total} must be divisible by " + f"number of strata ({n_strata})." + ) + raise ValueError(msg) + + return n_train // n_strata, n_valid // n_strata, n_test // n_strata + + +def combine_stratified_splits( + ordered_strata_splits: list[dict[str, dict[str, Any]]], +) -> dict[str, dict[str, Any]]: + """Combine per-strata splits preserving strata order in batch dimension.""" + if not ordered_strata_splits: + msg = "No strata outputs to combine." 
+ raise ValueError(msg) + + combined: dict[str, dict[str, Any]] = {} + split_names = ("train", "valid", "test") + for split in split_names: + first_payload = ordered_strata_splits[0][split] + merged_payload: dict[str, Any] = {} + + for key in first_payload: + values = [group[split][key] for group in ordered_strata_splits] + if all(isinstance(value, torch.Tensor) for value in values): + merged_payload[key] = torch.cat(values, dim=0) + elif all(value is None for value in values): + merged_payload[key] = None + else: + msg = ( + f"Cannot combine non-tensor field '{key}' across strata. " + "Expected all tensors or all None." + ) + raise ValueError(msg) + + combined[split] = merged_payload + + return combined + + +@hydra.main(version_base=None, config_path="configs", config_name="generate_data") +def _generate_main(cfg: Any) -> None: + """Generate simulation datasets from a Hydra-configured simulator.""" + channel_names_for_visualization: list[str] | None = None + stratify_cfg = cfg.get("stratify") + if stratify_cfg is not None and bool(stratify_cfg.get("enabled", False)): + key = stratify_cfg.get("key") + values = list(stratify_cfg.get("values", [])) + if key is None or str(key).strip() == "": + msg = "stratify.key must be set when stratify.enabled=true." + raise ValueError(msg) + if not values: + msg = "stratify.values must be a non-empty list when stratify.enabled=true." 
+ raise ValueError(msg) + + n_train_each, n_valid_each, n_test_each = get_per_strata_counts( + n_train=cfg.dataset.n_train, + n_valid=cfg.dataset.n_valid, + n_test=cfg.dataset.n_test, + n_strata=len(values), + ) + + key_path = str(key) + if key_path.startswith("simulator."): + key_path = key_path[len("simulator.") :] + + per_strata_outputs: list[dict[str, dict[str, Any]]] = [] + for value in values: + sim_cfg = OmegaConf.create( + OmegaConf.to_container(cfg.simulator, resolve=True) + ) + OmegaConf.update(sim_cfg, key_path, value, merge=False) + sim = build_simulator(sim_cfg) + if channel_names_for_visualization is None: + channel_names_for_visualization = list(sim.output_names) + splits = generate_dataset_splits( + sim=sim, + n_train=n_train_each, + n_valid=n_valid_each, + n_test=n_test_each, + base_seed=cfg.seed, + ensure_exact_n=bool(cfg.dataset.get("ensure_exact_n", False)), + ) + per_strata_outputs.append(splits) + + splits = combine_stratified_splits(per_strata_outputs) + else: + sim = build_simulator(cfg.simulator) + channel_names_for_visualization = list(sim.output_names) + + splits = generate_dataset_splits( + sim=sim, + n_train=cfg.dataset.n_train, + n_valid=cfg.dataset.n_valid, + n_test=cfg.dataset.n_test, + base_seed=cfg.seed, + ensure_exact_n=bool(cfg.dataset.get("ensure_exact_n", False)), + ) + + output_dir = Path(cfg.dataset.output_dir) + if not output_dir.is_absolute(): + output_dir = Path(get_original_cwd()) / output_dir + + save_resolved_config(cfg=cfg, output_dir=output_dir) + + save_dataset_splits(splits=splits, output_dir=output_dir, overwrite=cfg.overwrite) + save_example_videos( + splits=splits, + output_dir=output_dir, + visualize_cfg=cfg.get("visualize"), + channel_names=channel_names_for_visualization, + ) + + +def list_simulators() -> list[str]: + """Return available simulator config names from the package config group.""" + simulator_dir = Path(__file__).parent / "configs" / "simulator" + if not simulator_dir.exists(): + return [] + 
return sorted(path.stem for path in simulator_dir.glob("*.yaml")) + + +def main() -> None: + """Dispatch tiny autosim subcommands. + + - `autosim list` prints simulator config names. + - `autosim` (or any Hydra overrides) runs data generation. + """ + argv = sys.argv[1:] + + if argv and argv[0] in {"-h", "--help"}: + parser = argparse.ArgumentParser( + prog="autosim", + description=( + "Generate simulation datasets using Hydra overrides, or list " + "available simulator configs." + ), + ) + parser.add_argument( + "command", + nargs="?", + help="Subcommand: 'list'. Omit to run data generation with Hydra.", + ) + parser.print_help() + return + + if argv and argv[0] == "list": + list_parser = argparse.ArgumentParser( + prog="autosim list", + description="List available simulator config names.", + ) + list_parser.parse_args(argv[1:]) + for name in list_simulators(): + print(name) + return + + # Preserve all original arguments for Hydra's own parser. + sys.argv = [sys.argv[0], *argv] + _generate_main() + + +if __name__ == "__main__": + main() diff --git a/src/autosim/configs/__init__.py b/src/autosim/configs/__init__.py new file mode 100644 index 0000000..16cbf8b --- /dev/null +++ b/src/autosim/configs/__init__.py @@ -0,0 +1 @@ +"""Hydra config package for autosim.""" diff --git a/src/autosim/configs/generate_data.yaml b/src/autosim/configs/generate_data.yaml new file mode 100644 index 0000000..1dfb0ce --- /dev/null +++ b/src/autosim/configs/generate_data.yaml @@ -0,0 +1,31 @@ +defaults: + - simulator: shallow_water2d + - _self_ + +dataset: + output_dir: outputs/${now:%Y-%m-%d}/${hydra:runtime.choices.simulator}_${shortuuid:7} + n_train: 200 + n_valid: 20 + n_test: 20 + ensure_exact_n: true + +seed: null +overwrite: false + +stratify: + enabled: false + key: null + values: [] + +visualize: + enabled: true + split: train + batch_indices: [0, 1, 2, 3] + fps: 5 + file_ext: mp4 # gif (no extra deps) or mp4 (requires ffmpeg) + overwrite: true + preserve_aspect: true + 
+hydra: + run: + dir: ${dataset.output_dir} \ No newline at end of file diff --git a/src/autosim/configs/generate_data_gray_scott.yaml b/src/autosim/configs/generate_data_gray_scott.yaml new file mode 100644 index 0000000..d75012d --- /dev/null +++ b/src/autosim/configs/generate_data_gray_scott.yaml @@ -0,0 +1,31 @@ +defaults: + - simulator: gray_scott + - _self_ + +dataset: + output_dir: outputs/${now:%Y-%m-%d}/${hydra:runtime.choices.simulator}_${shortuuid:7} + n_train: 240 + n_valid: 24 + n_test: 24 + ensure_exact_n: true + +seed: null +overwrite: false + +stratify: + enabled: true + key: simulator.pattern + values: [spirals, spots, worms, maze, gliders, bubbles] + +visualize: + enabled: true + split: train + batch_indices: [0, 1, 2, 3] + fps: 5 + file_ext: mp4 # gif (no extra deps) or mp4 (requires ffmpeg) + overwrite: true + preserve_aspect: true + +hydra: + run: + dir: ${dataset.output_dir} diff --git a/src/autosim/configs/simulator/__init__.py b/src/autosim/configs/simulator/__init__.py new file mode 100644 index 0000000..9199bb9 --- /dev/null +++ b/src/autosim/configs/simulator/__init__.py @@ -0,0 +1 @@ +"""Simulator config group for autosim Hydra configs.""" diff --git a/src/autosim/configs/simulator/advection_diffusion.yaml b/src/autosim/configs/simulator/advection_diffusion.yaml new file mode 100644 index 0000000..91277a3 --- /dev/null +++ b/src/autosim/configs/simulator/advection_diffusion.yaml @@ -0,0 +1,11 @@ +_target_: autosim.simulations.AdvectionDiffusionMultichannel +output_indices: [0] +return_timeseries: true +log_level: warning +n: 64 +L: 10.0 +T: 80.0 +dt: 0.25 +parameters_range: + nu: [0.0001, 0.01] + mu: [0.5, 2.0] diff --git a/src/autosim/configs/simulator/advection_diffusion_multichannel.yaml b/src/autosim/configs/simulator/advection_diffusion_multichannel.yaml new file mode 100644 index 0000000..f2fffd6 --- /dev/null +++ b/src/autosim/configs/simulator/advection_diffusion_multichannel.yaml @@ -0,0 +1,11 @@ +_target_: 
autosim.simulations.AdvectionDiffusionMultichannel +output_indices: [0, 1, 2, 3] +return_timeseries: true +log_level: warning +n: 64 +L: 10.0 +T: 80.0 +dt: 0.25 +parameters_range: + nu: [0.0001, 0.01] + mu: [0.5, 2.0] diff --git a/src/autosim/configs/simulator/compressible_fluid_2d.yaml b/src/autosim/configs/simulator/compressible_fluid_2d.yaml new file mode 100644 index 0000000..93e8fd3 --- /dev/null +++ b/src/autosim/configs/simulator/compressible_fluid_2d.yaml @@ -0,0 +1,11 @@ +_target_: autosim.experimental.simulations.CompressibleFluid2D +return_timeseries: true +log_level: warning +n: 64 +T: 1.2 +dt_save: 0.01 +cfl: 0.32 +scenario: vortex_sheet +parameters_range: + gamma: [1.4, 1.4] + amp: [0.14, 0.22] diff --git a/src/autosim/configs/simulator/conditioned_navier_stokes_2d.yaml b/src/autosim/configs/simulator/conditioned_navier_stokes_2d.yaml new file mode 100644 index 0000000..2ca1b3d --- /dev/null +++ b/src/autosim/configs/simulator/conditioned_navier_stokes_2d.yaml @@ -0,0 +1,18 @@ +_target_: autosim.experimental.simulations.ConditionedNavierStokes2D +return_timeseries: true +log_level: warning +n: 64 +L: 32.0 +T: 85.869 +dt: 0.26 +snapshot_dt: 0.261 +nu: 0.01 +cfl: 0.35 +bc_mode: neumann +buoyancy_mode: raw +skip_nt: 8 +parameters_range: + buoyancy_y: [0.2, 0.8] + smoothness: [6.0, 6.0] + noise_scale: [8.0, 18.0] + smoke_diffusivity: [0.0, 0.0] diff --git a/src/autosim/configs/simulator/epidemic.yaml b/src/autosim/configs/simulator/epidemic.yaml new file mode 100644 index 0000000..63102e7 --- /dev/null +++ b/src/autosim/configs/simulator/epidemic.yaml @@ -0,0 +1,2 @@ +_target_: autosim.simulations.Epidemic +log_level: warning diff --git a/src/autosim/configs/simulator/flow_problem.yaml b/src/autosim/configs/simulator/flow_problem.yaml new file mode 100644 index 0000000..9a4af73 --- /dev/null +++ b/src/autosim/configs/simulator/flow_problem.yaml @@ -0,0 +1,4 @@ +_target_: autosim.simulations.FlowProblem +log_level: warning +ncycles: 10 +ncomp: 10 diff 
--git a/src/autosim/configs/simulator/gpe_high_complexity.yaml b/src/autosim/configs/simulator/gpe_high_complexity.yaml new file mode 100644 index 0000000..1e2bc9d --- /dev/null +++ b/src/autosim/configs/simulator/gpe_high_complexity.yaml @@ -0,0 +1,25 @@ +_target_: autosim.experimental.simulations.GrossPitaevskiiEquation2D +return_timeseries: true +log_level: warning +n: 64 +L: 10.0 +T: 7.42 +dt: 0.005 +snapshot_dt: 0.02 +skip_nt: 50 +random_seed: 42 +parameters_range: + wx: [0.5, 0.5] + wy: [0.5, 0.5] + box_param: [0.1, 0.1] + width: [0.4, 0.8] + x0: [-1.0, 1.0] + y0: [-1.0, 1.0] + kx0: [-2.0, 2.0] + ky0: [-2.0, 2.0] + imaginary_time: [0, 0] + g: [30.0, 50.0] + disorder_strength: [0.5, 1.2] + spoon_strength: [2.0, 4.0] + spoon_speed: [1.0, 2.0] + Omega: [0.1, 0.4] diff --git a/src/autosim/configs/simulator/gpe_low_complexity.yaml b/src/autosim/configs/simulator/gpe_low_complexity.yaml new file mode 100644 index 0000000..8fd2a00 --- /dev/null +++ b/src/autosim/configs/simulator/gpe_low_complexity.yaml @@ -0,0 +1,25 @@ +_target_: autosim.experimental.simulations.GrossPitaevskiiEquation2D +return_timeseries: true +log_level: warning +n: 64 +L: 10.0 +T: 7.42 +dt: 0.005 +snapshot_dt: 0.02 +skip_nt: 50 +random_seed: 42 +parameters_range: + wx: [0.5, 0.5] + wy: [0.5, 0.5] + box_param: [0.1, 0.1] + width: [0.4, 0.8] + x0: [-1.0, 1.0] + y0: [-1.0, 1.0] + kx0: [-2.0, 2.0] + ky0: [-2.0, 2.0] + imaginary_time: [0, 0] + g: [0.0, 0.0] + disorder_strength: [0.0, 0.0] + spoon_strength: [0.0, 0.0] + spoon_speed: [0.0, 0.0] + Omega: [0.0, 0.0] diff --git a/src/autosim/configs/simulator/gray_scott.yaml b/src/autosim/configs/simulator/gray_scott.yaml new file mode 100644 index 0000000..82a88ad --- /dev/null +++ b/src/autosim/configs/simulator/gray_scott.yaml @@ -0,0 +1,11 @@ +_target_: autosim.experimental.simulations.GrayScott +return_timeseries: true +log_level: warning +n: 64 +L: 1.0 +T: 1280.0 +dt: 1.0 +snapshot_dt: 4.0 +initial_condition: gaussians 
+fixed_parameters_given_pattern: true
+pattern: spirals
diff --git a/src/autosim/configs/simulator/gross_pitaevskii_equation_2d.yaml b/src/autosim/configs/simulator/gross_pitaevskii_equation_2d.yaml
new file mode 100644
index 0000000..e69f986
--- /dev/null
+++ b/src/autosim/configs/simulator/gross_pitaevskii_equation_2d.yaml
@@ -0,0 +1,25 @@
+_target_: autosim.experimental.simulations.GrossPitaevskiiEquation2D
+return_timeseries: true
+log_level: warning
+n: 64
+L: 10.0
+T: 4.0
+dt: 0.005
+snapshot_dt: 0.0125
+skip_nt: 50
+random_seed: 42
+parameters_range:
+  wx: [1.0, 1.0]
+  wy: [1.0, 1.0]
+  box_param: [0.1, 0.1]
+  width: [0.4, 0.8]
+  x0: [-1.0, 1.0]
+  y0: [-1.0, 1.0]
+  kx0: [-2.0, 2.0]
+  ky0: [-2.0, 2.0]
+  imaginary_time: [0, 0]
+  g: [30.0, 50.0]
+  disorder_strength: [0.5, 1.2]
+  spoon_strength: [2.0, 4.0]
+  spoon_speed: [1.0, 2.0]
+  Omega: [0.1, 0.4]
diff --git a/src/autosim/configs/simulator/hydrodynamics_2d.yaml b/src/autosim/configs/simulator/hydrodynamics_2d.yaml
new file mode 100644
index 0000000..e32d7dd
--- /dev/null
+++ b/src/autosim/configs/simulator/hydrodynamics_2d.yaml
@@ -0,0 +1,10 @@
+_target_: autosim.experimental.simulations.Hydrodynamics2D
+return_timeseries: true
+log_level: warning
+n: 64
+T: 3.0
+dt: 0.01
+cfl: 0.3
+parameters_range:
+  nu: [0.001, 0.002]
+  force: [1.0, 3.0]
diff --git a/src/autosim/configs/simulator/lattice_boltzmann.yaml b/src/autosim/configs/simulator/lattice_boltzmann.yaml
new file mode 100644
index 0000000..976843c
--- /dev/null
+++ b/src/autosim/configs/simulator/lattice_boltzmann.yaml
@@ -0,0 +1,15 @@
+_target_: autosim.experimental.simulations.LatticeBoltzmann
+return_timeseries: true
+log_level: warning
+width: 128
+height: 32
+T: 12.9
+dt: 0.032
+use_cylinder: false
+oscillatory_inlet: true
+n_saved_frames: 321
+skip_nt: 0
+parameters_range:
+  viscosity: [0.015, 0.03]
+  u_in: [0.05, 0.11]
+  oscillation_frequency: [0.2, 1.5]
diff --git a/src/autosim/configs/simulator/projectile.yaml b/src/autosim/configs/simulator/projectile.yaml
new file mode 100644
index 0000000..f653fe6
--- /dev/null
+++ b/src/autosim/configs/simulator/projectile.yaml
@@ -0,0 +1,2 @@
+_target_: autosim.simulations.Projectile
+log_level: warning
diff --git a/src/autosim/configs/simulator/projectile_multioutput.yaml b/src/autosim/configs/simulator/projectile_multioutput.yaml
new file mode 100644
index 0000000..6c26aa8
--- /dev/null
+++ b/src/autosim/configs/simulator/projectile_multioutput.yaml
@@ -0,0 +1,2 @@
+_target_: autosim.simulations.ProjectileMultioutput
+log_level: warning
diff --git a/src/autosim/configs/simulator/reaction_diffusion.yaml b/src/autosim/configs/simulator/reaction_diffusion.yaml
new file mode 100644
index 0000000..7a575a9
--- /dev/null
+++ b/src/autosim/configs/simulator/reaction_diffusion.yaml
@@ -0,0 +1,10 @@
+_target_: autosim.experimental.simulations.ReactionDiffusion
+return_timeseries: true
+log_level: warning
+n: 64
+L: 20
+T: 32.11
+dt: 0.1
+parameters_range:
+  beta: [1.0, 2.0]
+  d: [0.05, 0.3]
diff --git a/src/autosim/configs/simulator/seir_simulator.yaml b/src/autosim/configs/simulator/seir_simulator.yaml
new file mode 100644
index 0000000..4a97650
--- /dev/null
+++ b/src/autosim/configs/simulator/seir_simulator.yaml
@@ -0,0 +1,2 @@
+_target_: autosim.simulations.SEIRSimulator
+log_level: warning
diff --git a/src/autosim/configs/simulator/shallow_water2d.yaml b/src/autosim/configs/simulator/shallow_water2d.yaml
new file mode 100644
index 0000000..61014b6
--- /dev/null
+++ b/src/autosim/configs/simulator/shallow_water2d.yaml
@@ -0,0 +1,17 @@
+_target_: autosim.experimental.simulations.ShallowWater2D
+return_timeseries: true
+log_level: warning
+nx: 64
+ny: 64
+Lx: 64.0
+Ly: 64.0
+T: 74.0
+dt_save: 0.2
+skip_nt: 50
+cfl: 0.12
+g: 9.81
+H: 1.0
+nu: 0.0005
+drag: 0.002
+parameters_range:
+  amp: [0.07, 0.14]
diff --git a/src/autosim/experimental/simulations/__init__.py b/src/autosim/experimental/simulations/__init__.py
index 4021585..b40dd92 100644
--- a/src/autosim/experimental/simulations/__init__.py
+++ b/src/autosim/experimental/simulations/__init__.py
@@ -4,9 +4,11 @@
 from .hydrodynamics_2d import Hydrodynamics2D
 from .lattice_boltzmann import LatticeBoltzmann
 from .navier_stokes_conditioned import ConditionedNavierStokes2D
+from .reaction_diffusion import ReactionDiffusion
 from .shallow_water import ShallowWater2D

 ALL_SIMULATORS = [
+    ReactionDiffusion,
     CompressibleFluid2D,
     Hydrodynamics2D,
     LatticeBoltzmann,
@@ -21,6 +23,7 @@
     "GrossPitaevskiiEquation2D",
     "Hydrodynamics2D",
     "LatticeBoltzmann",
+    "ReactionDiffusion",
     "ShallowWater2D",
 ]
diff --git a/src/autosim/experimental/simulations/compressible_fluid.py b/src/autosim/experimental/simulations/compressible_fluid.py
index 4388658..72044b7 100644
--- a/src/autosim/experimental/simulations/compressible_fluid.py
+++ b/src/autosim/experimental/simulations/compressible_fluid.py
@@ -11,11 +11,11 @@
 import torch

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import TensorLike


-class CompressibleFluid2D(Simulator):
+class CompressibleFluid2D(SpatioTemporalSimulator):
     """Minimal 2D compressible Euler simulator.

     Parameters
@@ -97,10 +97,16 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return y.flatten().unsqueeze(0)

     def forward_samples_spatiotemporal(  # noqa: D102
-        self, n_samples: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
-        x = self.sample_inputs(n_samples, random_seed)
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )
         channels = 4
         features_per_step = self.n * self.n * channels
@@ -113,9 +119,9 @@
                 f"expected multiple of {features_per_step}."
             )
             n_time = total // features_per_step
-            y = y.reshape(n_samples, n_time, self.n, self.n, channels)
+            y = y.reshape(y.shape[0], n_time, self.n, self.n, channels)
         else:
-            y = y.reshape(n_samples, 1, self.n, self.n, channels)
+            y = y.reshape(y.shape[0], 1, self.n, self.n, channels)

         return {
             "data": y,
diff --git a/src/autosim/experimental/simulations/gray_scott.py b/src/autosim/experimental/simulations/gray_scott.py
index bc10dff..a51741b 100644
--- a/src/autosim/experimental/simulations/gray_scott.py
+++ b/src/autosim/experimental/simulations/gray_scott.py
@@ -2,7 +2,7 @@
 import torch
 from numpy.fft import fft2, ifft2

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import NumpyLike, TensorLike

 PATTERN_RANGES: dict[str, dict[str, tuple[float, float]]] = {
@@ -420,7 +420,7 @@ def simulate_spectral_gray_scott(  # noqa: PLR0915
     return u_output, v_output


-class GrayScott(Simulator):
+class GrayScott(SpatioTemporalSimulator):
     """Spectral Gray-Scott simulator based on danfortunato/spectral-gray-scott."""

     def __init__(  # noqa: PLR0912
@@ -476,7 +476,7 @@ def __init__(  # noqa: PLR0912
         }

         if output_names is None:
-            output_names = ["solution"]
+            output_names = ["u", "v"]

         super().__init__(parameters_range, output_names, log_level)

@@ -558,14 +558,20 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return torch.from_numpy(concat).reshape(1, -1)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Run multiple trajectories and return `[batch, time, x, y, channels]` data."""
         if not self.return_timeseries:
             msg = "forward_samples_spatiotemporal requires return_timeseries=True."
             raise RuntimeError(msg)

-        x = self.sample_inputs(n, random_seed)
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         timesteps = _compute_snapshot_count(self.T, self.dt, self.snapshot_dt)
         y_reshaped = y.reshape(y.shape[0], 2, timesteps, self.n, self.n).permute(
diff --git a/src/autosim/experimental/simulations/gross_pitaevskii.py b/src/autosim/experimental/simulations/gross_pitaevskii.py
index 0abfd0b..83a6d6c 100644
--- a/src/autosim/experimental/simulations/gross_pitaevskii.py
+++ b/src/autosim/experimental/simulations/gross_pitaevskii.py
@@ -5,7 +5,7 @@
 import torch
 import torch.nn.functional as F

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import TensorLike
@@ -402,7 +402,7 @@ def _snapshot(p) -> torch.Tensor:
     return _snapshot(psi)


-class GrossPitaevskiiEquation2D(Simulator):
+class GrossPitaevskiiEquation2D(SpatioTemporalSimulator):
     """Gross-Pitaevskii Equation simulator for quantum fluids."""

     _DEFAULT_SIM_PARAMS: ClassVar[dict[str, Any]] = {
@@ -543,11 +543,17 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return sol.flatten().unsqueeze(0)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Run sampled trajectories and return `[batch,time,x,y,channels]` data."""
-        x = self.sample_inputs(n, random_seed)
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         channels = 2  # density, phase
         features_per_step = self.n * self.n * channels
diff --git a/src/autosim/experimental/simulations/hydrodynamics_2d.py b/src/autosim/experimental/simulations/hydrodynamics_2d.py
index 784318b..f66c071 100644
--- a/src/autosim/experimental/simulations/hydrodynamics_2d.py
+++ b/src/autosim/experimental/simulations/hydrodynamics_2d.py
@@ -13,11 +13,11 @@
 import torch

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import TensorLike


-class Hydrodynamics2D(Simulator):
+class Hydrodynamics2D(SpatioTemporalSimulator):
     r"""Simplified 2D hydrodynamics simulator with no magnetic field.

     Parameters
@@ -95,11 +95,17 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return out.flatten().unsqueeze(0)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Run sampled trajectories and return spatiotemporal tensors."""
-        x = self.sample_inputs(n, random_seed)
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         channels = 3
         features_per_step = self.n * self.n * channels
diff --git a/src/autosim/experimental/simulations/lattice_boltzmann.py b/src/autosim/experimental/simulations/lattice_boltzmann.py
index ff45076..52dbc74 100644
--- a/src/autosim/experimental/simulations/lattice_boltzmann.py
+++ b/src/autosim/experimental/simulations/lattice_boltzmann.py
@@ -10,11 +10,11 @@
 import torch

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import TensorLike


-class LatticeBoltzmann(Simulator):
+class LatticeBoltzmann(SpatioTemporalSimulator):
     r"""Lattice Boltzmann (D2Q9) simulator for channel flow with obstacles.

     Simulates 2D flow past a cylinder using the BGK collision model.
@@ -169,16 +169,17 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return out.flatten().unsqueeze(0)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Run sampled trajectories and return spatiotemporal tensors."""
-        x = self.sample_inputs(n, random_seed)
-
-        outputs = []
-        for i in range(n):
-            outputs.append(self._forward(x[i : i + 1]))
-
-        y = torch.cat(outputs, dim=0)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         # LBM outputs: [B, Steps*Features] or [B, Features]
         channels = len(self.output_names)  # 4
@@ -187,7 +188,7 @@ def forward_samples_spatiotemporal(
         if self.return_timeseries:
             total_elements = y.shape[1]
             steps = total_elements // features_per_frame
-            y_reshaped = y.reshape(n, steps, self.height, self.width, channels)
+            y_reshaped = y.reshape(y.shape[0], steps, self.height, self.width, channels)

             if self.skip_nt >= steps:
                 raise ValueError(
@@ -196,7 +197,7 @@
                 )
             y_reshaped = y_reshaped[:, self.skip_nt :, ...]
         else:
-            y_reshaped = y.reshape(n, 1, self.height, self.width, channels)
+            y_reshaped = y.reshape(y.shape[0], 1, self.height, self.width, channels)

         return {
             "data": y_reshaped,
diff --git a/src/autosim/experimental/simulations/navier_stokes_conditioned.py b/src/autosim/experimental/simulations/navier_stokes_conditioned.py
index 8b859cf..ab0e023 100644
--- a/src/autosim/experimental/simulations/navier_stokes_conditioned.py
+++ b/src/autosim/experimental/simulations/navier_stokes_conditioned.py
@@ -4,7 +4,7 @@

 import torch

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import TensorLike
@@ -373,7 +373,7 @@ def _snapshot() -> torch.Tensor:
     return _snapshot()


-class ConditionedNavierStokes2D(Simulator):
+class ConditionedNavierStokes2D(SpatioTemporalSimulator):
     """Conditioned 2D Navier-Stokes smoke simulator inspired by PDEArena."""

     _DEFAULT_SMOOTHNESS = 6.0
@@ -525,11 +525,17 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return sol.flatten().unsqueeze(0)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Run sampled trajectories and return `[batch,time,x,y,channels]` data."""
-        x = self.sample_inputs(n, random_seed)
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         channels = 3
         features_per_step = self.n * self.n * channels
diff --git a/src/autosim/experimental/simulations/reaction_diffusion.py b/src/autosim/experimental/simulations/reaction_diffusion.py
new file mode 100644
index 0000000..4676e0c
--- /dev/null
+++ b/src/autosim/experimental/simulations/reaction_diffusion.py
@@ -0,0 +1,240 @@
+import numpy as np
+import torch
+from numpy.fft import fft2, ifft2
+from scipy.integrate import solve_ivp
+
+from autosim.simulations.base import SpatioTemporalSimulator
+from autosim.types import NumpyLike, TensorLike
+
+integrator_keywords = {"rtol": 1e-6, "atol": 1e-6, "method": "RK45"}
+
+
+class ReactionDiffusion(SpatioTemporalSimulator):
+    """Simulate the reaction-diffusion PDE for a given set of parameters."""
+
+    def __init__(
+        self,
+        parameters_range: dict[str, tuple[float, float]] | None = None,
+        output_names: list[str] | None = None,
+        return_timeseries: bool = False,
+        log_level: str = "progress_bar",
+        n: int = 32,
+        L: int = 20,
+        T: float = 10.0,
+        dt: float = 0.1,
+    ):
+        """
+        Initialize the ReactionDiffusion simulator.
+
+        Parameters
+        ----------
+        parameters_range: dict[str, tuple[float, float]]
+            Dictionary mapping input parameter names to their (min, max) ranges.
+        output_names: list[str]
+            List of output parameters' names.
+        log_level: str
+            Logging level for the simulator. Can be one of:
+            - "progress_bar": shows a progress bar during batch simulations
+            - "debug": shows debug messages
+            - "info": shows informational messages
+            - "warning": shows warning messages
+            - "error": shows error messages
+            - "critical": shows critical messages
+        return_timeseries: bool
+            Whether to return the full timeseries or just the spatial solution at the
+            final time step. Defaults to False.
+        n: int
+            Number of spatial points in each direction.
+        L: int
+            Domain size in X and Y directions.
+        T: float
+            Total time to simulate.
+        dt: float
+            Time step size.
+ """ + if parameters_range is None: + parameters_range = {"beta": (1.0, 2.0), "d": (0.05, 0.3)} + if output_names is None: + output_names = ["u", "v"] + super().__init__(parameters_range, output_names, log_level) + self.return_timeseries = return_timeseries + self.n = n + self.L = L + self.T = T + self.dt = dt + + def _forward(self, x: TensorLike) -> TensorLike: + assert x.shape[0] == 1, ( + f"Simulator._forward expects a single input, got {x.shape[0]}" + ) + u_sol, v_sol = simulate_reaction_diffusion( + x.cpu().numpy()[0], self.return_timeseries, self.n, self.L, self.T, self.dt + ) + + # concatenate U and V arrays (flattened across time and space) + concat_array = np.concatenate([u_sol.ravel(), v_sol.ravel()]) + + # return tensor shape (1, 2*self.t*self.n*self.n) + return torch.tensor(concat_array, dtype=torch.float32).reshape(1, -1) + + def forward_samples_spatiotemporal( + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, + ) -> dict: + """Reshape to spatiotemporal format. + + Parameters + ---------- + n: int + Number of samples to generate. + random_seed: int | None + Random seed for reproducibility. Defaults to None. + + Returns + ------- + dict + A dictionary containing the reshaped spatiotemporal data, constant scalars, + and constant fields. + """ + # Run simulation and optionally resample failed trajectories + y, x = self._forward_batch_with_optional_retries( + n=n, + random_seed=random_seed, + ensure_exact_n=ensure_exact_n, + ) + + # Reshape and permute output + y_reshaped_permuted = y.reshape( + y.shape[0], 2, int(self.T / self.dt), self.n, self.n + ).permute(0, 2, 3, 4, 1) + + return { + "data": y_reshaped_permuted, + "constant_scalars": x, + "constant_fields": None, + } + + +def reaction_diffusion( + t: float, # noqa: ARG001 + uvt: NumpyLike, + K22: NumpyLike, + d1: float, + d2: float, + beta: float, + n: int, + N: int, +): + """ + Define the reaction-diffusion PDE in the Fourier (kx, ky) space. 
+
+    Parameters
+    ----------
+    t: float
+        The current time step (not used).
+    uvt: NumpyLike
+        Fourier transformed solution vector at current time step (length 2*N, 1-D).
+    K22: NumpyLike
+        Squared Fourier wavenumbers, shape (N,).
+    d1: float
+        The diffusion coefficient for species 1.
+    d2: float
+        The diffusion coefficient for species 2.
+    beta: float
+        The reaction coefficient controlling reaction between the two species.
+    n: int
+        Number of spatial points in each direction.
+    N: int
+        Total number of spatial grid points (n*n).
+    """
+    u = np.real(ifft2(uvt[:N].reshape(n, n)))
+    v = np.real(ifft2(uvt[N:].reshape(n, n)))
+    u2v = u * u * v
+    uv2 = u * v * v
+    rhs = np.empty(2 * N, dtype=complex)
+    rhs[:N] = fft2(u - u**3 - uv2 + beta * (u2v + v**3)).ravel() - d1 * K22 * uvt[:N]
+    rhs[N:] = fft2(v - u2v - v**3 - beta * (u**3 + uv2)).ravel() - d2 * K22 * uvt[N:]
+    return rhs
+
+
+def simulate_reaction_diffusion(
+    x: NumpyLike,
+    return_timeseries: bool = False,
+    n: int = 32,
+    L: int = 20,
+    T: float = 10.0,
+    dt: float = 0.1,
+) -> tuple[NumpyLike, NumpyLike]:
+    """
+    Simulate the reaction-diffusion PDE for a given set of parameters.
+
+    Parameters
+    ----------
+    x: NumpyLike
+        The parameters of the reaction-diffusion model. The first element is the
+        reaction coefficient (beta) and the second element is the diffusion
+        coefficient (d).
+    return_timeseries: bool
+        Whether to return the full timeseries or just the spatial solution at the final
+        time step. Defaults to False.
+    n: int
+        Number of spatial points in each direction. Defaults to 32.
+    L: int
+        Domain size in X and Y directions. Defaults to 20.
+    T: float
+        Total time to simulate. Defaults to 10.0.
+    dt: float
+        Time step size. Defaults to 0.1.
+
+    Returns
+    -------
+    tuple[NumpyLike, NumpyLike]
+        [u_sol, v_sol], the spatial solution of the reaction-diffusion PDE, either as a
+        timeseries or at the final time point if `return_timeseries` is False.
+ """ + beta, d = x + d1 = d2 = d + + t = np.linspace(0, T, int(T / dt)) + + N = n * n + x_uniform = np.linspace(-L / 2, L / 2, n + 1) + x_grid = x_uniform[:n] + n2 = n // 2 + kx = (2 * np.pi / L) * np.hstack( + (np.linspace(0, n2 - 1, n2), np.linspace(-n2, -1, n2)) + ) + X_grid, Y_grid = np.meshgrid(x_grid, x_grid) + KX, KY = np.meshgrid(kx, kx) + K2 = KX**2 + KY**2 + K22 = K2.ravel() + + r = np.sqrt(X_grid**2 + Y_grid**2) + theta = np.angle(X_grid + 1j * Y_grid) + u0 = np.tanh(r) * np.cos(theta - r) + v0 = np.tanh(r) * np.sin(theta - r) + + uvt0 = np.hstack([fft2(u0).ravel(), fft2(v0).ravel()]) + + uvsol = solve_ivp( + reaction_diffusion, + (t[0], t[-1]), + y0=uvt0, + t_eval=t, + args=(K22, d1, d2, beta, n, N), + **integrator_keywords, + ) + uvsol = uvsol.y + + u_out = np.array( + [np.real(ifft2(uvsol[:N, j].reshape(n, n))) for j in range(len(t))] + ) + v_out = np.array( + [np.real(ifft2(uvsol[N:, j].reshape(n, n))) for j in range(len(t))] + ) + + if return_timeseries: + return u_out, v_out + return u_out[-1], v_out[-1] diff --git a/src/autosim/experimental/simulations/shallow_water.py b/src/autosim/experimental/simulations/shallow_water.py index 50531ae..40eab8f 100644 --- a/src/autosim/experimental/simulations/shallow_water.py +++ b/src/autosim/experimental/simulations/shallow_water.py @@ -4,11 +4,11 @@ import torch -from autosim.simulations.base import Simulator +from autosim.simulations.base import SpatioTemporalSimulator from autosim.types import TensorLike -class ShallowWater2D(Simulator): +class ShallowWater2D(SpatioTemporalSimulator): """Full 2D shallow-water simulator with prognostic [h, u, v].""" def __init__( @@ -22,7 +22,8 @@ def __init__( Lx: float = 64.0, Ly: float = 128.0, T: float = 90.0, - dt_save: float = 1.0, + dt_save: float = 0.2, + skip_nt: int = 0, cfl: float = 0.12, g: float = 9.81, H: float = 1.0, @@ -31,11 +32,14 @@ def __init__( dtype: torch.dtype = torch.float64, ) -> None: if parameters_range is None: - parameters_range = {"amp": 
(0.05, 0.2)} + parameters_range = {"amp": (0.05, 0.14)} if output_names is None: output_names = ["h", "u", "v"] super().__init__(parameters_range, output_names, log_level) + if skip_nt < 0: + msg = "skip_nt must be non-negative" + raise ValueError(msg) self.return_timeseries = return_timeseries self.nx = nx self.ny = ny @@ -43,6 +47,7 @@ def __init__( self.Ly = Ly self.T = T self.dt_save = dt_save + self.skip_nt = skip_nt self.cfl = cfl self.g = g self.H = H @@ -63,6 +68,7 @@ def _forward(self, x: TensorLike) -> TensorLike: Ly=self.Ly, T=self.T, dt_save=self.dt_save, + skip_nt=self.skip_nt, cfl=self.cfl, g=self.g, H=self.H, @@ -73,11 +79,18 @@ def _forward(self, x: TensorLike) -> TensorLike: return y.flatten().unsqueeze(0) def forward_samples_spatiotemporal( - self, n: int, random_seed: int | None = None + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, ) -> dict: """Run sampled trajectories and return `[batch,time,x,y,channels]` data.""" - x = self.sample_inputs(n, random_seed) - y, x = self.forward_batch(x) + y, x = self._forward_batch_with_optional_retries( + n=n, + random_seed=random_seed, + ensure_exact_n=ensure_exact_n, + ) + n_valid = y.shape[0] channels = 3 features_per_step = self.nx * self.ny * channels @@ -85,9 +98,9 @@ def forward_samples_spatiotemporal( if self.return_timeseries: total = y.shape[1] n_time = total // features_per_step - y = y.reshape(n, n_time, self.nx, self.ny, channels) + y = y.reshape(n_valid, n_time, self.nx, self.ny, channels) else: - y = y.reshape(n, 1, self.nx, self.ny, channels) + y = y.reshape(n_valid, 1, self.nx, self.ny, channels) return { "data": y, @@ -96,7 +109,7 @@ def forward_samples_spatiotemporal( } -def simulate_swe_2d( # noqa: PLR0915 +def simulate_swe_2d( # noqa: PLR0912, PLR0915 amp: float, return_timeseries: bool, nx: int, @@ -111,11 +124,15 @@ def simulate_swe_2d( # noqa: PLR0915 nu: float, drag: float, dtype: torch.dtype = torch.float64, + skip_nt: int = 0, ) -> torch.Tensor: 
"""Integrate full shallow-water equations with PDEArena-style random2 ICs.""" if dtype not in (torch.float32, torch.float64): msg = "dtype must be torch.float32 or torch.float64" raise ValueError(msg) + if skip_nt < 0: + msg = "skip_nt must be non-negative" + raise ValueError(msg) complex_dtype = torch.complex64 if dtype == torch.float32 else torch.complex128 x = torch.linspace(0.0, Lx, nx + 1, dtype=dtype)[:-1] @@ -285,11 +302,25 @@ def output(h: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor: u = u0 v = v0 + h_min_bound = 1e-4 + h_max_bound = 100.0 + uv_abs_bound = 100.0 + saturation_frac_threshold = 0.01 + + def _saturation_fraction( + h_curr: torch.Tensor, u_curr: torch.Tensor, v_curr: torch.Tensor + ) -> float: + h_sat = ((h_curr <= h_min_bound) | (h_curr >= h_max_bound)).float().mean() + u_sat = (u_curr.abs() >= uv_abs_bound).float().mean() + v_sat = (v_curr.abs() >= uv_abs_bound).float().mean() + return float(torch.maximum(torch.maximum(h_sat, u_sat), v_sat).item()) + history: list[torch.Tensor] = [] expected_frames = int(T / dt_save) + 1 t = 0.0 next_save = 0.0 last_valid = output(h, u, v) + failure_reason: str | None = None while t <= T + 1e-10: if not ( @@ -297,6 +328,10 @@ def output(h: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor: and torch.isfinite(u).all() and torch.isfinite(v).all() ): + failure_reason = "non-finite state encountered" + break + if _saturation_fraction(h, u, v) >= saturation_frac_threshold: + failure_reason = "state saturated at clipping bounds" break if return_timeseries and t >= next_save - 1e-10: @@ -312,6 +347,7 @@ def output(h: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor: speed_y = (v.abs() + c_now).max().item() max_speed = max(speed_x, speed_y, 1e-8) if not math.isfinite(max_speed): + failure_reason = "non-finite wave speed" break dt = cfl * min(dx, dy) / max_speed @@ -353,15 +389,31 @@ def output(h: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor: and 
torch.isfinite(u).all() and torch.isfinite(v).all() ): + if _saturation_fraction(h, u, v) >= saturation_frac_threshold: + failure_reason = "state saturated at clipping bounds after step" + break last_valid = output(h, u, v) else: + failure_reason = "non-finite state after step" break t += dt + if failure_reason is not None: + raise RuntimeError( + "ShallowWater2D simulation failed: " + f"{failure_reason} at t={t:.6f} (amp={amp:.6f})." + ) + if return_timeseries: + if skip_nt >= expected_frames: + msg = ( + "skip_nt is too large for the available trajectory length; " + f"skip_nt={skip_nt}, available_frames={expected_frames}." + ) + raise ValueError(msg) while len(history) < expected_frames: history.append(last_valid) if len(history) > expected_frames: history = history[:expected_frames] - return torch.stack(history, dim=0) + return torch.stack(history[skip_nt:], dim=0) return output(h, u, v).unsqueeze(0) diff --git a/src/autosim/simulations/__init__.py b/src/autosim/simulations/__init__.py index fac7bef..3a47fc6 100644 --- a/src/autosim/simulations/__init__.py +++ b/src/autosim/simulations/__init__.py @@ -3,11 +3,9 @@ from .epidemic import Epidemic from .flow_problem import FlowProblem from .projectile import Projectile, ProjectileMultioutput -from .reaction_diffusion import ReactionDiffusion from .seir import SEIRSimulator ALL_SIMULATORS = [ - ReactionDiffusion, AdvectionDiffusion, AdvectionDiffusionMultichannel, Epidemic, @@ -24,7 +22,6 @@ "FlowProblem", "Projectile", "ProjectileMultioutput", - "ReactionDiffusion", "SEIRSimulator", ] diff --git a/src/autosim/simulations/advection_diffusion.py b/src/autosim/simulations/advection_diffusion.py index 5e605be..2b1ab38 100644 --- a/src/autosim/simulations/advection_diffusion.py +++ b/src/autosim/simulations/advection_diffusion.py @@ -4,7 +4,7 @@ from scipy.fft import fft2, ifft2 from scipy.integrate import solve_ivp -from autosim.simulations.base import Simulator +from autosim.simulations.base import 
SpatioTemporalSimulator from autosim.types import NumpyLike, TensorLike integrator_keywords = {} @@ -13,7 +13,7 @@ integrator_keywords["atol"] = 1e-8 -class AdvectionDiffusion(Simulator): +class AdvectionDiffusion(SpatioTemporalSimulator): """Simulate the 2D vorticity equation (advection-diffusion).""" def __init__( @@ -55,7 +55,7 @@ def __init__( "mu": (0.5, 2.0), # advection strength } if output_names is None: - output_names = ["solution"] + output_names = ["vorticity"] super().__init__(parameters_range, output_names, log_level) self.return_timeseries = return_timeseries self.n = n @@ -75,12 +75,17 @@ def _forward(self, x: TensorLike) -> TensorLike: return torch.tensor(vorticity_sol.ravel(), dtype=torch.float32).reshape(1, -1) def forward_samples_spatiotemporal( - self, n: int, random_seed: int | None = None + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, ) -> dict: """Reshape to spatiotemporal format and return data plus constants.""" - x = self.sample_inputs(n, random_seed) - - y, x = self.forward_batch(x) + y, x = self._forward_batch_with_optional_retries( + n=n, + random_seed=random_seed, + ensure_exact_n=ensure_exact_n, + ) if self.return_timeseries: n_time = int(self.T / self.dt) diff --git a/src/autosim/simulations/advection_diffusion_multichannel.py b/src/autosim/simulations/advection_diffusion_multichannel.py index d320436..c1c7dbb 100644 --- a/src/autosim/simulations/advection_diffusion_multichannel.py +++ b/src/autosim/simulations/advection_diffusion_multichannel.py @@ -14,14 +14,14 @@ import torch from scipy.integrate import solve_ivp -from autosim.simulations.base import Simulator +from autosim.simulations.base import SpatioTemporalSimulator from autosim.types import NumpyLike, TensorLike # Integrator settings integrator_keywords = {"rtol": 1e-6, "atol": 1e-8, "method": "RK45"} -class AdvectionDiffusionMultichannel(Simulator): +class AdvectionDiffusionMultichannel(SpatioTemporalSimulator): r"""Differentiable 
advection-diffusion simulator exposing multi-channel outputs. Parameters @@ -30,6 +30,9 @@ class AdvectionDiffusionMultichannel(Simulator): Bounds on the sampled viscosity (`nu`) and advection strength (`mu`). output_names: list[str], optional Human-readable names for the returned channels. + output_indices: list[int], optional + Channel indices to keep from ``[vorticity, u, v, streamfunction]``. + Defaults to all channels in canonical order. return_timeseries: bool, default=False Whether `forward` returns the entire trajectory instead of a single snapshot. log_level: str, default="progress_bar" @@ -50,10 +53,13 @@ class AdvectionDiffusionMultichannel(Simulator): Each grid point emits four channels `[vorticity, u, v, streamfunction]`. """ + _ALL_CHANNEL_NAMES = ("vorticity", "u", "v", "streamfunction") + def __init__( self, parameters_range: dict[str, tuple[float, float]] | None = None, output_names: list[str] | None = None, + output_indices: list[int] | None = None, return_timeseries: bool = False, log_level: str = "progress_bar", n: int = 32, @@ -64,11 +70,38 @@ def __init__( ) -> None: if parameters_range is None: parameters_range = {"nu": (0.0001, 0.01), "mu": (0.5, 2.0)} + + if output_indices is None: + output_indices = [0, 1, 2, 3] + if len(output_indices) == 0: + msg = "output_indices must contain at least one channel index." + raise ValueError(msg) + if len(set(output_indices)) != len(output_indices): + msg = "output_indices must not contain duplicate channel indices." + raise ValueError(msg) + + invalid_indices = [idx for idx in output_indices if idx < 0 or idx > 3] + if invalid_indices: + msg = ( + "output_indices entries must be in the range [0, 3] for channels " + "[vorticity, u, v, streamfunction]. " + f"Received invalid indices: {invalid_indices}." 
+            )
+            raise ValueError(msg)
+
+        selected_output_names = [self._ALL_CHANNEL_NAMES[idx] for idx in output_indices]
         if output_names is None:
-            output_names = ["vorticity", "u", "v", "streamfunction"]
+            output_names = selected_output_names
+        elif len(output_names) != len(output_indices):
+            msg = (
+                "output_names length must match selected output_indices length. "
+                f"Received {len(output_names)} names for {len(output_indices)} indices."
+            )
+            raise ValueError(msg)

         super().__init__(parameters_range, output_names, log_level)
+        self.output_indices = output_indices
         self.return_timeseries = return_timeseries
         self.n = n
         self.L = L
@@ -102,7 +135,10 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return torch.from_numpy(arr.ravel()).reshape(1, -1)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Produce simulator rollouts along with the sampled parameters.
@@ -120,15 +156,17 @@ def forward_samples_spatiotemporal(
         ``data``
             Tensor of shape ``(batch, time, n, n, channels)`` if
             `return_timeseries` is ``True`` or ``(batch, 1, n, n, channels)``
-            otherwise. Channels follow `[vorticity, u, v, streamfunction]`.
+            otherwise. Channels follow the configured `output_indices` order.
         ``constant_scalars``
             Sampled `[nu, mu]` parameters.
         ``constant_fields``
             Placeholder for future field inputs; always ``None`` here.
         """
-        x = self.sample_inputs(n, random_seed)
-
-        y, x = self.forward_batch(x)
+        y, x = self._forward_batch_with_optional_retries(
+            n=n,
+            random_seed=random_seed,
+            ensure_exact_n=ensure_exact_n,
+        )

         channels = 4
         features_per_step = self.n * self.n * channels
@@ -150,6 +188,9 @@ def forward_samples_spatiotemporal(
             )
         y_reshaped = y.reshape(y.shape[0], 1, self.n, self.n, channels)

+        # Select configured subset of channels in user-specified order.
+        y_reshaped = y_reshaped[..., self.output_indices]
+
         return {
             "data": y_reshaped,
             "constant_scalars": x,
diff --git a/src/autosim/simulations/base.py b/src/autosim/simulations/base.py
index 46db00a..7214bab 100644
--- a/src/autosim/simulations/base.py
+++ b/src/autosim/simulations/base.py
@@ -1,3 +1,4 @@
+import abc
 import logging
 from abc import ABC, abstractmethod

@@ -8,7 +9,7 @@ from autosim.device import TorchDeviceMixin
 from autosim.logging import get_configured_logger
 from autosim.types import DeviceLike, TensorLike

-from autosim.utils import ValidationMixin, set_random_seed
+from autosim.validation import ValidationMixin, set_random_seed

 logger = logging.getLogger("autosim")

@@ -459,3 +460,95 @@ def forward_batch(
         self.results_tensor = results
         valid_inputs = valid_inputs.to(self.device)
         return results, valid_inputs
+
+
+class SpatioTemporalSimulator(Simulator, abc.ABC):
+    """
+    Base class for simulators that output spatiotemporal data.
+
+    This class extends the base Simulator with additional functionality for
+    handling spatiotemporal outputs, such as reshaping to spatiotemporal format
+    and returning constant fields.
+    """
+
+    @abstractmethod
+    def forward_samples_spatiotemporal(
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
+    ) -> dict:
+        """
+        Generate spatiotemporal samples from the simulator.
+
+        Parameters
+        ----------
+        n: int
+            Number of samples to generate.
+        random_seed: int | None
+            Random seed for reproducibility. Defaults to None.
+        ensure_exact_n: bool
+            When True, retry failed simulations until exactly ``n`` successful
+            samples are collected. Defaults to False.
+
+        Returns
+        -------
+        dict
+            A dictionary containing the reshaped spatiotemporal data, constant scalars,
+            and constant fields.
+ """ + + @staticmethod + def _retry_budget(n: int) -> int: + """Max retry rounds when collecting ``n`` samples with ``ensure_exact_n``.""" + return max(100, 20 * n) + + def _forward_batch_with_optional_retries( + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, + ) -> tuple[TensorLike, TensorLike]: + """Run a batch and optionally retry until exactly ``n`` successes. + + Parameters + ---------- + n: int + Number of successful samples requested. + random_seed: int | None + Base random seed for deterministic sampling. + ensure_exact_n: bool + Whether to keep resampling failed simulations. + + Returns + ------- + tuple[TensorLike, TensorLike] + Tuple of ``(simulation_results, valid_input_parameters)``. + """ + x = self.sample_inputs(n, random_seed) + y, x_valid = self.forward_batch(x) + + if not ensure_exact_n or y.shape[0] == n: + return y, x_valid + + y_parts = [y] if y.shape[0] > 0 else [] + x_parts = [x_valid] if x_valid.shape[0] > 0 else [] + successful = y.shape[0] + + for retry_round in range(1, self._retry_budget(n) + 1): + remaining = n - successful + retry_seed = None if random_seed is None else random_seed + retry_round + y_b, x_b = self.forward_batch(self.sample_inputs(remaining, retry_seed)) + if y_b.shape[0] > 0: + y_parts.append(y_b) + x_parts.append(x_b) + successful += y_b.shape[0] + if successful >= n: + break + else: + raise RuntimeError( + f"Could not collect exactly n={n} successful samples after " + f"{self._retry_budget(n)} retry rounds. Collected {successful}." 
+            )
+
+        return torch.cat(y_parts, dim=0)[:n], torch.cat(x_parts, dim=0)[:n]
diff --git a/src/autosim/simulations/reaction_diffusion.py b/src/autosim/simulations/reaction_diffusion.py
index 8a0e769..71c6b9a 100644
--- a/src/autosim/simulations/reaction_diffusion.py
+++ b/src/autosim/simulations/reaction_diffusion.py
@@ -3,7 +3,7 @@
 from numpy.fft import fft2, ifft2
 from scipy.integrate import solve_ivp

-from autosim.simulations.base import Simulator
+from autosim.simulations.base import SpatioTemporalSimulator
 from autosim.types import NumpyLike, TensorLike

 integrator_keywords = {}
@@ -12,7 +12,7 @@
 integrator_keywords["atol"] = 1e-12


-class ReactionDiffusion(Simulator):
+class ReactionDiffusion(SpatioTemporalSimulator):
     """Simulate the reaction-diffusion PDE for a given set of parameters."""

     def __init__(
@@ -58,7 +58,7 @@ def __init__(
         if parameters_range is None:
             parameters_range = {"beta": (1.0, 2.0), "d": (0.05, 0.3)}
         if output_names is None:
-            output_names = ["solution"]
+            output_names = ["u", "v"]
         super().__init__(parameters_range, output_names, log_level)
         self.return_timeseries = return_timeseries
         self.n = n
@@ -81,7 +81,10 @@ def _forward(self, x: TensorLike) -> TensorLike:
         return torch.tensor(concat_array, dtype=torch.float32).reshape(1, -1)

     def forward_samples_spatiotemporal(
-        self, n: int, random_seed: int | None = None
+        self,
+        n: int,
+        random_seed: int | None = None,
+        ensure_exact_n: bool = False,
     ) -> dict:
         """Reshape to spatiotemporal format.

@@ -98,11 +101,12 @@ def forward_samples_spatiotemporal(
            A dictionary containing the reshaped spatiotemporal data, constant scalars,
            and constant fields.
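As an aside, the `_forward_batch_with_optional_retries` helper added to `base.py` above follows a general "collect until exactly n successes" pattern. A minimal standalone sketch of that pattern (toy sampler and success rule, plain Python rather than the autosim API; all names here are illustrative):

```python
import random


def retry_budget(n: int) -> int:
    # Generous cap on retry rounds, mirroring _retry_budget above.
    return max(100, 20 * n)


def run_batch(k: int, seed: int) -> list[float]:
    # Toy stand-in for forward_batch: draw k samples, keep only "successes"
    # (draws above 0.5); failed rows are silently dropped, as in forward_batch.
    rng = random.Random(seed)
    return [d for d in (rng.random() for _ in range(k)) if d > 0.5]


def collect_exact_n(n: int, seed: int) -> list[float]:
    parts = run_batch(n, seed)
    for retry_round in range(1, retry_budget(n) + 1):
        if len(parts) >= n:
            break
        # Re-sample only the shortfall, with a per-round seed offset so
        # retries do not repeat the original draws.
        parts += run_batch(n - len(parts), seed + retry_round)
    else:
        raise RuntimeError(f"could not collect n={n} successes within the retry budget")
    return parts[:n]


samples = collect_exact_n(8, seed=0)
```

The per-round seed offset is the same trick the diff uses (`random_seed + retry_round`): deterministic overall, but each retry round explores fresh parameter draws.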
""" - # Sample inputs - x = self.sample_inputs(n, random_seed) - - # Run simulation - y, x = self.forward_batch(x) + # Run simulation and optionally resample failed trajectories + y, x = self._forward_batch_with_optional_retries( + n=n, + random_seed=random_seed, + ensure_exact_n=ensure_exact_n, + ) # Reshape and permute output y_reshaped_permuted = y.reshape( diff --git a/src/autosim/utils.py b/src/autosim/utils.py index e2cb2be..2c4f9e8 100644 --- a/src/autosim/utils.py +++ b/src/autosim/utils.py @@ -1,8 +1,8 @@ -import random -from typing import Literal, Protocol +from __future__ import annotations + +from typing import Literal import numpy as np -import torch from einops import rearrange from matplotlib import animation from matplotlib import pyplot as plt @@ -10,13 +10,7 @@ from matplotlib.gridspec import GridSpec from torch import Tensor -from autosim.types import OutputLike, TensorLike, TorchScalarDType - - -class SpatioTemporalSimulator(Protocol): # noqa: D101 - def forward_samples_spatiotemporal( # noqa: D102 - self, n: int, random_seed: int | None = None - ) -> dict: ... +from autosim.simulations.base import SpatioTemporalSimulator def generate_output_data( @@ -32,284 +26,6 @@ def generate_output_data( return {"train": train, "valid": valid, "test": test} -class ValidationMixin: - """ - Mixin class for validation methods. - - This class provides static methods for checking the types and shapes of - input and output data, as well as validating specific tensor shapes. - """ - - @staticmethod - def _check(x: TensorLike, y: TensorLike | None): - """ - Check the types and shape are correct for the input data. - - Checks are equivalent to sklearn's check_array. 
- """ - if not isinstance(x, TensorLike): - raise ValueError(f"Expected x to be TensorLike, got {type(x)}") - - if y is not None and not isinstance(y, TensorLike): - raise ValueError(f"Expected y to be TensorLike, got {type(y)}") - - # Check x - if not torch.isfinite(x).all(): - msg = "Input tensor x contains non-finite values" - raise ValueError(msg) - if x.dtype not in TorchScalarDType: - msg = ( - f"Input tensor x has unsupported dtype {x.dtype}. " - "Expected float32, float64, int32, or int64." - ) - raise ValueError(msg) - - # Check y if not None - if y is not None: - if not torch.isfinite(y).all(): - msg = "Input tensor y contains non-finite values" - raise ValueError(msg) - if y.dtype not in TorchScalarDType: - msg = ( - f"Input tensor y has unsupported dtype {y.dtype}. " - "Expected float32, float64, int32, or int64." - ) - raise ValueError(msg) - - return x, y - - @staticmethod - def _check_output(output: OutputLike): - """Check the types and shape are correct for the output data.""" - if not isinstance(output, OutputLike): - raise ValueError(f"Expected OutputLike, got {type(output)}") - - @staticmethod - def check_vector(x: TensorLike) -> TensorLike: - """ - Validate that the input is a 1D TensorLike. - - Parameters - ---------- - x: TensorLike - Input tensor to validate. - - Returns - ------- - TensorLike - Validated 1D tensor. - - Raises - ------ - ValueError - If x is not a TensorLike or is not 1-dimensional. - """ - if not isinstance(x, TensorLike): - raise ValueError(f"Expected TensorLike, got {type(x)}") - if x.ndim != 1: - raise ValueError(f"Expected 1D tensor, got {x.ndim}D") - return x - - @staticmethod - def check_tensor_is_2d(x: TensorLike) -> TensorLike: - """ - Validate that the input is a 2D TensorLike. - - Parameters - ---------- - x: TensorLike - Input tensor to validate. - - Returns - ------- - TensorLike - Validated 2D tensor. - - Raises - ------ - ValueError - If x is not a TensorLike or is not 2-dimensional. 
- """ - if not isinstance(x, TensorLike): - raise ValueError(f"Expected TensorLike, got {type(x)}") - if x.ndim != 2: - raise ValueError(f"Expected 2D tensor, got {x.ndim}D") - return x - - @staticmethod - def check_pair(x: TensorLike, y: TensorLike) -> tuple[TensorLike, TensorLike]: - """ - Validate that two tensors have the same number of rows. - - Parameters - ---------- - x: TensorLike - First tensor. - y: TensorLike - Second tensor. - - Returns - ------- - tuple[TensorLike, TensorLike] - The validated pair of tensors. - - Raises - ------ - ValueError - If x and y do not have the same number of rows. - """ - if x.shape[0] != y.shape[0]: - msg = "x and y must have the same number of rows" - raise ValueError(msg) - return x, y - - @staticmethod - def check_covariance(y: TensorLike, Sigma: TensorLike) -> TensorLike: - """ - Validate and return the covariance matrix. - - Parameters - ---------- - y: TensorLike - Output tensor. - Sigma: TensorLike - Covariance matrix, which may be full, diagonal, or a scalar per sample. - - Returns - ------- - TensorLike - Validated covariance matrix. - - Raises - ------ - ValueError - If Sigma does not have a valid shape relative to y. - """ - if ( - Sigma.shape == (y.shape[0], y.shape[1], y.shape[1]) - or Sigma.shape == (y.shape[0], y.shape[1]) - or Sigma.shape == (y.shape[0],) - ): - return Sigma - msg = "Invalid covariance matrix shape" - raise ValueError(msg) - - @staticmethod - def trace(Sigma: TensorLike, d: int) -> TensorLike: - """ - Compute the trace of the covariance matrix (A-optimal design criterion). - - Parameters - ---------- - Sigma: TensorLike - Covariance matrix (full, diagonal, or scalar). - d: int - Dimension of the output. - - Returns - ------- - TensorLike - The computed trace value. - - Raises - ------ - ValueError - If Sigma does not have a valid shape. 
- """ - if Sigma.dim() == 3 and Sigma.shape[1:] == (d, d): - return torch.diagonal(Sigma, dim1=1, dim2=2).sum(dim=1).mean() - if Sigma.dim() == 2 and Sigma.shape[1] == d: - return Sigma.sum(dim=1).mean() - if Sigma.dim() == 1: - return d * Sigma.mean() - raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") - - @staticmethod - def logdet(Sigma: TensorLike, dim: int) -> TensorLike: - """ - Return the log-determinant of the covariance matrix. - - Compute the log-determinant of the covariance matrix (D-optimal design - criterion). - - Parameters - ---------- - Sigma: TensorLike - Covariance matrix (full, diagonal, or scalar). - dim: int - Dimension of the output. - - Returns - ------- - TensorLike - The computed log-determinant value. - - Raises - ------ - ValueError - If Sigma does not have a valid shape. - """ - if len(Sigma.shape) == 3 and Sigma.shape[1:] == (dim, dim): - return torch.logdet(Sigma).mean() - if len(Sigma.shape) == 2 and Sigma.shape[1] == dim: - return torch.sum(torch.log(Sigma), dim=1).mean() - if len(Sigma.shape) == 1: - return dim * torch.log(Sigma).mean() - raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") - - @staticmethod - def max_eigval(Sigma: TensorLike) -> TensorLike: - """ - Return the maximum eigenvalue of the covariance matrix. - - Compute the maximum eigenvalue of the covariance matrix (E-optimal design - criterion). - - Parameters - ---------- - Sigma: TensorLike - Covariance matrix (full, diagonal, or scalar). - - Returns - ------- - TensorLike - The average maximum eigenvalue. - - Raises - ------ - ValueError - If Sigma does not have a valid shape. 
- """ - if Sigma.dim() == 3 and Sigma.shape[1:] == (Sigma.shape[1], Sigma.shape[1]): - eigvals = torch.linalg.eigvalsh(Sigma) - return eigvals[:, -1].mean() # Eigenvalues are sorted in ascending order - if Sigma.dim() == 2: - return Sigma.max(dim=1).values.mean() - if Sigma.dim() == 1: - return Sigma.mean() - raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") - - -def set_random_seed(seed: int = 42, deterministic: bool = True): - """ - Set random seed for Python, NumPy and PyTorch. - - Parameters - ---------- - seed: int - The random seed to use. - deterministic: bool - Use "deterministic" algorithms in PyTorch. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - if deterministic: - torch.backends.cudnn.benchmark = False - torch.use_deterministic_algorithms(True) - - def plot_spatiotemporal_video( # noqa: PLR0915, PLR0912 true: Tensor, pred: Tensor | None = None, @@ -388,6 +104,14 @@ def plot_spatiotemporal_video( # noqa: PLR0915, PLR0912 if pred_uq_batch is not None: pred_uq_batch = pred_uq_batch.detach().cpu().numpy() + default_channel_names = [f"Channel {ch}" for ch in range(C)] + if channel_names is None: + resolved_channel_names = default_channel_names + else: + resolved_channel_names = default_channel_names.copy() + for idx, name in enumerate(channel_names[:C]): + resolved_channel_names[idx] = str(name) + primary_rows = [true_batch] # Calculate difference @@ -511,14 +235,11 @@ def _to_imshow_frame( norm = uq_norm else: norm = diff_norm - im = ax.imshow(frame0, cmap=row_cmap, aspect="auto", norm=norm) + aspect = "equal" if preserve_aspect else "auto" + im = ax.imshow(frame0, cmap=row_cmap, aspect=aspect, norm=norm) if row_idx == 0: - ( - ax.set_title(f"Channel {ch}") - if channel_names is None - else ax.set_title(f"{channel_names[ch]}") - ) + ax.set_title(resolved_channel_names[ch]) if ch == 0: ax.set_ylabel(row_label) diff --git a/src/autosim/validation.py b/src/autosim/validation.py new 
file mode 100644 index 0000000..ce61bc5 --- /dev/null +++ b/src/autosim/validation.py @@ -0,0 +1,133 @@ +from __future__ import annotations + +import random + +import numpy as np +import torch + +from autosim.types import OutputLike, TensorLike, TorchScalarDType + + +class ValidationMixin: + """Mixin class for validation methods.""" + + @staticmethod + def _check(x: TensorLike, y: TensorLike | None): + """Check the types and shape are correct for the input data.""" + if not isinstance(x, TensorLike): + raise ValueError(f"Expected x to be TensorLike, got {type(x)}") + + if y is not None and not isinstance(y, TensorLike): + raise ValueError(f"Expected y to be TensorLike, got {type(y)}") + + if not torch.isfinite(x).all(): + msg = "Input tensor x contains non-finite values" + raise ValueError(msg) + if x.dtype not in TorchScalarDType: + msg = ( + f"Input tensor x has unsupported dtype {x.dtype}. " + "Expected float32, float64, int32, or int64." + ) + raise ValueError(msg) + + if y is not None: + if not torch.isfinite(y).all(): + msg = "Input tensor y contains non-finite values" + raise ValueError(msg) + if y.dtype not in TorchScalarDType: + msg = ( + f"Input tensor y has unsupported dtype {y.dtype}. " + "Expected float32, float64, int32, or int64." 
+ ) + raise ValueError(msg) + + return x, y + + @staticmethod + def _check_output(output: OutputLike): + """Check the types and shape are correct for the output data.""" + if not isinstance(output, OutputLike): + raise ValueError(f"Expected OutputLike, got {type(output)}") + + @staticmethod + def check_vector(x: TensorLike) -> TensorLike: + """Validate that the input is a 1D TensorLike.""" + if not isinstance(x, TensorLike): + raise ValueError(f"Expected TensorLike, got {type(x)}") + if x.ndim != 1: + raise ValueError(f"Expected 1D tensor, got {x.ndim}D") + return x + + @staticmethod + def check_tensor_is_2d(x: TensorLike) -> TensorLike: + """Validate that the input is a 2D TensorLike.""" + if not isinstance(x, TensorLike): + raise ValueError(f"Expected TensorLike, got {type(x)}") + if x.ndim != 2: + raise ValueError(f"Expected 2D tensor, got {x.ndim}D") + return x + + @staticmethod + def check_pair(x: TensorLike, y: TensorLike) -> tuple[TensorLike, TensorLike]: + """Validate that two tensors have the same number of rows.""" + if x.shape[0] != y.shape[0]: + msg = "x and y must have the same number of rows" + raise ValueError(msg) + return x, y + + @staticmethod + def check_covariance(y: TensorLike, Sigma: TensorLike) -> TensorLike: + """Validate and return the covariance matrix.""" + if ( + Sigma.shape == (y.shape[0], y.shape[1], y.shape[1]) + or Sigma.shape == (y.shape[0], y.shape[1]) + or Sigma.shape == (y.shape[0],) + ): + return Sigma + msg = "Invalid covariance matrix shape" + raise ValueError(msg) + + @staticmethod + def trace(Sigma: TensorLike, d: int) -> TensorLike: + """Compute the trace of the covariance matrix.""" + if Sigma.dim() == 3 and Sigma.shape[1:] == (d, d): + return torch.diagonal(Sigma, dim1=1, dim2=2).sum(dim=1).mean() + if Sigma.dim() == 2 and Sigma.shape[1] == d: + return Sigma.sum(dim=1).mean() + if Sigma.dim() == 1: + return d * Sigma.mean() + raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") + + @staticmethod + def 
logdet(Sigma: TensorLike, dim: int) -> TensorLike: + """Compute the log-determinant of the covariance matrix.""" + if len(Sigma.shape) == 3 and Sigma.shape[1:] == (dim, dim): + return torch.logdet(Sigma).mean() + if len(Sigma.shape) == 2 and Sigma.shape[1] == dim: + return torch.sum(torch.log(Sigma), dim=1).mean() + if len(Sigma.shape) == 1: + return dim * torch.log(Sigma).mean() + raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") + + @staticmethod + def max_eigval(Sigma: TensorLike) -> TensorLike: + """Compute the maximum eigenvalue of the covariance matrix.""" + if Sigma.dim() == 3 and Sigma.shape[1:] == (Sigma.shape[1], Sigma.shape[1]): + eigvals = torch.linalg.eigvalsh(Sigma) + return eigvals[:, -1].mean() + if Sigma.dim() == 2: + return Sigma.max(dim=1).values.mean() + if Sigma.dim() == 1: + return Sigma.mean() + raise ValueError(f"Invalid covariance matrix shape: {Sigma.shape}") + + +def set_random_seed(seed: int = 42, deterministic: bool = True): + """Set random seed for Python, NumPy and PyTorch.""" + random.seed(seed) + np.random.seed(seed) + torch.manual_seed(seed) + torch.cuda.manual_seed(seed) + if deterministic: + torch.backends.cudnn.benchmark = False + torch.use_deterministic_algorithms(True) diff --git a/tests/simulations/test_advection_diffusion_multichannel.py b/tests/simulations/test_advection_diffusion_multichannel.py new file mode 100644 index 0000000..69ff406 --- /dev/null +++ b/tests/simulations/test_advection_diffusion_multichannel.py @@ -0,0 +1,52 @@ +from __future__ import annotations + +import pytest +import torch + +from autosim.simulations import AdvectionDiffusionMultichannel + + +def test_output_indices_validation() -> None: + with pytest.raises(ValueError, match="at least one"): + AdvectionDiffusionMultichannel(output_indices=[]) + + with pytest.raises(ValueError, match="must not contain duplicate"): + AdvectionDiffusionMultichannel(output_indices=[0, 0]) + + with pytest.raises(ValueError, match=r"range \[0, 3\]"): 
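For reference, the A- and D-optimal criteria carried over into `validation.py` reduce, in the diagonal-covariance branch, to simple per-sample sums. A hedged standalone sketch of just that branch (plain lists instead of tensors; `trace_criterion`/`logdet_criterion` are illustrative names, not the `ValidationMixin` API):

```python
import math


def trace_criterion(sigma_diag: list[list[float]]) -> float:
    # A-optimal: trace of each sample's diagonal covariance, averaged over the batch.
    return sum(sum(row) for row in sigma_diag) / len(sigma_diag)


def logdet_criterion(sigma_diag: list[list[float]]) -> float:
    # D-optimal: the log-determinant of a diagonal matrix is the sum of log variances.
    return sum(sum(math.log(v) for v in row) for row in sigma_diag) / len(sigma_diag)


# Two samples, three output dimensions, diagonal covariances.
sigma = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]]
tr = trace_criterion(sigma)   # (6 + 6) / 2 = 6.0
ld = logdet_criterion(sigma)  # (log 6 + log 8) / 2
```

This matches the `Sigma.dim() == 2` branches of `trace` and `logdet` above, where the full-matrix case instead goes through `torch.diagonal` and `torch.logdet`.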
+ AdvectionDiffusionMultichannel(output_indices=[4]) + + +def test_output_indices_select_and_order_channels() -> None: + fixed_params = {"nu": (0.001, 0.001), "mu": (1.0, 1.0)} + + sim_full = AdvectionDiffusionMultichannel( + parameters_range=fixed_params, + output_indices=[0, 1, 2, 3], + return_timeseries=False, + n=8, + L=4.0, + T=0.25, + dt=0.25, + log_level="warning", + ) + sim_subset = AdvectionDiffusionMultichannel( + parameters_range=fixed_params, + output_indices=[0, 2], + return_timeseries=False, + n=8, + L=4.0, + T=0.25, + dt=0.25, + log_level="warning", + ) + + full = sim_full.forward_samples_spatiotemporal(n=1, random_seed=7) + subset = sim_subset.forward_samples_spatiotemporal(n=1, random_seed=7) + + assert sim_subset.output_names == ["vorticity", "v"] + assert full["data"].shape == (1, 1, 8, 8, 4) + assert subset["data"].shape == (1, 1, 8, 8, 2) + + expected_subset = full["data"][..., [0, 2]] + assert torch.allclose(subset["data"], expected_subset) diff --git a/tests/simulations/test_all_simulator_output_names.py b/tests/simulations/test_all_simulator_output_names.py new file mode 100644 index 0000000..9b63eab --- /dev/null +++ b/tests/simulations/test_all_simulator_output_names.py @@ -0,0 +1,46 @@ +from autosim.experimental.simulations import ( + CompressibleFluid2D, + ConditionedNavierStokes2D, + GrayScott, + GrossPitaevskiiEquation2D, + Hydrodynamics2D, + LatticeBoltzmann, + ReactionDiffusion, + ShallowWater2D, +) +from autosim.simulations import ( + AdvectionDiffusion, + AdvectionDiffusionMultichannel, + Epidemic, + FlowProblem, + Projectile, + ProjectileMultioutput, + SEIRSimulator, +) + + +def test_all_simulators_have_explicit_output_names() -> None: + simulators = [ + AdvectionDiffusion(), + AdvectionDiffusionMultichannel(), + ReactionDiffusion(), + Epidemic(), + SEIRSimulator(), + FlowProblem(), + Projectile(), + ProjectileMultioutput(), + GrayScott(), + LatticeBoltzmann(), + ConditionedNavierStokes2D(), + Hydrodynamics2D(), + 
CompressibleFluid2D(), + ShallowWater2D(), + GrossPitaevskiiEquation2D(), + ] + + for sim in simulators: + names = sim.output_names + assert isinstance(names, list) + assert names + assert all(isinstance(name, str) and name.strip() for name in names) + assert names != ["solution"] diff --git a/tests/simulations/test_base_simulator.py b/tests/simulations/test_base_simulator.py index 65ca711..957cf28 100644 --- a/tests/simulations/test_base_simulator.py +++ b/tests/simulations/test_base_simulator.py @@ -2,7 +2,7 @@ import torch from torch import Tensor -from autosim.simulations.base import Simulator, TorchSimulator +from autosim.simulations.base import Simulator, SpatioTemporalSimulator, TorchSimulator from autosim.types import TensorLike @@ -404,3 +404,74 @@ def recording_move(self, *args): ) sim.sample_inputs(5) assert calls["count"] == 1 + + +class RetrySpatioTemporalSimulator(SpatioTemporalSimulator): + """Test double for retrying failed spatiotemporal simulations.""" + + def __init__(self, sample_batches: list[TensorLike]): + super().__init__(parameters_range={"param1": (0.0, 1.0)}, output_names=["out"]) + self.sample_batches = [batch.clone() for batch in sample_batches] + self.sample_calls = 0 + + def sample_inputs( # type: ignore[override] + self, n_samples: int, random_seed: int | None = None, method: str = "lhs" + ) -> TensorLike: + del random_seed, method + self.sample_calls += 1 + batch = self.sample_batches.pop(0) + assert batch.shape[0] == n_samples + return batch + + def _forward(self, x: TensorLike) -> TensorLike | None: + if x[0, 0] <= 0.5: + msg = "simulated failure" + raise RuntimeError(msg) + return (x[:, :1] * 2.0).reshape(1, 1) + + def forward_samples_spatiotemporal( + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, + ) -> dict: + y, x = self._forward_batch_with_optional_retries( + n=n, + random_seed=random_seed, + ensure_exact_n=ensure_exact_n, + ) + return { + "data": y.reshape(y.shape[0], 1, 1, 1, 1), + 
"constant_scalars": x, + "constant_fields": None, + } + + +def test_spatiotemporal_sampling_can_return_fewer_without_exact_n(): + sim = RetrySpatioTemporalSimulator( + sample_batches=[ + torch.tensor([[0.1], [0.9], [0.2], [0.8]], dtype=torch.float32), + ] + ) + + out = sim.forward_samples_spatiotemporal(n=4, ensure_exact_n=False) + + assert out["data"].shape[0] == 2 + assert out["constant_scalars"].shape[0] == 2 + assert sim.sample_calls == 1 + + +def test_spatiotemporal_sampling_retries_to_exact_n_when_enabled(): + sim = RetrySpatioTemporalSimulator( + sample_batches=[ + torch.tensor([[0.1], [0.9], [0.2], [0.8]], dtype=torch.float32), + torch.tensor([[0.3], [0.95]], dtype=torch.float32), + torch.tensor([[0.99]], dtype=torch.float32), + ] + ) + + out = sim.forward_samples_spatiotemporal(n=4, ensure_exact_n=True) + + assert out["data"].shape[0] == 4 + assert out["constant_scalars"].shape[0] == 4 + assert sim.sample_calls == 3 diff --git a/tests/simulations/test_shallow_water.py b/tests/simulations/test_shallow_water.py index 9ba2e9e..e9f19bb 100644 --- a/tests/simulations/test_shallow_water.py +++ b/tests/simulations/test_shallow_water.py @@ -1,3 +1,4 @@ +import pytest import torch from autosim.experimental.simulations import ShallowWater2D @@ -28,31 +29,42 @@ def test_full_swe_timeseries_shape_and_finite() -> None: assert (h[-1] - h[0]).abs().max().item() > 1e-4 -def test_full_swe_initial_condition_is_balanced_and_non_trivial() -> None: - """IC should have non-trivial velocity, height anomaly, and 2D structure.""" - out = simulate_swe_2d( - amp=0.12, +def test_full_swe_skip_nt_reduces_timeseries_length() -> None: + sim = ShallowWater2D( return_timeseries=True, - nx=48, - ny=48, - Lx=48.0, - Ly=48.0, - T=0.0, + log_level="warning", + nx=24, + ny=24, + Lx=24.0, + Ly=24.0, + T=5.0, dt_save=1.0, - cfl=0.12, - g=9.81, - H=1.0, - nu=5e-4, - drag=2e-3, + skip_nt=2, + parameters_range={"amp": (0.1, 0.1)}, ) - h0, u0, v0 = out[0, ..., 0], out[0, ..., 1], out[0, ..., 2] + 
out = sim.forward_samples_spatiotemporal(n=1, random_seed=0) + data = out["data"] + + expected_frames = int(sim.T / sim.dt_save) + 1 - sim.skip_nt + assert data.shape == (1, expected_frames, sim.nx, sim.ny, 3) + - # velocity has spatial structure - assert u0.std().item() > 1e-4 - assert v0.std().item() > 1e-4 - # height deviates from rest - assert (h0 - 1.0).abs().max().item() > 1e-5 - # 2D structure: variance differs between rows (not pure zonal stripes) - row_vars = torch.stack([u0[i].var() for i in range(u0.shape[0])]) - assert row_vars.std().item() > 0 +def test_full_swe_skip_nt_too_large_raises() -> None: + with pytest.raises(ValueError, match="skip_nt is too large"): + simulate_swe_2d( + amp=0.12, + return_timeseries=True, + nx=24, + ny=24, + Lx=24.0, + Ly=24.0, + T=1.0, + dt_save=1.0, + cfl=0.12, + g=9.81, + H=1.0, + nu=5e-4, + drag=2e-3, + skip_nt=2, + ) diff --git a/tests/test_cli.py b/tests/test_cli.py new file mode 100644 index 0000000..532cd24 --- /dev/null +++ b/tests/test_cli.py @@ -0,0 +1,329 @@ +from __future__ import annotations + +import subprocess +import sys +from pathlib import Path + +import pytest +import torch +from omegaconf import OmegaConf + +from autosim.cli import ( + build_simulator, + combine_stratified_splits, + generate_dataset_splits, + get_per_strata_counts, + save_dataset_splits, + save_example_videos, +) +from autosim.simulations.base import SpatioTemporalSimulator + + +class DummySimulator(SpatioTemporalSimulator): + def _forward(self, x: torch.Tensor) -> torch.Tensor | None: + msg = "DummySimulator does not implement _forward." 
+ raise NotImplementedError(msg) + + def forward_samples_spatiotemporal( + self, + n: int, + random_seed: int | None = None, + ensure_exact_n: bool = False, + ) -> dict: + del ensure_exact_n + seed_value = -1 if random_seed is None else random_seed + return { + "data": torch.full((n, 1, 2, 2, 1), float(seed_value), dtype=torch.float32), + "constant_scalars": torch.tensor([seed_value]), + "constant_fields": None, + } + + +def test_build_simulator_from_target_core_and_experimental() -> None: + core_cfg = OmegaConf.create( + {"_target_": "autosim.simulations.AdvectionDiffusion", "log_level": "warning"} + ) + experimental_cfg = OmegaConf.create( + { + "_target_": "autosim.experimental.simulations.ShallowWater2D", + "log_level": "warning", + } + ) + + assert build_simulator(core_cfg).__class__.__name__ == "AdvectionDiffusion" + assert build_simulator(experimental_cfg).__class__.__name__ == "ShallowWater2D" + + +def test_generate_dataset_splits_uses_non_overlapping_seed_namespaces() -> None: + splits = generate_dataset_splits( + sim=DummySimulator({}, []), + n_train=3, + n_valid=2, + n_test=1, + base_seed=11, + ) + + assert splits["train"]["data"].shape[0] == 3 + assert splits["valid"]["data"].shape[0] == 2 + assert splits["test"]["data"].shape[0] == 1 + assert splits["train"]["constant_scalars"].item() == 11 + assert splits["valid"]["constant_scalars"].item() == 112 + assert splits["test"]["constant_scalars"].item() == 213 + + +def test_build_simulator_rejects_non_spatiotemporal() -> None: + non_spatiotemporal_cfg = OmegaConf.create( + {"_target_": "autosim.simulations.Projectile", "log_level": "warning"} + ) + + with pytest.raises(TypeError, match="SpatioTemporalSimulator"): + build_simulator(non_spatiotemporal_cfg) + + +def test_save_dataset_splits_writes_expected_structure(tmp_path: Path) -> None: + splits = generate_dataset_splits( + sim=DummySimulator({}, []), + n_train=1, + n_valid=1, + n_test=1, + base_seed=5, + ) + + output_dir = tmp_path / "dataset" + 
save_dataset_splits(splits=splits, output_dir=output_dir) + + for split in ("train", "valid", "test"): + data_path = output_dir / split / "data.pt" + assert data_path.exists() + payload = torch.load(data_path) + assert "data" in payload + + +def test_save_dataset_splits_respects_overwrite_flag(tmp_path: Path) -> None: + splits = generate_dataset_splits( + sim=DummySimulator({}, []), + n_train=1, + n_valid=1, + n_test=1, + ) + output_dir = tmp_path / "dataset" + save_dataset_splits(splits=splits, output_dir=output_dir) + + with pytest.raises(FileExistsError): + save_dataset_splits(splits=splits, output_dir=output_dir, overwrite=False) + + save_dataset_splits(splits=splits, output_dir=output_dir, overwrite=True) + + +def test_cli_generates_dataset_fast_with_advection_diffusion(tmp_path: Path) -> None: + output_dir = tmp_path / "generated" + hydra_run_dir = tmp_path / "hydra_run" + repo_root = Path(__file__).resolve().parents[1] + + command = [ + sys.executable, + "-m", + "autosim.cli", + f"dataset.output_dir={output_dir.as_posix()}", + "dataset.n_train=1", + "dataset.n_valid=1", + "dataset.n_test=1", + "overwrite=true", + "simulator=advection_diffusion", + "simulator.log_level=warning", + "simulator.return_timeseries=true", + "simulator.n=8", + "simulator.L=4.0", + "simulator.T=0.1", + "simulator.dt=0.1", + "visualize.enabled=false", + f"hydra.run.dir={hydra_run_dir.as_posix()}", + "hydra.output_subdir=null", + ] + + subprocess.run(command, check=True, cwd=repo_root) + + for split in ("train", "valid", "test"): + split_path = output_dir / split / "data.pt" + assert split_path.exists() + payload = torch.load(split_path) + assert payload["data"].shape[0] == 1 + + +def test_cli_list_subcommand_outputs_simulator_names() -> None: + repo_root = Path(__file__).resolve().parents[1] + result = subprocess.run( + [sys.executable, "-m", "autosim.cli", "list"], + check=True, + cwd=repo_root, + capture_output=True, + text=True, + ) + + output_lines = [line.strip() for line in 
result.stdout.splitlines() if line.strip()]
+    assert "advection_diffusion" in output_lines
+    assert "shallow_water2d" in output_lines
+
+
+def test_cli_help_outputs_usage() -> None:
+    repo_root = Path(__file__).resolve().parents[1]
+    result = subprocess.run(
+        [sys.executable, "-m", "autosim.cli", "--help"],
+        check=True,
+        cwd=repo_root,
+        capture_output=True,
+        text=True,
+    )
+
+    assert "usage:" in result.stdout.lower()
+    assert "list" in result.stdout
+
+
+def test_get_per_strata_counts_requires_exact_divisibility() -> None:
+    with pytest.raises(ValueError, match="must be divisible"):
+        get_per_strata_counts(n_train=10, n_valid=4, n_test=4, n_strata=3)
+
+    train, valid, test = get_per_strata_counts(
+        n_train=12, n_valid=6, n_test=3, n_strata=3
+    )
+    assert (train, valid, test) == (4, 2, 1)
+
+
+def test_combine_stratified_splits_preserves_strata_order() -> None:
+    group_a = {
+        "train": {
+            "data": torch.full((2, 1, 1, 1, 1), 1.0),
+            "constant_scalars": torch.full((2, 1), 1.0),
+            "constant_fields": None,
+        },
+        "valid": {
+            "data": torch.full((1, 1, 1, 1, 1), 1.0),
+            "constant_scalars": torch.full((1, 1), 1.0),
+            "constant_fields": None,
+        },
+        "test": {
+            "data": torch.full((1, 1, 1, 1, 1), 1.0),
+            "constant_scalars": torch.full((1, 1), 1.0),
+            "constant_fields": None,
+        },
+    }
+    group_b = {
+        "train": {
+            "data": torch.full((2, 1, 1, 1, 1), 2.0),
+            "constant_scalars": torch.full((2, 1), 2.0),
+            "constant_fields": None,
+        },
+        "valid": {
+            "data": torch.full((1, 1, 1, 1, 1), 2.0),
+            "constant_scalars": torch.full((1, 1), 2.0),
+            "constant_fields": None,
+        },
+        "test": {
+            "data": torch.full((1, 1, 1, 1, 1), 2.0),
+            "constant_scalars": torch.full((1, 1), 2.0),
+            "constant_fields": None,
+        },
+    }
+
+    combined = combine_stratified_splits([group_a, group_b])
+
+    assert combined["train"]["data"].shape[0] == 4
+    assert torch.all(combined["train"]["data"][:2] == 1.0)
+    assert torch.all(combined["train"]["data"][2:] == 2.0)
+
+
+@pytest.fixture
+def dummy_splits() -> dict:
+    return {
+        "train": {
+            "data": torch.zeros((3, 2, 4, 4, 2), dtype=torch.float32),
+            "constant_scalars": torch.zeros((3, 1), dtype=torch.float32),
+            "constant_fields": None,
+        },
+        "valid": {
+            "data": torch.zeros((1, 2, 4, 4, 2), dtype=torch.float32),
+            "constant_scalars": torch.zeros((1, 1), dtype=torch.float32),
+            "constant_fields": None,
+        },
+        "test": {
+            "data": torch.zeros((1, 2, 4, 4, 2), dtype=torch.float32),
+            "constant_scalars": torch.zeros((1, 1), dtype=torch.float32),
+            "constant_fields": None,
+        },
+    }
+
+
+def test_save_example_videos_disabled_is_noop(tmp_path: Path, dummy_splits) -> None:
+    cfg = OmegaConf.create({"enabled": False})
+    save_example_videos(splits=dummy_splits, output_dir=tmp_path, visualize_cfg=cfg)
+    assert not (tmp_path / "examples").exists()
+
+
+def test_save_example_videos_empty_batch_indices_is_noop(
+    tmp_path: Path, dummy_splits, monkeypatch
+) -> None:
+    import autosim.cli as cli_module  # noqa: PLC0415
+
+    calls: list = []
+    monkeypatch.setattr(
+        cli_module, "plot_spatiotemporal_video", lambda **kw: calls.append(kw)
+    )
+
+    cfg = OmegaConf.create({"enabled": True, "split": "train", "batch_indices": []})
+    save_example_videos(splits=dummy_splits, output_dir=tmp_path, visualize_cfg=cfg)
+    assert calls == []
+
+
+def test_save_example_videos_out_of_range_raises(tmp_path: Path, dummy_splits) -> None:
+    cfg = OmegaConf.create(
+        {
+            "enabled": True,
+            "split": "train",
+            "batch_indices": [99],
+            "fps": 5,
+            "file_ext": "gif",
+            "overwrite": True,
+        }
+    )
+    with pytest.raises(ValueError, match="out of range"):
+        save_example_videos(splits=dummy_splits, output_dir=tmp_path, visualize_cfg=cfg)
+
+
+def test_save_example_videos_uses_batch_indices_and_split(
+    tmp_path: Path, dummy_splits, monkeypatch
+) -> None:
+    import autosim.cli as cli_module  # noqa: PLC0415
+
+    calls: list[dict] = []
+
+    def _fake(**kwargs):
+        save_path = Path(str(kwargs["save_path"]))
+        save_path.parent.mkdir(parents=True, exist_ok=True)
+        save_path.write_text("stub")
+        calls.append(kwargs)
+
+    monkeypatch.setattr(cli_module, "plot_spatiotemporal_video", _fake)
+
+    cfg = OmegaConf.create(
+        {
+            "enabled": True,
+            "split": "train",
+            "batch_indices": [0, 2],
+            "fps": 7,
+            "file_ext": "gif",
+            "overwrite": True,
+        }
+    )
+
+    save_example_videos(
+        splits=dummy_splits,
+        output_dir=tmp_path,
+        visualize_cfg=cfg,
+        channel_names=["h", "u"],
+    )
+
+    assert len(calls) == 2
+    assert all(call["channel_names"] == ["h", "u"] for call in calls)
+    saved_paths = sorted(Path(str(call["save_path"])) for call in calls)
+    assert saved_paths[0] == tmp_path / "examples" / "train" / "batch_0.gif"
+    assert saved_paths[1] == tmp_path / "examples" / "train" / "batch_2.gif"
diff --git a/tests/test_utils.py b/tests/test_utils.py
new file mode 100644
index 0000000..61326b7
--- /dev/null
+++ b/tests/test_utils.py
@@ -0,0 +1,16 @@
+import torch
+
+from autosim.utils import plot_spatiotemporal_video
+
+
+def test_plot_video_accepts_short_channel_names_and_preserve_aspect() -> None:
+    true = torch.rand(1, 3, 8, 16, 3)
+
+    anim = plot_spatiotemporal_video(
+        true=true,
+        batch_idx=0,
+        channel_names=["h"],
+        preserve_aspect=True,
+    )
+
+    assert anim is not None
diff --git a/uv.lock b/uv.lock
index b755a14..670dff0 100644
--- a/uv.lock
+++ b/uv.lock
@@ -2,11 +2,20 @@ version = 1
 revision = 3
 requires-python = ">=3.10, <3.13"
 resolution-markers = [
-    "python_full_version >= '3.12'",
-    "python_full_version == '3.11.*'",
-    "python_full_version < '3.11'",
+    "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')",
+    "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')",
+    "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version < '3.11' and
sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')",
+    "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 
+[[package]]
+name = "antlr4-python3-runtime"
+version = "4.9.3"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/3e/38/7859ff46355f76f8d19459005ca000b6e7012f2f1ca597746cbcd1fbfe5e/antlr4-python3-runtime-4.9.3.tar.gz", hash = "sha256:f224469b4168294902bb1efa80a8bf7855f24c99aef99cbefc1bcd3cce77881b", size = 117034, upload-time = "2021-11-06T17:52:23.524Z" }
+
 [[package]]
 name = "appnope"
 version = "0.1.4"
@@ -31,6 +40,7 @@ version = "0.0.1"
 source = { editable = "." }
 dependencies = [
     { name = "einops" },
+    { name = "hydra-core" },
     { name = "matplotlib" },
     { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
     { name = "numpy", version = "2.4.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
@@ -38,7 +48,8 @@ dependencies = [
     { name = "scikit-learn", version = "1.8.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
     { name = "scipy", version = "1.15.3", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
     { name = "scipy", version = "1.17.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
-    { name = "torch" },
+    { name = "torch", version = "2.10.0", source = { registry = "https://pypi.org/simple" }, marker = "sys_platform != 'linux' and sys_platform != 'win32'" },
+    { name = "torch", version = "2.10.0+cu126", source = { registry = "https://download.pytorch.org/whl/cu126" }, marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
     { name = "tqdm" },
 ]
 
@@ -55,6 +66,7 @@ dev = [
 [package.metadata]
 requires-dist = [
     { name = "einops", specifier = ">=0.8.2" },
+    { name = "hydra-core", specifier = ">=1.3" },
     { name = "ipykernel", marker = "extra == 'dev'", specifier = ">=7.1.0" },
     { name = "matplotlib" },
     { name = "numpy", specifier = ">=1.24" },
@@ -65,7 +77,8 @@ requires-dist = [
     { name = "ruff", marker = "extra == 'dev'", specifier = "==0.12.11" },
     { name = "scikit-learn", specifier = ">=1.7.2" },
     { name = "scipy", specifier = ">=1.10" },
-    { name = "torch", specifier = ">=2.0" },
+    { name = "torch", marker = "sys_platform != 'linux' and sys_platform != 'win32'", specifier = ">=2.0" },
+    { name = "torch", marker = "sys_platform == 'linux' or sys_platform == 'win32'", specifier = ">=2.0", index = "https://download.pytorch.org/whl/cu126" },
     { name = "tqdm", specifier = ">=4.65" },
 ]
 provides-extras = ["dev"]
 
@@ -150,7 +163,8 @@ name = "contourpy"
 version = "1.3.2"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version < '3.11'",
+    "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')",
+    "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 dependencies = [
     { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
@@ -200,8 +214,10 @@ name = "contourpy"
 version = "1.3.3"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version >= '3.12'",
-    "python_full_version == '3.11.*'",
+    "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')",
+    "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')",
+    "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 dependencies = [
     { name = "numpy", version = "2.4.2", source = {
registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
@@ -300,11 +316,14 @@ name = "cuda-bindings"
 version = "12.9.4"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
-    { name = "cuda-pathfinder" },
+    { name = "cuda-pathfinder", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
 ]
 wheels = [
+    { url = "https://files.pythonhosted.org/packages/37/31/bfcc870f69c6a017c4ad5c42316207fc7551940db6f3639aa4466ec5faf3/cuda_bindings-12.9.4-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a022c96b8bd847e8dc0675523431149a4c3e872f440e3002213dbb9e08f0331a", size = 11800959, upload-time = "2025-10-21T14:51:26.458Z" },
     { url = "https://files.pythonhosted.org/packages/7a/d8/b546104b8da3f562c1ff8ab36d130c8fe1dd6a045ced80b4f6ad74f7d4e1/cuda_bindings-12.9.4-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4d3c842c2a4303b2a580fe955018e31aea30278be19795ae05226235268032e5", size = 12148218, upload-time = "2025-10-21T14:51:28.855Z" },
+    { url = "https://files.pythonhosted.org/packages/a9/2b/ebcbb60aa6dba830474cd360c42e10282f7a343c0a1f58d24fbd3b7c2d77/cuda_bindings-12.9.4-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a6a429dc6c13148ff1e27c44f40a3dd23203823e637b87fd0854205195988306", size = 11840604, upload-time = "2025-10-21T14:51:34.565Z" },
     { url = "https://files.pythonhosted.org/packages/45/e7/b47792cc2d01c7e1d37c32402182524774dadd2d26339bd224e0e913832e/cuda_bindings-12.9.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c912a3d9e6b6651853eed8eed96d6800d69c08e94052c292fec3f282c5a817c9", size = 12210593, upload-time = "2025-10-21T14:51:36.574Z" },
+    { url = "https://files.pythonhosted.org/packages/0c/c2/65bfd79292b8ff18be4dd7f7442cea37bcbc1a228c1886f1dea515c45b67/cuda_bindings-12.9.4-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:694ba35023846625ef471257e6b5a4bc8af690f961d197d77d34b1d1db393f56", size = 11760260, upload-time = "2025-10-21T14:51:40.79Z" },
     { url = "https://files.pythonhosted.org/packages/a9/c1/dabe88f52c3e3760d861401bb994df08f672ec893b8f7592dc91626adcf3/cuda_bindings-12.9.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fda147a344e8eaeca0c6ff113d2851ffca8f7dfc0a6c932374ee5c47caa649c8", size = 12151019, upload-time = "2025-10-21T14:51:43.167Z" },
 ]
 
@@ -445,6 +464,20 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/e6/ab/fb21f4c939bb440104cc2b396d3be1d9b7a9fd3c6c2a53d98c45b3d7c954/fsspec-2026.2.0-py3-none-any.whl", hash = "sha256:98de475b5cb3bd66bedd5c4679e87b4fdfe1a3bf4d707b151b3c07e58c9a2437", size = 202505, upload-time = "2026-02-05T21:50:51.819Z" },
 ]
 
+[[package]]
+name = "hydra-core"
+version = "1.3.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "antlr4-python3-runtime" },
+    { name = "omegaconf" },
+    { name = "packaging" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/6d/8e/07e42bc434a847154083b315779b0a81d567154504624e181caf2c71cd98/hydra-core-1.3.2.tar.gz", hash = "sha256:8a878ed67216997c3e9d88a8e72e7b4767e81af37afb4ea3334b269a4390a824", size = 3263494, upload-time = "2023-02-23T18:33:43.03Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/c6/50/e0edd38dcd63fb26a8547f13d28f7a008bc4a3fd4eb4ff030673f22ad41a/hydra_core-1.3.2-py3-none-any.whl", hash = "sha256:fa0238a9e31df3373b35b0bfb672c34cc92718d21f81311d8996a16de1141d8b", size = 154547, upload-time = "2023-02-23T18:33:40.801Z" },
+]
+
 [[package]]
 name = "identify"
 version = "2.6.16"
@@ -493,7 +526,8 @@ name = "ipython"
 version = "8.38.0"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version < '3.11'",
+    "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')",
+    "python_full_version < '3.11' and
sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 dependencies = [
     { name = "colorama", marker = "python_full_version < '3.11' and sys_platform == 'win32'" },
@@ -518,8 +552,10 @@ name = "ipython"
 version = "9.10.0"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version >= '3.12'",
-    "python_full_version == '3.11.*'",
+    "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')",
+    "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')",
+    "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 dependencies = [
     { name = "colorama", marker = "python_full_version >= '3.11' and sys_platform == 'win32'" },
@@ -793,7 +829,8 @@ name = "networkx"
 version = "3.4.2"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version < '3.11'",
+    "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')",
+    "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 sdist = { url = "https://files.pythonhosted.org/packages/fd/1d/06475e1cd5264c0b870ea2cc6fdb3e37177c1e565c43f56ff17a10e3937f/networkx-3.4.2.tar.gz", hash = "sha256:307c3669428c5362aab27c8a1260aa8f47c4e91d3891f48be0141738d8d053e1", size = 2151368, upload-time = "2024-10-21T12:39:38.695Z" }
 wheels = [
@@ -805,8 +842,10 @@ name = "networkx"
 version = "3.6.1"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version >= '3.12'",
-    "python_full_version == '3.11.*'",
+    "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')",
+    "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')",
+    "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 sdist = { url = "https://files.pythonhosted.org/packages/6a/51/63fe664f3908c97be9d2e4f1158eb633317598cfa6e1fc14af5383f17512/networkx-3.6.1.tar.gz", hash = "sha256:26b7c357accc0c8cde558ad486283728b65b6a95d85ee1cd66bafab4c8168509", size = 2517025, upload-time = "2025-12-08T17:02:39.908Z" }
 wheels = [
@@ -827,7 +866,8 @@ name = "numpy"
 version = "2.2.6"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version < '3.11'",
+    "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')",
+    "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 sdist = { url = "https://files.pythonhosted.org/packages/76/21/7d2a95e4bba9dc13d043ee156a356c0a8f0c6309dff6b21b4d71a073b8a8/numpy-2.2.6.tar.gz", hash = "sha256:e29554e2bef54a90aa5cc07da6ce955accb83f21ab5de01a62c8478897b264fd", size = 20276440, upload-time = "2025-05-17T22:38:04.611Z" }
 wheels = [
@@ -872,8 +912,10 @@ name = "numpy"
 version = "2.4.2"
 source = { registry = "https://pypi.org/simple" }
 resolution-markers = [
-    "python_full_version >= '3.12'",
-    "python_full_version == '3.11.*'",
+    "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')",
+    "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'",
+    "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')",
+    "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'",
 ]
 sdist = { url =
"https://files.pythonhosted.org/packages/57/fd/0005efbd0af48e55eb3c7208af93f2862d4b1a56cd78e84309a2d959208d/numpy-2.4.2.tar.gz", hash = "sha256:659a6107e31a83c4e33f763942275fd278b21d095094044eb35569e86a21ddae", size = 20723651, upload-time = "2026-01-31T23:13:10.135Z" }
 wheels = [
@@ -910,34 +952,42 @@ wheels = [
 [[package]]
 name = "nvidia-cublas-cu12"
-version = "12.8.4.1"
+version = "12.6.4.1"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/dc/61/e24b560ab2e2eaeb3c839129175fb330dfcfc29e5203196e5541a4c44682/nvidia_cublas_cu12-12.8.4.1-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:8ac4e771d5a348c551b2a426eda6193c19aa630236b418086020df5ba9667142", size = 594346921, upload-time = "2025-03-07T01:44:31.254Z" },
+    { url = "https://files.pythonhosted.org/packages/af/eb/ff4b8c503fa1f1796679dce648854d58751982426e4e4b37d6fce49d259c/nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:08ed2686e9875d01b58e3cb379c6896df8e76c75e0d4a7f7dace3d7b6d9ef8eb", size = 393138322, upload-time = "2024-11-20T17:40:25.65Z" },
+    { url = "https://files.pythonhosted.org/packages/97/0d/f1f0cadbf69d5b9ef2e4f744c9466cb0a850741d08350736dfdb4aa89569/nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux_2_27_aarch64.whl", hash = "sha256:235f728d6e2a409eddf1df58d5b0921cf80cfa9e72b9f2775ccb7b4a87984668", size = 390794615, upload-time = "2024-11-20T17:39:52.715Z" },
 ]
 
 [[package]]
 name = "nvidia-cuda-cupti-cu12"
-version = "12.8.90"
+version = "12.6.80"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/f8/02/2adcaa145158bf1a8295d83591d22e4103dbfd821bcaf6f3f53151ca4ffa/nvidia_cuda_cupti_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea0cb07ebda26bb9b29ba82cda34849e73c166c18162d3913575b0c9db9a6182", size = 10248621, upload-time = "2025-03-07T01:40:21.213Z" },
+    { url = "https://files.pythonhosted.org/packages/e6/8b/2f6230cb715646c3a9425636e513227ce5c93c4d65823a734f4bb86d43c3/nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:166ee35a3ff1587f2490364f90eeeb8da06cd867bd5b701bf7f9a02b78bc63fc", size = 8236764, upload-time = "2024-11-20T17:35:41.03Z" },
+    { url = "https://files.pythonhosted.org/packages/25/0f/acb326ac8fd26e13c799e0b4f3b2751543e1834f04d62e729485872198d4/nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_aarch64.whl", hash = "sha256:358b4a1d35370353d52e12f0a7d1769fc01ff74a191689d3870b2123156184c4", size = 8236756, upload-time = "2024-10-01T16:57:45.507Z" },
+    { url = "https://files.pythonhosted.org/packages/49/60/7b6497946d74bcf1de852a21824d63baad12cd417db4195fc1bfe59db953/nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6768bad6cab4f19e8292125e5f1ac8aa7d1718704012a0e3272a6f61c4bce132", size = 8917980, upload-time = "2024-11-20T17:36:04.019Z" },
+    { url = "https://files.pythonhosted.org/packages/a5/24/120ee57b218d9952c379d1e026c4479c9ece9997a4fb46303611ee48f038/nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a3eff6cdfcc6a4c35db968a06fcadb061cbc7d6dde548609a941ff8701b98b73", size = 8917972, upload-time = "2024-10-01T16:58:06.036Z" },
 ]
 
 [[package]]
 name = "nvidia-cuda-nvrtc-cu12"
-version = "12.8.93"
+version = "12.6.77"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/05/6b/32f747947df2da6994e999492ab306a903659555dddc0fbdeb9d71f75e52/nvidia_cuda_nvrtc_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:a7756528852ef889772a84c6cd89d41dfa74667e24cca16bb31f8f061e3e9994", size = 88040029, upload-time = "2025-03-07T01:42:13.562Z" },
+    { url =
"https://files.pythonhosted.org/packages/f4/2f/72df534873235983cc0a5371c3661bebef7c4682760c275590b972c7b0f9/nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_aarch64.whl", hash = "sha256:5847f1d6e5b757f1d2b3991a01082a44aad6f10ab3c5c0213fa3e25bddc25a13", size = 23162955, upload-time = "2024-10-01T16:59:50.922Z" },
+    { url = "https://files.pythonhosted.org/packages/75/2e/46030320b5a80661e88039f59060d1790298b4718944a65a7f2aeda3d9e9/nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl", hash = "sha256:35b0cc6ee3a9636d5409133e79273ce1f3fd087abb0532d2d2e8fff1fe9efc53", size = 23650380, upload-time = "2024-10-01T17:00:14.643Z" },
 ]
 
 [[package]]
 name = "nvidia-cuda-runtime-cu12"
-version = "12.8.90"
+version = "12.6.77"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/0d/9b/a997b638fcd068ad6e4d53b8551a7d30fe8b404d6f1804abf1df69838932/nvidia_cuda_runtime_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:adade8dcbd0edf427b7204d480d6066d33902cab2a4707dcfc48a2d0fd44ab90", size = 954765, upload-time = "2025-03-07T01:40:01.615Z" },
+    { url = "https://files.pythonhosted.org/packages/8f/ea/590b2ac00d772a8abd1c387a92b46486d2679ca6622fd25c18ff76265663/nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:6116fad3e049e04791c0256a9778c16237837c08b27ed8c8401e2e45de8d60cd", size = 908052, upload-time = "2024-11-20T17:35:19.905Z" },
+    { url = "https://files.pythonhosted.org/packages/b7/3d/159023799677126e20c8fd580cca09eeb28d5c5a624adc7f793b9aa8bbfa/nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_aarch64.whl", hash = "sha256:d461264ecb429c84c8879a7153499ddc7b19b5f8d84c204307491989a365588e", size = 908040, upload-time = "2024-10-01T16:57:22.221Z" },
+    { url = "https://files.pythonhosted.org/packages/e1/23/e717c5ac26d26cf39a27fbc076240fad2e3b817e5889d671b67f4f9f49c5/nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ba3b56a4f896141e25e19ab287cd71e52a6a0f4b29d0d31609f60e3b4d5219b7", size = 897690, upload-time = "2024-11-20T17:35:30.697Z" },
+    { url = "https://files.pythonhosted.org/packages/f0/62/65c05e161eeddbafeca24dc461f47de550d9fa8a7e04eb213e32b55cfd99/nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a84d15d5e1da416dd4774cb42edf5e954a3e60cc945698dc1d5be02321c44dc8", size = 897678, upload-time = "2024-10-01T16:57:33.821Z" },
 ]
 
@@ -945,61 +995,75 @@ name = "nvidia-cudnn-cu12"
 version = "9.10.2.21"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
-    { name = "nvidia-cublas-cu12" },
+    { name = "nvidia-cublas-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
 ]
 wheels = [
+    { url = "https://files.pythonhosted.org/packages/fa/41/e79269ce215c857c935fd86bcfe91a451a584dfc27f1e068f568b9ad1ab7/nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_aarch64.whl", hash = "sha256:c9132cc3f8958447b4910a1720036d9eff5928cc3179b0a51fb6d167c6cc87d8", size = 705026878, upload-time = "2025-06-06T21:52:51.348Z" },
     { url = "https://files.pythonhosted.org/packages/ba/51/e123d997aa098c61d029f76663dedbfb9bc8dcf8c60cbd6adbe42f76d049/nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:949452be657fa16687d0930933f032835951ef0892b37d2d53824d1a84dc97a8", size = 706758467, upload-time = "2025-06-06T21:54:08.597Z" },
 ]
 
 [[package]]
 name = "nvidia-cufft-cu12"
-version = "11.3.3.83"
+version = "11.3.0.4"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
-    { name = "nvidia-nvjitlink-cu12" },
+    { name = "nvidia-nvjitlink-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
 ]
 wheels = [
-    { url =
"https://files.pythonhosted.org/packages/1f/13/ee4e00f30e676b66ae65b4f08cb5bcbb8392c03f54f2d5413ea99a5d1c80/nvidia_cufft_cu12-11.3.3.83-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d2dd21ec0b88cf61b62e6b43564355e5222e4a3fb394cac0db101f2dd0d4f74", size = 193118695, upload-time = "2025-03-07T01:45:27.821Z" },
+    { url = "https://files.pythonhosted.org/packages/1f/37/c50d2b2f2c07e146776389e3080f4faf70bcc4fa6e19d65bb54ca174ebc3/nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d16079550df460376455cba121db6564089176d9bac9e4f360493ca4741b22a6", size = 200164144, upload-time = "2024-11-20T17:40:58.288Z" },
+    { url = "https://files.pythonhosted.org/packages/ce/f5/188566814b7339e893f8d210d3a5332352b1409815908dad6a363dcceac1/nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8510990de9f96c803a051822618d42bf6cb8f069ff3f48d93a8486efdacb48fb", size = 200164135, upload-time = "2024-10-01T17:03:24.212Z" },
+    { url = "https://files.pythonhosted.org/packages/8f/16/73727675941ab8e6ffd86ca3a4b7b47065edcca7a997920b831f8147c99d/nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ccba62eb9cef5559abd5e0d54ceed2d9934030f51163df018532142a8ec533e5", size = 200221632, upload-time = "2024-11-20T17:41:32.357Z" },
+    { url = "https://files.pythonhosted.org/packages/60/de/99ec247a07ea40c969d904fc14f3a356b3e2a704121675b75c366b694ee1/nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.whl", hash = "sha256:768160ac89f6f7b459bee747e8d175dbf53619cfe74b2a5636264163138013ca", size = 200221622, upload-time = "2024-10-01T17:03:58.79Z" },
 ]
 
 [[package]]
 name = "nvidia-cufile-cu12"
-version = "1.13.1.3"
+version = "1.11.1.6"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/bb/fe/1bcba1dfbfb8d01be8d93f07bfc502c93fa23afa6fd5ab3fc7c1df71038a/nvidia_cufile_cu12-1.13.1.3-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1d069003be650e131b21c932ec3d8969c1715379251f8d23a1860554b1cb24fc", size = 1197834, upload-time = "2025-03-07T01:45:50.723Z" },
+    { url = "https://files.pythonhosted.org/packages/b2/66/cc9876340ac68ae71b15c743ddb13f8b30d5244af344ec8322b449e35426/nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc23469d1c7e52ce6c1d55253273d32c565dd22068647f3aa59b3c6b005bf159", size = 1142103, upload-time = "2024-11-20T17:42:11.83Z" },
+    { url = "https://files.pythonhosted.org/packages/17/bf/cc834147263b929229ce4aadd62869f0b195e98569d4c28b23edc72b85d9/nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux_2_27_aarch64.whl", hash = "sha256:8f57a0051dcf2543f6dc2b98a98cb2719c37d3cee1baba8965d57f3bbc90d4db", size = 1066155, upload-time = "2024-11-20T17:41:49.376Z" },
 ]
 
 [[package]]
 name = "nvidia-curand-cu12"
-version = "10.3.9.90"
+version = "10.3.7.77"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/fb/aa/6584b56dc84ebe9cf93226a5cde4d99080c8e90ab40f0c27bda7a0f29aa1/nvidia_curand_cu12-10.3.9.90-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:b32331d4f4df5d6eefa0554c565b626c7216f87a06a4f56fab27c3b68a830ec9", size = 63619976, upload-time = "2025-03-07T01:46:23.323Z" },
+    { url = "https://files.pythonhosted.org/packages/42/ac/36543605358a355632f1a6faa3e2d5dfb91eab1e4bc7d552040e0383c335/nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_aarch64.whl", hash = "sha256:6e82df077060ea28e37f48a3ec442a8f47690c7499bff392a5938614b56c98d8", size = 56289881, upload-time = "2024-10-01T17:04:18.981Z" },
+    { url =
"https://files.pythonhosted.org/packages/73/1b/44a01c4e70933637c93e6e1a8063d1e998b50213a6b65ac5a9169c47e98e/nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a42cd1344297f70b9e39a1e4f467a4e1c10f1da54ff7a85c12197f6c652c8bdf", size = 56279010, upload-time = "2024-11-20T17:42:50.958Z" },
+    { url = "https://files.pythonhosted.org/packages/4a/aa/2c7ff0b5ee02eaef890c0ce7d4f74bc30901871c5e45dee1ae6d0083cd80/nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.whl", hash = "sha256:99f1a32f1ac2bd134897fc7a203f779303261268a65762a623bf30cc9fe79117", size = 56279000, upload-time = "2024-10-01T17:04:45.274Z" },
+    { url = "https://files.pythonhosted.org/packages/a6/02/5362a9396f23f7de1dd8a64369e87c85ffff8216fc8194ace0fa45ba27a5/nvidia_curand_cu12-10.3.7.77-py3-none-manylinux_2_27_aarch64.whl", hash = "sha256:7b2ed8e95595c3591d984ea3603dd66fe6ce6812b886d59049988a712ed06b6e", size = 56289882, upload-time = "2024-11-20T17:42:25.222Z" },
 ]
 
 [[package]]
 name = "nvidia-cusolver-cu12"
-version = "11.7.3.90"
+version = "11.7.1.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
-    { name = "nvidia-cublas-cu12" },
-    { name = "nvidia-cusparse-cu12" },
-    { name = "nvidia-nvjitlink-cu12" },
+    { name = "nvidia-cublas-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
+    { name = "nvidia-cusparse-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
+    { name = "nvidia-nvjitlink-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
 ]
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/85/48/9a13d2975803e8cf2777d5ed57b87a0b6ca2cc795f9a4f59796a910bfb80/nvidia_cusolver_cu12-11.7.3.90-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:4376c11ad263152bd50ea295c05370360776f8c3427b30991df774f9fb26c450", size = 267506905, upload-time = "2025-03-07T01:47:16.273Z" },
+    { url = "https://files.pythonhosted.org/packages/93/17/dbe1aa865e4fdc7b6d4d0dd308fdd5aaab60f939abfc0ea1954eac4fb113/nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0ce237ef60acde1efc457335a2ddadfd7610b892d94efee7b776c64bb1cac9e0", size = 157833628, upload-time = "2024-10-01T17:05:05.591Z" },
+    { url = "https://files.pythonhosted.org/packages/f0/6e/c2cf12c9ff8b872e92b4a5740701e51ff17689c4d726fca91875b07f655d/nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e9e49843a7707e42022babb9bcfa33c29857a93b88020c4e4434656a655b698c", size = 158229790, upload-time = "2024-11-20T17:43:43.211Z" },
+    { url = "https://files.pythonhosted.org/packages/9f/81/baba53585da791d043c10084cf9553e074548408e04ae884cfe9193bd484/nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6cf28f17f64107a0c4d7802be5ff5537b2130bfc112f25d5a30df227058ca0e6", size = 158229780, upload-time = "2024-10-01T17:05:39.875Z" },
+    { url = "https://files.pythonhosted.org/packages/7c/5f/07d0ba3b7f19be5a5ec32a8679fc9384cfd9fc6c869825e93be9f28d6690/nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux_2_27_aarch64.whl", hash = "sha256:dbbe4fc38ec1289c7e5230e16248365e375c3673c9c8bac5796e2e20db07f56e", size = 157833630, upload-time = "2024-11-20T17:43:16.77Z" },
 ]
 
 [[package]]
 name = "nvidia-cusparse-cu12"
-version = "12.5.8.93"
+version = "12.5.4.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
-    { name = "nvidia-nvjitlink-cu12" },
+    { name = "nvidia-nvjitlink-cu12", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
 ]
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/c2/f5/e1854cb2f2bcd4280c44736c93550cc300ff4b8c95ebe370d0aa7d2b473d/nvidia_cusparse_cu12-12.5.8.93-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1ec05d76bbbd8b61b06a80e1eaf8cf4959c3d4ce8e711b65ebd0443bb0ebb13b", size = 288216466, upload-time = "2025-03-07T01:48:13.779Z" },
+    { url =
"https://files.pythonhosted.org/packages/eb/eb/6681efd0aa7df96b4f8067b3ce7246833dd36830bb4cec8896182773db7d/nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d25b62fb18751758fe3c93a4a08eff08effedfe4edf1c6bb5afd0890fe88f887", size = 216451147, upload-time = "2024-11-20T17:44:18.055Z" },
+    { url = "https://files.pythonhosted.org/packages/d3/56/3af21e43014eb40134dea004e8d0f1ef19d9596a39e4d497d5a7de01669f/nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7aa32fa5470cf754f72d1116c7cbc300b4e638d3ae5304cfa4a638a5b87161b1", size = 216451135, upload-time = "2024-10-01T17:06:03.826Z" },
+    { url = "https://files.pythonhosted.org/packages/06/1e/b8b7c2f4099a37b96af5c9bb158632ea9e5d9d27d7391d7eb8fc45236674/nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7556d9eca156e18184b94947ade0fba5bb47d69cec46bf8660fd2c71a4b48b73", size = 216561367, upload-time = "2024-11-20T17:44:54.824Z" },
+    { url = "https://files.pythonhosted.org/packages/43/ac/64c4316ba163e8217a99680c7605f779accffc6a4bcd0c778c12948d3707/nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.whl", hash = "sha256:23749a6571191a215cb74d1cdbff4a86e7b19f1200c071b3fcf844a5bea23a2f", size = 216561357, upload-time = "2024-10-01T17:06:29.861Z" },
 ]
 
@@ -1007,6 +1071,7 @@ name = "nvidia-cusparselt-cu12"
 version = "0.7.1"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
+    { url = "https://files.pythonhosted.org/packages/73/b9/598f6ff36faaece4b3c50d26f50e38661499ff34346f00e057760b35cc9d/nvidia_cusparselt_cu12-0.7.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8878dce784d0fac90131b6817b607e803c36e629ba34dc5b433471382196b6a5", size = 283835557, upload-time = "2025-02-26T00:16:54.265Z" },
     { url = "https://files.pythonhosted.org/packages/56/79/12978b96bd44274fe38b5dde5cfb660b1d114f70a65ef962bcbbed99b549/nvidia_cusparselt_cu12-0.7.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:f1bb701d6b930d5a7cea44c19ceb973311500847f81b634d802b7b539dc55623", size = 287193691, upload-time = "2025-02-26T00:15:44.104Z" },
 ]
 
@@ -1015,15 +1080,17 @@ name = "nvidia-nccl-cu12"
 version = "2.27.5"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
+    { url = "https://files.pythonhosted.org/packages/bb/1c/857979db0ef194ca5e21478a0612bcdbbe59458d7694361882279947b349/nvidia_nccl_cu12-2.27.5-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:31432ad4d1fb1004eb0c56203dc9bc2178a1ba69d1d9e02d64a6938ab5e40e7a", size = 322400625, upload-time = "2025-06-26T04:11:04.496Z" },
     { url = "https://files.pythonhosted.org/packages/6e/89/f7a07dc961b60645dbbf42e80f2bc85ade7feb9a491b11a1e973aa00071f/nvidia_nccl_cu12-2.27.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ad730cf15cb5d25fe849c6e6ca9eb5b76db16a80f13f425ac68d8e2e55624457", size = 322348229, upload-time = "2025-06-26T04:11:28.385Z" },
 ]
 
 [[package]]
 name = "nvidia-nvjitlink-cu12"
-version = "12.8.93"
+version = "12.6.85"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/f6/74/86a07f1d0f42998ca31312f998bd3b9a7eff7f52378f4f270c8679c77fb9/nvidia_nvjitlink_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:81ff63371a7ebd6e6451970684f916be2eab07321b73c9d244dc2b4da7f73b88", size = 39254836, upload-time = "2025-03-07T01:49:55.661Z" },
+    { url = "https://files.pythonhosted.org/packages/9d/d7/c5383e47c7e9bf1c99d5bd2a8c935af2b6d705ad831a7ec5c97db4d82f4f/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:eedc36df9e88b682efe4309aa16b5b4e78c2407eac59e8c10a6a47535164369a", size = 19744971, upload-time = "2024-11-20T17:46:53.366Z" },
+    { url = "https://files.pythonhosted.org/packages/31/db/dc71113d441f208cdfe7ae10d4983884e13f464a6252450693365e166dcf/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cf4eaa7d4b6b543ffd69d6abfb11efdeb2db48270d94dfd3a452c24150829e41", size = 19270338, upload-time = "2024-11-20T17:46:29.758Z" },
 ]
 
@@ -1031,15 +1098,32 @@ name = "nvidia-nvshmem-cu12"
 version = "3.4.5"
 source = { registry = "https://pypi.org/simple" }
 wheels = [
+    { url = "https://files.pythonhosted.org/packages/1d/6a/03aa43cc9bd3ad91553a88b5f6fb25ed6a3752ae86ce2180221962bc2aa5/nvidia_nvshmem_cu12-3.4.5-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0b48363fc6964dede448029434c6abed6c5e37f823cb43c3bcde7ecfc0457e15", size = 138936938, upload-time = "2025-09-06T00:32:05.589Z" },
     { url = "https://files.pythonhosted.org/packages/b5/09/6ea3ea725f82e1e76684f0708bbedd871fc96da89945adeba65c3835a64c/nvidia_nvshmem_cu12-3.4.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:042f2500f24c021db8a06c5eec2539027d57460e1c1a762055a6554f72c369bd", size = 139103095, upload-time = "2025-09-06T00:32:31.266Z" },
 ]
 
 [[package]]
 name = "nvidia-nvtx-cu12"
-version = "12.8.90"
+version = "12.6.77"
+source = { registry = "https://pypi.org/simple" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/b9/93/80f8a520375af9d7ee44571a6544653a176e53c2b8ccce85b97b83c2491b/nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f44f8d86bb7d5629988d61c8d3ae61dddb2015dee142740536bc7481b022fe4b", size = 90549, upload-time = "2024-11-20T17:38:17.387Z" },
+    { url = "https://files.pythonhosted.org/packages/2b/53/36e2fd6c7068997169b49ffc8c12d5af5e5ff209df6e1a2c4d373b3a638f/nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_aarch64.whl", hash = "sha256:adcaabb9d436c9761fca2b13959a2d237c5f9fd406c8e4b723c695409ff88059", size = 90539, upload-time = "2024-10-01T17:00:27.179Z" },
+    { url =
"https://files.pythonhosted.org/packages/56/9a/fff8376f8e3d084cd1530e1ef7b879bb7d6d265620c95c1b322725c694f4/nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b90bed3df379fa79afbd21be8e04a0314336b8ae16768b58f2d34cb1d04cd7d2", size = 89276, upload-time = "2024-11-20T17:38:27.621Z" }, + { url = "https://files.pythonhosted.org/packages/9e/4e/0d0c945463719429b7bd21dece907ad0bde437a2ff12b9b12fee94722ab0/nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6574241a3ec5fdc9334353ab8c479fe75841dbe8f4532a8fc97ce63503330ba1", size = 89265, upload-time = "2024-10-01T17:00:38.172Z" }, +] + +[[package]] +name = "omegaconf" +version = "2.3.0" source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "antlr4-python3-runtime" }, + { name = "pyyaml" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/09/48/6388f1bb9da707110532cb70ec4d2822858ddfb44f1cdf1233c20a80ea4b/omegaconf-2.3.0.tar.gz", hash = "sha256:d5d4b6d29955cc50ad50c46dc269bcd92c6e00f5f90d23ab5fee7bfca4ba4cc7", size = 3298120, upload-time = "2022-12-08T20:59:22.753Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/a2/eb/86626c1bbc2edb86323022371c39aa48df6fd8b0a1647bc274577f72e90b/nvidia_nvtx_cu12-12.8.90-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5b17e2001cc0d751a5bc2c6ec6d26ad95913324a4adb86788c944f8ce9ba441f", size = 89954, upload-time = "2025-03-07T01:42:44.131Z" }, + { url = "https://files.pythonhosted.org/packages/e3/94/1843518e420fa3ed6919835845df698c7e27e183cb997394e4a670973a65/omegaconf-2.3.0-py3-none-any.whl", hash = "sha256:7b4df175cdb08ba400f45cae3bdcae7ba8365db4d165fc65fd04b050ab63b46b", size = 79500, upload-time = "2022-12-08T20:59:19.686Z" }, ] [[package]] @@ -1402,7 +1486,8 @@ name = "scikit-learn" version = "1.7.2" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version < '3.11'", + "(python_full_version < '3.11' and 
sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')", + "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'", ] dependencies = [ { name = "joblib", marker = "python_full_version < '3.11'" }, @@ -1434,8 +1519,10 @@ name = "scikit-learn" version = "1.8.0" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version >= '3.12'", - "python_full_version == '3.11.*'", + "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')", + "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'", + "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')", + "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'", ] dependencies = [ { name = "joblib", marker = "python_full_version >= '3.11'" }, @@ -1464,7 +1551,8 @@ name = "scipy" version = "1.15.3" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version < '3.11'", + "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')", + "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'", ] dependencies = [ { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" }, @@ -1505,8 +1593,10 @@ name = "scipy" version = "1.17.0" source = { registry = "https://pypi.org/simple" } resolution-markers = [ - "python_full_version >= '3.12'", - "python_full_version == '3.11.*'", + "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')", + "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'", + "(python_full_version == '3.11.*' and sys_platform == 
'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')", + "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'", ] dependencies = [ { name = "numpy", version = "2.4.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" }, @@ -1619,51 +1709,78 @@ wheels = [ name = "torch" version = "2.10.0" source = { registry = "https://pypi.org/simple" } +resolution-markers = [ + "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'", + "python_full_version == '3.11.*' and sys_platform != 'linux' and sys_platform != 'win32'", + "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'", +] dependencies = [ - { name = "cuda-bindings", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "filelock" }, - { name = "fsspec" }, - { name = "jinja2" }, - { name = "networkx", version = "3.4.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" }, - { name = "networkx", version = "3.6.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" }, - { name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cufile-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-curand-cu12", marker = "platform_machine 
== 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-cusparselt-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-nvjitlink-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-nvshmem-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "setuptools", marker = "python_full_version >= '3.12'" }, - { name = "sympy" }, - { name = "triton", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" }, - { name = "typing-extensions" }, + { name = "filelock", marker = "sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "fsspec", marker = "sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "jinja2", marker = "sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "networkx", version = "3.4.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11' and sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "networkx", version = "3.6.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11' and sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "setuptools", marker = "python_full_version >= '3.12' and sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "sympy", marker = "sys_platform != 'linux' and sys_platform != 'win32'" }, + { name = "typing-extensions", marker = "sys_platform != 'linux' and sys_platform != 'win32'" }, ] wheels = [ { url = 
"https://files.pythonhosted.org/packages/5b/30/bfebdd8ec77db9a79775121789992d6b3b75ee5494971294d7b4b7c999bc/torch-2.10.0-2-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:2b980edd8d7c0a68c4e951ee1856334a43193f98730d97408fbd148c1a933313", size = 79411457, upload-time = "2026-02-10T21:44:59.189Z" }, { url = "https://files.pythonhosted.org/packages/0f/8b/4b61d6e13f7108f36910df9ab4b58fd389cc2520d54d81b88660804aad99/torch-2.10.0-2-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:418997cb02d0a0f1497cf6a09f63166f9f5df9f3e16c8a716ab76a72127c714f", size = 79423467, upload-time = "2026-02-10T21:44:48.711Z" }, { url = "https://files.pythonhosted.org/packages/d3/54/a2ba279afcca44bbd320d4e73675b282fcee3d81400ea1b53934efca6462/torch-2.10.0-2-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:13ec4add8c3faaed8d13e0574f5cd4a323c11655546f91fbe6afa77b57423574", size = 79498202, upload-time = "2026-02-10T21:44:52.603Z" }, - { url = "https://files.pythonhosted.org/packages/0c/1a/c61f36cfd446170ec27b3a4984f072fd06dab6b5d7ce27e11adb35d6c838/torch-2.10.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:5276fa790a666ee8becaffff8acb711922252521b28fbce5db7db5cf9cb2026d", size = 145992962, upload-time = "2026-01-21T16:24:14.04Z" }, - { url = "https://files.pythonhosted.org/packages/b5/60/6662535354191e2d1555296045b63e4279e5a9dbad49acf55a5d38655a39/torch-2.10.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:aaf663927bcd490ae971469a624c322202a2a1e68936eb952535ca4cd3b90444", size = 915599237, upload-time = "2026-01-21T16:23:25.497Z" }, - { url = "https://files.pythonhosted.org/packages/40/b8/66bbe96f0d79be2b5c697b2e0b187ed792a15c6c4b8904613454651db848/torch-2.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:a4be6a2a190b32ff5c8002a0977a25ea60e64f7ba46b1be37093c141d9c49aeb", size = 113720931, upload-time = "2026-01-21T16:24:23.743Z" }, { url = 
"https://files.pythonhosted.org/packages/76/bb/d820f90e69cda6c8169b32a0c6a3ab7b17bf7990b8f2c680077c24a3c14c/torch-2.10.0-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:35e407430795c8d3edb07a1d711c41cc1f9eaddc8b2f1cc0a165a6767a8fb73d", size = 79411450, upload-time = "2026-01-21T16:25:30.692Z" }, - { url = "https://files.pythonhosted.org/packages/78/89/f5554b13ebd71e05c0b002f95148033e730d3f7067f67423026cc9c69410/torch-2.10.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:3282d9febd1e4e476630a099692b44fdc214ee9bf8ee5377732d9d9dfe5712e4", size = 145992610, upload-time = "2026-01-21T16:25:26.327Z" }, - { url = "https://files.pythonhosted.org/packages/ae/30/a3a2120621bf9c17779b169fc17e3dc29b230c29d0f8222f499f5e159aa8/torch-2.10.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a2f9edd8dbc99f62bc4dfb78af7bf89499bca3d753423ac1b4e06592e467b763", size = 915607863, upload-time = "2026-01-21T16:25:06.696Z" }, - { url = "https://files.pythonhosted.org/packages/6f/3d/c87b33c5f260a2a8ad68da7147e105f05868c281c63d65ed85aa4da98c66/torch-2.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:29b7009dba4b7a1c960260fc8ac85022c784250af43af9fb0ebafc9883782ebd", size = 113723116, upload-time = "2026-01-21T16:25:21.916Z" }, { url = "https://files.pythonhosted.org/packages/61/d8/15b9d9d3a6b0c01b883787bd056acbe5cc321090d4b216d3ea89a8fcfdf3/torch-2.10.0-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:b7bd80f3477b830dd166c707c5b0b82a898e7b16f59a7d9d42778dd058272e8b", size = 79423461, upload-time = "2026-01-21T16:24:50.266Z" }, - { url = "https://files.pythonhosted.org/packages/cc/af/758e242e9102e9988969b5e621d41f36b8f258bb4a099109b7a4b4b50ea4/torch-2.10.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:5fd4117d89ffd47e3dcc71e71a22efac24828ad781c7e46aaaf56bf7f2796acf", size = 145996088, upload-time = "2026-01-21T16:24:44.171Z" }, - { url = 
"https://files.pythonhosted.org/packages/23/8e/3c74db5e53bff7ed9e34c8123e6a8bfef718b2450c35eefab85bb4a7e270/torch-2.10.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:787124e7db3b379d4f1ed54dd12ae7c741c16a4d29b49c0226a89bea50923ffb", size = 915711952, upload-time = "2026-01-21T16:23:53.503Z" }, - { url = "https://files.pythonhosted.org/packages/6e/01/624c4324ca01f66ae4c7cd1b74eb16fb52596dce66dbe51eff95ef9e7a4c/torch-2.10.0-cp312-cp312-win_amd64.whl", hash = "sha256:2c66c61f44c5f903046cc696d088e21062644cbe541c7f1c4eaae88b2ad23547", size = 113757972, upload-time = "2026-01-21T16:24:39.516Z" }, { url = "https://files.pythonhosted.org/packages/c9/5c/dee910b87c4d5c0fcb41b50839ae04df87c1cfc663cf1b5fca7ea565eeaa/torch-2.10.0-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:6d3707a61863d1c4d6ebba7be4ca320f42b869ee657e9b2c21c736bf17000294", size = 79498198, upload-time = "2026-01-21T16:24:34.704Z" }, ] +[[package]] +name = "torch" +version = "2.10.0+cu126" +source = { registry = "https://download.pytorch.org/whl/cu126" } +resolution-markers = [ + "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')", + "(python_full_version == '3.11.*' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform == 'win32')", + "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform == 'win32')", +] +dependencies = [ + { name = "cuda-bindings", marker = "sys_platform == 'linux'" }, + { name = "filelock", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, + { name = "fsspec", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, + { name = "jinja2", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, + { name = "networkx", version = "3.4.2", source = { registry = "https://pypi.org/simple" }, marker = "(python_full_version < '3.11' and sys_platform == 'linux') or (python_full_version < '3.11' and 
sys_platform == 'win32')" }, + { name = "networkx", version = "3.6.1", source = { registry = "https://pypi.org/simple" }, marker = "(python_full_version >= '3.11' and sys_platform == 'linux') or (python_full_version >= '3.11' and sys_platform == 'win32')" }, + { name = "nvidia-cublas-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cuda-cupti-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cuda-nvrtc-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cuda-runtime-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cudnn-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cufft-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cufile-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-curand-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cusolver-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cusparse-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-cusparselt-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-nccl-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-nvjitlink-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-nvshmem-cu12", marker = "sys_platform == 'linux'" }, + { name = "nvidia-nvtx-cu12", marker = "sys_platform == 'linux'" }, + { name = "setuptools", marker = "(python_full_version >= '3.12' and sys_platform == 'linux') or (python_full_version >= '3.12' and sys_platform == 'win32')" }, + { name = "sympy", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, + { name = "triton", marker = "sys_platform == 'linux'" }, + { name = "typing-extensions", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, +] +wheels = [ + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:dae63a4756c9c455f299309b7b093f1b7c3460e63b53769cab10543b51a1d827" }, + { url = 
"https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:a256b51e8ca00770a47fe7ab865e3211d2a080d4f1cdc814cdcfb073b36cf1a1" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp310-cp310-win_amd64.whl", hash = "sha256:b91012be20b6c0370800ed7c153fd5b51582495f00f7341c38fa0cb6b9c9a968" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:3a5fb967ffb53ffa0d2579c9819491cfc36c557040de6fdeabcfcfb45df019bc" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a9a9ba3b2baf23c044499ffbcbed88e04b6e38b94189c7dc42dd2cfcdd8c55c0" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp311-cp311-win_amd64.whl", hash = "sha256:4749cd32e32ed55179ff2ff0407e0ae5077fe4d332bfa49258f4578d09eccb40" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:81264238b3d8840276dd30c31f393e325b8f5da6390d18ac2a80dacecfd693ea" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:2a7a569206f07965eff69b28e147676540bb0ba6e1a39410802b6e4708cb8356" }, + { url = "https://download.pytorch.org/whl/cu126/torch-2.10.0%2Bcu126-cp312-cp312-win_amd64.whl", hash = "sha256:95d8409b8a15191de4c2958e86ca47f3ea8f9739b994ee4ca0e7586f37336413" }, +] + [[package]] name = "tornado" version = "6.5.4" @@ -1709,8 +1826,11 @@ name = "triton" version = "3.6.0" source = { registry = "https://pypi.org/simple" } wheels = [ + { url = "https://files.pythonhosted.org/packages/44/ba/b1b04f4b291a3205d95ebd24465de0e5bf010a2df27a4e58a9b5f039d8f2/triton-3.6.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6c723cfb12f6842a0ae94ac307dba7e7a44741d720a40cf0e270ed4a4e3be781", size = 175972180, upload-time = 
"2026-01-20T16:15:53.664Z" }, { url = "https://files.pythonhosted.org/packages/8c/f7/f1c9d3424ab199ac53c2da567b859bcddbb9c9e7154805119f8bd95ec36f/triton-3.6.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a6550fae429e0667e397e5de64b332d1e5695b73650ee75a6146e2e902770bea", size = 188105201, upload-time = "2026-01-20T16:00:29.272Z" }, + { url = "https://files.pythonhosted.org/packages/0f/2c/96f92f3c60387e14cc45aed49487f3486f89ea27106c1b1376913c62abe4/triton-3.6.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:49df5ef37379c0c2b5c0012286f80174fcf0e073e5ade1ca9a86c36814553651", size = 176081190, upload-time = "2026-01-20T16:16:00.523Z" }, { url = "https://files.pythonhosted.org/packages/e0/12/b05ba554d2c623bffa59922b94b0775673de251f468a9609bc9e45de95e9/triton-3.6.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e8e323d608e3a9bfcc2d9efcc90ceefb764a82b99dea12a86d643c72539ad5d3", size = 188214640, upload-time = "2026-01-20T16:00:35.869Z" }, + { url = "https://files.pythonhosted.org/packages/17/5d/08201db32823bdf77a0e2b9039540080b2e5c23a20706ddba942924ebcd6/triton-3.6.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:374f52c11a711fd062b4bfbb201fd9ac0a5febd28a96fb41b4a0f51dde3157f4", size = 176128243, upload-time = "2026-01-20T16:16:07.857Z" }, { url = "https://files.pythonhosted.org/packages/ab/a8/cdf8b3e4c98132f965f88c2313a4b493266832ad47fb52f23d14d4f86bb5/triton-3.6.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:74caf5e34b66d9f3a429af689c1c7128daba1d8208df60e81106b115c00d6fca", size = 188266850, upload-time = "2026-01-20T16:00:43.041Z" }, ]