98 changes: 98 additions & 0 deletions .github/agents/my-agent.agent.md
@@ -0,0 +1,98 @@
# autoresearch — agent instructions for https://github.com/habanwer/autoresearch/tree/autoresearch/memory-in-the-loop

## 1. Orientation (do this first, every run)

Before making any changes, read and understand the codebase:

1. Read `ground.json` — the user-owned, read-only configuration:
- `mode`: `"test"` or `"train"` — determines which time budget applies.
- `training.time_budget_test` / `training.time_budget_train` — the wall-clock seconds the training loop is allowed to run. **Respect this strictly.**
- `training.max_seq_len` — sequence length, fixed.
- `processor` — dtype, compile, flash_attention, peak_flops overrides (all `"auto"` by default).
2. Read `model.json` — your hyperparameter file (you own this):
- `architecture`: depth, aspect_ratio, head_dim, window_pattern.
- `optimization`: batch sizes, learning rates, weight decay, adam betas, warmup/warmdown ratios, final_lr_frac.
- `evaluation`: batch_size, tokens (for the fast eval after training).
3. Read `prepare.py` — understand but **never edit**:
- Exports: `MAX_SEQ_LEN`, `TIME_BUDGET`, `PLATFORM`, `Tokenizer`, `make_dataloader`, `evaluate_bpb`, `get_token_bytes`.
- `PLATFORM` dict: device, dtype, use_grad_scaler, attention, compile, peak_flops (auto-detected from GPU hardware specs).
4. Read `train.py` — the model and training loop (you own this):
- Loads all hyperparameters from `model.json` at startup.
- Imports platform config from `prepare.py`.
- Prints a `---` separator followed by key=value summary lines at the end of training (a parsing sketch follows this list).
5. Note the key metric: **`val_bpb`** (bits per byte) — lower is better. This is printed by `train.py` after the training loop completes.
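
To make step 4 concrete, here is a minimal sketch of how the `---` block might be parsed from a captured log. The key names come from the decision-metrics table below; the parsing helper itself is an illustrative assumption, not code from the repo.

```python
# Sketch: parse the key=value summary train.py prints after its final `---` line.
# Assumes the metrics block is the last `---`-delimited section of the log.
def parse_metrics(log_text: str) -> dict[str, float]:
    block = log_text.rsplit("---", 1)[-1]
    metrics = {}
    for line in block.strip().splitlines():
        key, sep, value = line.partition("=")
        if sep:
            try:
                metrics[key.strip()] = float(value.strip())
            except ValueError:
                pass  # skip non-numeric summary lines
    return metrics
```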

## 2. Decision metrics

Use these to guide your experiment choices:

| Metric | Source | Meaning |
|---|---|---|
| `val_bpb` | train.py stdout | Primary objective — minimize this |
| `peak_vram_mb` | train.py stdout | Must not OOM — watch this when increasing batch/model size |
| `mfu_percent` | train.py stdout | Hardware utilization — indicates if compute is bottlenecked |
| `training_seconds` | train.py stdout | Must stay within `TIME_BUDGET` |
| `total_tokens_M` | train.py stdout | Throughput — more tokens = more learning within budget |
| `num_params_M` | train.py stdout | Model capacity — larger is not always better under time constraint |

## 3. File ownership

| File | Owner | Editable | Purpose |
|---|---|---|---|
| `ground.json` | User | **NO** | Platform config, data paths, time budgets |
| `prepare.py` | User | **NO** | Data prep, tokenizer, dataloader, eval, platform detection |
| `model.json` | Agent | **YES** | Architecture + optimization hyperparameters |
| `train.py` | Agent | **YES** | Model definition, optimizer, training loop |
| `results.tsv` | Agent | **YES** | Experiment log — append only |
| `program.md` | User | **NO** | This document |

## 4. Execution sequence

### First run (setup)

1. Run `uv run prepare.py` to ensure data and tokenizer are cached.
2. Initialize `results.tsv` with this exact header (tab-separated):

```
run_id\tval_bpb\tpeak_vram_mb\tmfu_percent\ttraining_seconds\ttotal_tokens_M\tnum_params_M\tstatus\tdescription
```

(Each `\t` above represents a literal tab character; a helper sketch for this header follows the list.)

3. Run `uv run train.py`, capturing stdout to `sessions/<run_id>.log`.
- `run_id` = short git commit hash or a timestamp tag — unique per run.
4. Parse the `---` block from the log to extract metrics.
5. Append one row to `results.tsv` with the extracted values and `status=baseline`.
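
A hedged helper sketch for steps 2 and 5: the column names and tab separation are exact per the header above, but this function is illustrative, not part of the repo.

```python
from pathlib import Path

# Exact column order from the header above.
COLUMNS = ["run_id", "val_bpb", "peak_vram_mb", "mfu_percent",
           "training_seconds", "total_tokens_M", "num_params_M",
           "status", "description"]

def append_row(row: dict, path: str = "results.tsv") -> None:
    """Create results.tsv with the literal-tab header if missing, then append one row."""
    f = Path(path)
    if not f.exists():
        f.write_text("\t".join(COLUMNS) + "\n")
    with f.open("a") as out:
        out.write("\t".join(str(row.get(c, "")) for c in COLUMNS) + "\n")
```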

### Subsequent runs (experiment loop)

1. Form one hypothesis from the current code and most recent run metrics.
2. Edit `model.json` and/or `train.py`.
3. Commit with a message describing the hypothesis.
4. Run `uv run train.py`, capturing stdout to `sessions/<run_id>.log` (use the new commit hash as `run_id`).
5. Parse the `---` block. Extract `val_bpb`, `peak_vram_mb`, `mfu_percent`, `training_seconds`, `total_tokens_M`, `num_params_M`.
6. Append one row to `results.tsv`:
- `status=keep` if `val_bpb` improved.
- `status=discard` if `val_bpb` did not improve.
- `status=crash` if the run failed.
7. If `discard` or `crash`: revert with `git reset --hard HEAD~1` (see the sketch after this list).
8. Continue to next hypothesis.
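
A sketch of the keep/discard/revert logic in steps 6 and 7. The `git reset --hard HEAD~1` command is from the protocol above; the surrounding glue code is an assumption.

```python
import subprocess

def decide_status(new_bpb: float, best_bpb: float, crashed: bool = False) -> str:
    # Lower val_bpb is better, per the decision-metrics table.
    if crashed:
        return "crash"
    return "keep" if new_bpb < best_bpb else "discard"

status = decide_status(new_bpb=1.07, best_bpb=1.05)
if status in ("discard", "crash"):
    # Revert the hypothesis commit so the next run starts from the best known state.
    subprocess.run(["git", "reset", "--hard", "HEAD~1"], check=True)
```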

## 5. Logging rules

- Every run MUST have its own log file: `sessions/<run_id>.log`.
- Every run MUST have exactly one row appended to `results.tsv`.
- The `run_id` in `results.tsv` must match the log filename (without `.log`); a consistency-check sketch follows these rules.
- Never overwrite or delete previous log files or results rows.
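
A small consistency-check sketch for these rules (illustrative, not part of the repo):

```python
from pathlib import Path

# Every run_id in results.tsv must have a matching sessions/<run_id>.log file.
rows = Path("results.tsv").read_text().splitlines()[1:]  # skip the header
run_ids = {line.split("\t")[0] for line in rows if line.strip()}
logs = {p.stem for p in Path("sessions").glob("*.log")}
missing = run_ids - logs
assert not missing, f"runs without logs: {missing}"
```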

## 6. Constraints

1. **Time budget**: `train.py` self-enforces via `TIME_BUDGET` from `ground.json`. Do not circumvent this.
2. **No new packages**: use only what is already installed in the environment.
3. **Do not edit** `ground.json`, `prepare.py`, or `program.md`.
4. **Prefer simpler changes** when two options yield similar `val_bpb`.
5. **VRAM**: if a run OOMs, reduce `device_batch_size` in `model.json` or shrink the model before retrying (a retry sketch follows this list).
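
For constraint 5, a sketch of what the OOM retry edit might look like. The `device_batch_size` key is real (see `model.json`); the halving policy is an assumption.

```python
import json

# Halve device_batch_size in model.json after an OOM, then retry the run.
with open("model.json") as f:
    cfg = json.load(f)
cfg["optimization"]["device_batch_size"] = max(1, cfg["optimization"]["device_batch_size"] // 2)
with open("model.json", "w") as f:
    json.dump(cfg, f, indent=2)
```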

## 7. Autonomy

Continue iterating experiments until manually stopped. Do not pause for permission between runs.
7 changes: 5 additions & 2 deletions .gitignore
@@ -19,5 +19,8 @@ AGENTS.md
# Experimental code/artifacts
dev/

# Results file
results.tsv
# Cached data files
*.pkl

# Training logs
run.log
24 changes: 24 additions & 0 deletions README.md
@@ -6,6 +6,30 @@

The idea: give an AI agent a small but real LLM training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards, and repeats. You wake up in the morning to a log of experiments and (hopefully) a better model. The training code here is a simplified single-GPU implementation of [nanochat](https://github.com/karpathy/nanochat). The core idea is that you're not touching any of the Python files like you normally would as a researcher. Instead, you are programming the `program.md` Markdown file that provides context to the AI agents and sets up your autonomous research org. The default `program.md` in this repo is intentionally kept as a bare-bones baseline, though it's obvious how one would iterate on it over time to find the "research org code" that achieves the fastest research progress, how you'd add more agents to the mix, etc. More context on this project is in this [tweet](https://x.com/karpathy/status/2029701092347630069).

---

## [Karpathy](https://github.com/karpathy)'s [AutoResearch](https://github.com/karpathy/autoresearch) with Memory-in-the-Loop States

*by [Shehab Anwer, MD](https://doi.org/10.1093/ehjimp/qyaf038) (GitHub: [habanwer](https://github.com/habanwer) · [The Adimension](https://github.com/the-adimension))*

> *This fork applies the **DEITY Principles Framework** — **D**ata, **E**thics, **I**nformatics, **T**echnology, and **Y**ou — to implement human-machine interoperability and transparency in research protocols, especially when automation and autonomy are in scope. The framework is described in [The Adimension: bridging human ingenuity and machine intelligence through the DEITY principles framework](https://doi.org/10.1093/ehjimp/qyaf038) (European Heart Journal — Imaging Methods and Practice, 2025).*

### What changed (relative to [karpathy/autoresearch](https://github.com/karpathy/autoresearch), in alignment with [The Adimension](https://theadimension.ch/Introduction.html)'s DEITY Principles)

**Data.** Hardcoded constants extracted into machine-readable JSON configs. Training platform, data paths, and time budgets live in `ground.json`; architecture and optimization hyperparameters live in `model.json`. Experiment results are logged in three formats: JSON (structured config), TSV (metrics), and Markdown (human/LLM-readable memory).

**Ethics.** Explicit file ownership governance: `ground.json` and `program.md` are user-owned and read-only; `model.json` and `train.py` are agent-owned and editable throughout the automated experiment. The agent writes experiment memory to `sessions/memory.md` — pinned to the Git branch's ID. Results are append-only, and a crash handler persists state even on failure. Timestamped and SHA-verified outputs are logged.

**Informatics.** `program.md` introduces a structured agent protocol with an orientation checklist, decision metrics table, execution sequence, and logging rules. Train output uses a parseable `---`-delimited key=value block so metrics flow directly into the experiment log. All data paths are configurable in `ground.json` for transparency and reproducibility, enabling human review and auditing of the research process and allowing future integration with external data sources or experiment-tracking tools and agents.

**Technology.** Runtime GPU platform detection spanning Volta (SM 7.0) through Blackwell (SM 10.0). `prepare.py` auto-selects dtype, attention backend, `torch.compile`, and `GradScaler` per GPU generation, and computes peak TFLOPS from SM count and clock. Turing GPUs get fp16 with fp32 optimizer moments; Ampere+ get bf16. `ground.json` processor overrides allow manual tuning. Windows compile guards included.
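
An illustrative sketch of the generation-based selection described above (not the actual `prepare.py` code; the thresholds follow the fp16/bf16 split stated in this paragraph):

```python
import torch

# Pick dtype and GradScaler usage by GPU compute capability (illustrative).
major, minor = torch.cuda.get_device_capability()
sm = major * 10 + minor
if sm >= 80:    # Ampere (SM 8.0) and newer: bf16, no GradScaler
    dtype, use_grad_scaler = torch.bfloat16, False
elif sm >= 70:  # Volta/Turing (SM 7.x): fp16 with fp32 optimizer moments
    dtype, use_grad_scaler = torch.float16, True
else:
    dtype, use_grad_scaler = torch.float32, False
```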

**You.** The human governs constraints (`ground.json`, `program.md`); the agent experiments autonomously within them (`model.json`, `train.py`). `update_research_memory()` closes the loop — experiment outcomes persist to `sessions/memory.md` so the agent's next hypothesis is informed by all prior runs without modifying user-owned files.
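
A minimal sketch of that loop closure. The function name `update_research_memory()` comes from this fork's description; its signature and body here are assumptions.

```python
from datetime import datetime, timezone

def update_research_memory(run_id: str, metrics: dict, hypothesis: str,
                           path: str = "sessions/memory.md") -> None:
    """Append one experiment's outcome so future hypotheses see all prior runs."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(f"\n## {run_id} ({stamp})\n\n{hypothesis}\n\n")
        for key, value in metrics.items():
            f.write(f"- {key}: {value}\n")
```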

**Upstream**: [karpathy/autoresearch](https://github.com/karpathy/autoresearch) — **Related fork**: [jsegov/autoresearch-win-rtx](https://github.com/jsegov/autoresearch-win-rtx) (Windows RTX adaptation referenced for platform support)

---

## How it works

The repo is deliberately kept small and only really has three files that matter:
4 changes: 2 additions & 2 deletions analysis.ipynb
@@ -224,7 +224,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -238,7 +238,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.0"
}
},
"nbformat": 4,
36 changes: 36 additions & 0 deletions ground.json
@@ -0,0 +1,36 @@
{
"codename": "mar15-2-rtx5000",
"mode": "test",

"data": {
"cache_dir": "~/.cache/autoresearch",
"base_url": "https://huggingface.co/datasets/karpathy/climbmix-400b-shuffle/resolve/main",
"max_shard": 6542,
"val_shard": 6542,
"num_shards": 10,
"download_workers": 8
},

"tokenizer": {
"vocab_size": 8192,
"split_pattern": "'(?i:[sdmt]|ll|ve|re)|[^\\r\\n\\p{L}\\p{N}]?+\\p{L}+|\\p{N}{1,2}| ?[^\\s\\p{L}\\p{N}]++[\\r\\n]*|\\s*[\\r\\n]|\\s+(?!\\S)|\\s+",
"special_tokens_count": 4,
"bos_token": "<|reserved_0|>"
},

"training": {
"max_seq_len": 2048,
"time_budget_test": 60,
"time_budget_train": 300,
"eval_tokens_multiplier": 40,
"eval_tokens_unit": 524288,
"max_run_wall_seconds": 30
},

"processor": {
"dtype": "auto",
"compile": "auto",
"flash_attention": "auto",
"peak_flops": "auto"
}
}
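
A hedged sketch of how `mode` could select the active time budget (the key names are from the config above; the resolution code is illustrative, not `prepare.py`):

```python
import json

with open("ground.json") as f:
    ground = json.load(f)

t = ground["training"]
TIME_BUDGET = t["time_budget_test"] if ground["mode"] == "test" else t["time_budget_train"]
print(TIME_BUDGET)  # 60 for the mode="test" config above
```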
27 changes: 27 additions & 0 deletions model.json
@@ -0,0 +1,27 @@
{
"architecture": {
"depth": 8,
"aspect_ratio": 128,
"head_dim": 64,
"window_pattern": "SL"
},

"optimization": {
"total_batch_size_power": 17,
"device_batch_size": 16,
"embedding_lr": 0.1,
"unembedding_lr": 0.002,
"matrix_lr": 0.01,
"scalar_lr": 0.25,
"weight_decay": 0.01,
"adam_betas": [0.8, 0.95],
"warmup_ratio": 0.2,
"warmdown_ratio": 0.75,
"final_lr_frac": 0.1
},

"evaluation": {
"batch_size": 16,
"tokens": 3145728
}
}
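
Assuming `total_batch_size_power` is an exponent of two over tokens (an interpretation, not confirmed by the file), the implied gradient-accumulation arithmetic would be:

```python
# Illustrative arithmetic, assuming the total batch is 2**total_batch_size_power tokens.
total_tokens = 2 ** 17        # total_batch_size_power = 17 -> 131072 tokens
per_step = 16 * 2048          # device_batch_size * max_seq_len = 32768 tokens per step
grad_accum_steps = total_tokens // per_step
print(grad_accum_steps)       # 4
```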