7 changes: 5 additions & 2 deletions .gitignore
@@ -19,5 +19,8 @@ AGENTS.md
# Experimental code/artifacts
dev/

# Results file
results.tsv
# Cached data files
*.pkl

# Training logs
run.log
24 changes: 24 additions & 0 deletions README.md
@@ -6,6 +6,30 @@

The idea: give an AI agent a small but real LLM training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards the change, and repeats. You wake up in the morning to a log of experiments and (hopefully) a better model. The training code here is a simplified single-GPU implementation of [nanochat](https://github.com/karpathy/nanochat). The core idea is that you're not touching any of the Python files like you normally would as a researcher. Instead, you are programming the `program.md` Markdown file that provides context to the AI agents and sets up your autonomous research org. The default `program.md` in this repo is intentionally kept as a bare-bones baseline, though it's obvious how one would iterate on it over time to find the "research org code" that achieves the fastest research progress, how you'd add more agents to the mix, etc. A bit more context on this project is in this [tweet](https://x.com/karpathy/status/2029701092347630069).

---

## [Karpathy](https://github.com/karpathy)'s [AutoResearch](https://github.com/karpathy/autoresearch) with Memory-in-the-Loop States

*by [Shehab Anwer, MD](https://doi.org/10.1093/ehjimp/qyaf038) (GitHub: [habanwer](https://github.com/habanwer) · [The Adimension](https://github.com/the-adimension))*

> *This fork applies the **DEITY Principles Framework** — **D**ata, **E**thics, **I**nformatics, **T**echnology, and **Y**ou — to implement human-machine interoperability and transparency in research protocols, especially where automation and autonomy are in scope. The framework is described in [The Adimension: bridging human ingenuity and machine intelligence through the DEITY principles framework](https://doi.org/10.1093/ehjimp/qyaf038) (European Heart Journal — Imaging Methods and Practice, 2025).*

### What changed relative to [karpathy/autoresearch](https://github.com/karpathy/autoresearch), in alignment with [The Adimension](https://theadimension.ch/Introduction.html)'s DEITY principles

**Data.** Hardcoded constants extracted into machine-readable JSON configs. Training platform, data paths, and time budgets live in `ground.json`; architecture and optimization hyperparameters live in `model.json`. Experiment results are logged in three formats: JSON (structured config), TSV (metrics), and Markdown (human/LLM-readable memory).
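
A minimal sketch of one experiment record flowing into the three formats (file names and field names here are illustrative assumptions, not the exact schema):

```python
import json, time

# A hypothetical experiment outcome (fields are assumptions for illustration)
record = {"run_id": "exp-003", "val_bpb": 0.9812, "wall_seconds": 298,
          "config": {"depth": 8, "matrix_lr": 0.01}}

# JSON: structured config snapshot for this run
with open(f"{record['run_id']}.json", "w") as f:
    json.dump(record, f, indent=2)

# TSV: one appended metrics row per run
with open("results.tsv", "a") as f:
    f.write(f"{record['run_id']}\t{record['val_bpb']}\t{record['wall_seconds']}\n")

# Markdown: human/LLM-readable memory entry
with open("memory.md", "a") as f:
    f.write(f"- **{record['run_id']}** ({time.strftime('%Y-%m-%d')}): "
            f"val_bpb={record['val_bpb']} (kept)\n")
```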

**Ethics.** Explicit file ownership governance: `ground.json` and `program.md` are user-owned and read-only; `model.json` and `train.py` are agent-owned and editable during automated experiments. The agent writes experiment memory to `sessions/memory.md` — pinned to the Git branch's ID. Results are append-only, and a crash handler persists state even on failure. Timestamped and SHA-verified outputs are logged.
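
A minimal sketch of the append-only, crash-safe logging idea (the handler name and fields are assumptions; the repo's actual implementation may differ):

```python
import atexit, hashlib, json, os, time

STATE = {"run_id": "exp-004", "status": "running", "metrics": {}}

def persist_state():
    # Append-only: prior entries are never rewritten, only extended
    payload = json.dumps(STATE, sort_keys=True)
    sha = hashlib.sha256(payload.encode()).hexdigest()[:12]
    os.makedirs("sessions", exist_ok=True)
    with open("sessions/memory.md", "a") as f:
        f.write(f"- {time.strftime('%Y-%m-%dT%H:%M:%S')} {STATE['run_id']} "
                f"status={STATE['status']} sha={sha}\n")

atexit.register(persist_state)  # runs on normal exit and after unhandled exceptions
```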

**Informatics.** `program.md` introduces a structured agent protocol with an orientation checklist, decision metrics table, execution sequence, and logging rules. Train output uses a parseable `---`-delimited key=value block so metrics flow directly into the experiment log. All data paths are configurable in `ground.json` for transparency and reproducibility to enable human review and auditing of the research process, and to allow for future integration with external data sources or experiment tracking tools and agents.
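
The described key=value block makes metrics trivially machine-readable. A sketch of what parsing it could look like (the key names shown are assumptions, not the exact ones `train.py` prints):

```python
# Assumed example of a train.py output block, for illustration:
#   ---
#   val_bpb=0.9812
#   tokens_seen=41943040
#   wall_seconds=298.4
#   ---
def parse_metrics(stdout: str) -> dict:
    """Collect key=value pairs between '---' delimiter lines."""
    inside, metrics = False, {}
    for line in stdout.splitlines():
        if line.strip() == "---":
            inside = not inside
            continue
        if inside and "=" in line:
            key, value = line.split("=", 1)
            metrics[key.strip()] = value.strip()
    return metrics
```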

**Technology.** Runtime GPU platform detection spanning Volta (compute capability 7.0) through Blackwell (10.0). `prepare.py` auto-selects dtype, attention backend, `torch.compile`, and `GradScaler` per GPU generation, and computes peak TFLOPS from SM count and clock speed. Turing GPUs get fp16 with fp32 optimizer moments; Ampere and newer get bf16. The `processor` block in `ground.json` allows manual overrides. Windows compile guards are included.
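
For example, pinning behavior on a specific machine means replacing the `"auto"` values in `ground.json`'s `processor` block; the values below are illustrative, not recommendations:

```json
"processor": {
  "dtype": "fp16",
  "compile": false,
  "flash_attention": "sdpa",
  "peak_flops": 8.93e13
}
```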

**You.** The human governs constraints (`ground.json`, `program.md`); the agent experiments autonomously within them (`model.json`, `train.py`). `update_research_memory()` closes the loop — experiment outcomes persist to `sessions/memory.md` so the agent's next hypothesis is informed by all prior runs without modifying user-owned files.
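
The diff below does not show `update_research_memory()` itself, so the following is only a hedged sketch of the loop-closing step it describes (signature and fields are assumptions):

```python
import os

def update_research_memory(outcome: dict, path: str = "sessions/memory.md") -> None:
    """Append one experiment outcome so the next hypothesis can build on it."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    line = (f"- {outcome['run_id']}: {outcome['hypothesis']} -> "
            f"val_bpb={outcome['val_bpb']:.4f} ({outcome['verdict']})\n")
    with open(path, "a") as f:  # append-only; user-owned files stay untouched
        f.write(line)

update_research_memory({"run_id": "exp-005", "hypothesis": "raise matrix_lr to 0.02",
                        "val_bpb": 0.9731, "verdict": "kept"})
```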

**Upstream**: [karpathy/autoresearch](https://github.com/karpathy/autoresearch) — **Related fork**: [jsegov/autoresearch-win-rtx](https://github.com/jsegov/autoresearch-win-rtx) (Windows RTX adaptation referenced for platform support)

---

## How it works

The repo is deliberately kept small and only really has three files that matter:
Expand Down
4 changes: 2 additions & 2 deletions analysis.ipynb
@@ -224,7 +224,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -238,7 +238,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.0"
}
},
"nbformat": 4,
36 changes: 36 additions & 0 deletions ground.json
@@ -0,0 +1,36 @@
{
"codename": "mar15-2-rtx5000",
"mode": "test",

"data": {
"cache_dir": "~/.cache/autoresearch",
"base_url": "https://huggingface.co/datasets/karpathy/climbmix-400b-shuffle/resolve/main",
"max_shard": 6542,
"val_shard": 6542,
"num_shards": 10,
"download_workers": 8
},

"tokenizer": {
"vocab_size": 8192,
"split_pattern": "'(?i:[sdmt]|ll|ve|re)|[^\\r\\n\\p{L}\\p{N}]?+\\p{L}+|\\p{N}{1,2}| ?[^\\s\\p{L}\\p{N}]++[\\r\\n]*|\\s*[\\r\\n]|\\s+(?!\\S)|\\s+",
"special_tokens_count": 4,
"bos_token": "<|reserved_0|>"
},

"training": {
"max_seq_len": 2048,
"time_budget_test": 60,
"time_budget_train": 300,
"eval_tokens_multiplier": 40,
"eval_tokens_unit": 524288,
"max_run_wall_seconds": 30
},

"processor": {
"dtype": "auto",
"compile": "auto",
"flash_attention": "auto",
"peak_flops": "auto"
}
}
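
Two quantities derive directly from the `training` block above; this is the same arithmetic `prepare.py` performs in the diff further down:

```python
eval_tokens = 40 * 524288   # eval_tokens_multiplier * eval_tokens_unit = 20,971,520
time_budget = 60            # mode "test" selects time_budget_test (else time_budget_train = 300)
```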
27 changes: 27 additions & 0 deletions model.json
@@ -0,0 +1,27 @@
{
"architecture": {
"depth": 8,
"aspect_ratio": 128,
"head_dim": 64,
"window_pattern": "SL"
},

"optimization": {
"total_batch_size_power": 17,
"device_batch_size": 16,
"embedding_lr": 0.1,
"unembedding_lr": 0.002,
"matrix_lr": 0.01,
"scalar_lr": 0.25,
"weight_decay": 0.01,
"adam_betas": [0.8, 0.95],
"warmup_ratio": 0.2,
"warmdown_ratio": 0.75,
"final_lr_frac": 0.1
},

"evaluation": {
"batch_size": 16,
"tokens": 3145728
}
}
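
Assuming the nanochat-style convention that model width is `depth * aspect_ratio` (an assumption about how `train.py` consumes these fields; this diff does not confirm it), the defaults above imply:

```python
depth, aspect_ratio, head_dim = 8, 128, 64
model_dim = depth * aspect_ratio      # 1024, assuming width = depth * aspect_ratio
num_heads = model_dim // head_dim     # 16 attention heads
total_batch_tokens = 2 ** 17          # total_batch_size_power of 17 -> 131,072 tokens
```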
153 changes: 136 additions & 17 deletions prepare.py
@@ -21,34 +21,153 @@
import pyarrow.parquet as pq
import rustbpe
import tiktoken
import json
import torch

# ---------------------------------------------------------------------------
# Constants (fixed, do not modify)
# Constants — loaded from ground.json (required); fail with clear message
# ---------------------------------------------------------------------------

MAX_SEQ_LEN = 2048 # context length
TIME_BUDGET = 300 # training time budget in seconds (5 minutes)
EVAL_TOKENS = 40 * 524288 # number of tokens for val eval
_GROUND_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ground.json")
try:
    with open(_GROUND_PATH, "r", encoding="utf-8") as _f:
        _ground = json.load(_f)
except FileNotFoundError:
    sys.exit(f"ERROR: ground.json not found at {_GROUND_PATH}. "
             "This file is required — see README.md for setup instructions.")
except json.JSONDecodeError as _e:
    sys.exit(f"ERROR: ground.json is malformed: {_e}")

_required_keys = ["training", "data", "tokenizer", "mode", "processor"]
_missing = [k for k in _required_keys if k not in _ground]
if _missing:
    sys.exit(f"ERROR: ground.json missing required keys: {_missing}")

_training = _ground["training"]
_data = _ground["data"]
_tok = _ground["tokenizer"]
_mode = _ground["mode"]

MAX_SEQ_LEN = _training["max_seq_len"]
TIME_BUDGET = _training["time_budget_test"] if _mode == "test" else _training["time_budget_train"]
EVAL_TOKENS = _training["eval_tokens_multiplier"] * _training["eval_tokens_unit"]

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

CACHE_DIR = os.path.join(os.path.expanduser("~"), ".cache", "autoresearch")
CACHE_DIR = os.path.expanduser(_data["cache_dir"])
DATA_DIR = os.path.join(CACHE_DIR, "data")
TOKENIZER_DIR = os.path.join(CACHE_DIR, "tokenizer")
BASE_URL = "https://huggingface.co/datasets/karpathy/climbmix-400b-shuffle/resolve/main"
MAX_SHARD = 6542 # the last datashard is shard_06542.parquet
VAL_SHARD = MAX_SHARD # pinned validation shard (shard_06542)
BASE_URL = _data["base_url"]
MAX_SHARD = _data["max_shard"]
VAL_SHARD = _data["val_shard"]
VAL_FILENAME = f"shard_{VAL_SHARD:05d}.parquet"
VOCAB_SIZE = 8192
VOCAB_SIZE = _tok["vocab_size"]

# BPE split pattern (GPT-4 style, with \p{N}{1,2} instead of {1,3})
SPLIT_PATTERN = r"""'(?i:[sdmt]|ll|ve|re)|[^\r\n\p{L}\p{N}]?+\p{L}+|\p{N}{1,2}| ?[^\s\p{L}\p{N}]++[\r\n]*|\s*[\r\n]|\s+(?!\S)|\s+"""
SPLIT_PATTERN = _tok["split_pattern"]

SPECIAL_TOKENS = [f"<|reserved_{i}|>" for i in range(4)]
BOS_TOKEN = "<|reserved_0|>"
_n_special = _tok["special_tokens_count"]
SPECIAL_TOKENS = [f"<|reserved_{i}|>" for i in range(_n_special)]
BOS_TOKEN = _tok["bos_token"]

# ---------------------------------------------------------------------------
# Platform detection — auto-detect GPU, with ground.json processor overrides
# Exports PLATFORM dict consumed by train.py alongside MAX_SEQ_LEN, TIME_BUDGET
# ---------------------------------------------------------------------------

_proc = _ground["processor"]

# FP16/BF16 tensor-core ops per cycle per SM, by (major, minor) compute capability.
# Used with SM count + boost clock to compute peak FLOPS at runtime.
_GPU_OPS_PER_CYCLE_PER_SM = {
    (7, 0): 128,    # Volta (V100)
    (7, 5): 128,    # Turing (RTX 20xx, Quadro RTX, T4)
    (8, 0): 512,    # Ampere GA100 (A100)
    (8, 6): 256,    # Ampere GA10x (RTX 30xx, A40)
    (8, 7): 256,    # Ampere GA10B (Jetson Orin)
    (8, 9): 512,    # Ada Lovelace (RTX 40xx, L40)
    (9, 0): 1024,   # Hopper (H100 SXM)
    (10, 0): 1024,  # Blackwell (B100/B200) — provisional
}

def _estimate_peak_flops(device_idx=0):
    """Compute peak FP16/BF16 tensor FLOPS from SM count and clock."""
    import torch as _torch
    props = _torch.cuda.get_device_properties(device_idx)
    cc = (props.major, props.minor)
    sm_count = props.multi_processor_count
    clock_ghz = props.clock_rate / 1e6  # clock_rate is in kHz
    ops_per_cycle = _GPU_OPS_PER_CYCLE_PER_SM.get(cc)
    if ops_per_cycle is None:
        major_fallbacks = {7: 128, 8: 256, 9: 1024, 10: 1024}
        ops_per_cycle = major_fallbacks.get(cc[0], 128)
        print(f"Warning: unknown GPU CC {cc}, using fallback ops/cycle={ops_per_cycle}")
    peak = sm_count * ops_per_cycle * clock_ghz * 1e9 * 2  # *2 for FMA
    print(f"GPU: {props.name} | CC {cc[0]}.{cc[1]} | {sm_count} SMs | "
          f"{clock_ghz:.2f} GHz | peak {peak/1e12:.1f} TFLOPS")
    return peak
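
# Worked example with these constants (illustrative): an A100 is CC (8, 0)
# with 108 SMs and ~1.41 GHz boost, so 108 * 512 * 1.41e9 * 2 ~= 1.56e14,
# i.e. about 156 TFLOPS by this estimate.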


def _detect_platform():
    import torch as _torch
    p = {"device": "cpu", "dtype": "fp32", "use_grad_scaler": False,
         "attention": "naive", "compile": False, "embedding_dtype": "fp32",
         "fa3_repo": None, "peak_flops": 0.0}
    if _torch.cuda.is_available():
        p["device"] = "cuda"
        cc = _torch.cuda.get_device_capability(0)
        p["peak_flops"] = _estimate_peak_flops(0)
        if cc[0] >= 9:
            # Hopper+ — bf16, FA3, compile
            p["dtype"] = "bf16"
            p["embedding_dtype"] = "bf16"
            p["fa3_repo"] = "varunneal/flash-attention-3"
            p["attention"] = "flash"
            try:
                import triton  # noqa: F401
                p["compile"] = sys.platform != "win32"
            except ImportError:
                print("Warning: triton not found — torch.compile disabled (Hopper)")
        elif cc[0] >= 8:
            # Ampere/Ada — bf16, FA3, compile
            p["dtype"] = "bf16"
            p["embedding_dtype"] = "bf16"
            p["fa3_repo"] = "kernels-community/flash-attn3"
            p["attention"] = "flash"
            try:
                import triton  # noqa: F401
                p["compile"] = sys.platform != "win32"
            except ImportError:
                print("Warning: triton not found — torch.compile disabled (Ampere/Ada)")
        else:
            # Turing / older — fp16, SDPA, no compile
            p["dtype"] = "fp16"
            p["use_grad_scaler"] = True
            p["attention"] = "sdpa"
            p["embedding_dtype"] = "fp32"

    # Apply ground.json processor overrides (non-"auto" values)
    if _proc["dtype"] != "auto":
        p["dtype"] = _proc["dtype"]
        p["use_grad_scaler"] = p["dtype"] == "fp16"
        p["embedding_dtype"] = "bf16" if p["dtype"] == "bf16" else "fp32"
    if _proc["compile"] != "auto":
        p["compile"] = bool(_proc["compile"])
    if _proc["flash_attention"] != "auto":
        fa = _proc["flash_attention"]
        if fa is False or fa == "sdpa":
            p["attention"] = "sdpa"
            p["fa3_repo"] = None
        elif isinstance(fa, str) and fa not in ("auto", "sdpa"):
            p["attention"] = "flash"
            p["fa3_repo"] = fa
    pf = _proc.get("peak_flops", "auto")
    if pf != "auto":
        p["peak_flops"] = float(pf)
    return p

PLATFORM = _detect_platform()

# ---------------------------------------------------------------------------
# Data download
@@ -81,8 +200,8 @@ def download_single_shard(index):
        if os.path.exists(path):
            try:
                os.remove(path)
            except OSError:
                pass
            except OSError as cleanup_err:
                print(f" Warning: failed to clean up {path}: {cleanup_err}")
        if attempt < max_attempts:
            time.sleep(2 ** attempt)
    return False
@@ -370,8 +489,8 @@ def evaluate_bpb(model, tokenizer, batch_size):

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Prepare data and tokenizer for autoresearch")
    parser.add_argument("--num-shards", type=int, default=10, help="Number of training shards to download (-1 = all). Val shard is always pinned.")
    parser.add_argument("--download-workers", type=int, default=8, help="Number of parallel download workers")
    parser.add_argument("--num-shards", type=int, default=_data["num_shards"], help="Number of training shards to download (-1 = all). Val shard is always pinned.")
    parser.add_argument("--download-workers", type=int, default=_data["download_workers"], help="Number of parallel download workers")
    args = parser.parse_args()

    num_shards = MAX_SHARD if args.num_shards == -1 else args.num_shards