This is code accompanying the publication:

Fawzi, A. et al. Discovering faster matrix multiplication algorithms with reinforcement learning. *Nature* **610**, 47-53 (2022).
There are four independent directories:

- `algorithms` contains algorithms discovered by AlphaTensor, represented as factorizations of matrix multiplication tensors, and a Colab notebook showing how to load these.
- `benchmarking` contains a script that can be used to measure the actual speed of matrix multiplication algorithms on an NVIDIA V100 GPU.
- `nonequivalence` contains 14,236 nonequivalent algorithms discovered by AlphaTensor for the same matrix multiplication problem (multiplying 4x4 matrices), and a Colab notebook that verifies their nonequivalence.
- `recombination` contains the code we used to decompose larger matrix multiplication tensors by recombining factorizations of smaller ones.
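As a minimal, self-contained illustration of what a factorization of a matrix multiplication tensor is, the sketch below uses Strassen's classic rank-7 algorithm for 2x2 matrices (a well-known example chosen here for brevity; it makes no assumption about the exact array layout of the AlphaTensor data files). It builds the 2x2 matrix multiplication tensor and checks that the seven rank-one terms sum to it:

```python
import numpy as np

# Strassen's rank-7 factorization of the 2x2 matrix multiplication tensor.
# Row r of u, v, w defines the r-th rank-one term: T = sum_r u_r ⊗ v_r ⊗ w_r.
# Columns index the flattened matrices (a11, a12, a21, a22), etc.
u = np.array([[1, 0, 0, 1],    # a11 + a22
              [0, 0, 1, 1],    # a21 + a22
              [1, 0, 0, 0],    # a11
              [0, 0, 0, 1],    # a22
              [1, 1, 0, 0],    # a11 + a12
              [-1, 0, 1, 0],   # a21 - a11
              [0, 1, 0, -1]])  # a12 - a22
v = np.array([[1, 0, 0, 1],    # b11 + b22
              [1, 0, 0, 0],    # b11
              [0, 1, 0, -1],   # b12 - b22
              [-1, 0, 1, 0],   # b21 - b11
              [0, 0, 0, 1],    # b22
              [1, 1, 0, 0],    # b11 + b12
              [0, 0, 1, 1]])   # b21 + b22
w = np.array([[1, 0, 0, 1],    # m1 -> c11 + c22
              [0, 0, 1, -1],   # m2 -> c21 - c22
              [0, 1, 0, 1],    # m3 -> c12 + c22
              [1, 0, 1, 0],    # m4 -> c11 + c21
              [-1, 1, 0, 0],   # m5 -> c12 - c11
              [0, 0, 0, 1],    # m6 -> c22
              [1, 0, 0, 0]])   # m7 -> c11

# The 2x2 matrix multiplication tensor: T[i, j, k] = 1 iff the i-th entry of A
# times the j-th entry of B contributes to the k-th entry of C = A @ B.
n = 2
T = np.zeros((n * n, n * n, n * n), dtype=int)
for p in range(n):
    for q in range(n):
        for s in range(n):
            T[p * n + q, q * n + s, p * n + s] = 1

# Sum the 7 rank-one terms and compare with T.
reconstruction = np.einsum('ri,rj,rk->ijk', u, v, w)
assert np.array_equal(reconstruction, T)
print('rank-7 factorization reconstructs the 2x2 matmul tensor')
```

The rank of the factorization (here 7, the number of rows) equals the number of scalar multiplications the corresponding algorithm performs; lower rank means a faster algorithm.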
## Installation

- `algorithms`: No installation required.
- `benchmarking`: See the `README` in the subdirectory.
- `nonequivalence`: No installation required.
- `recombination`: A machine with Python 3 installed is required. The required dependencies (`numpy` and `absl-py`) can be installed by executing `pip3 install -r alphatensor/recombination/requirements.txt`.
## Usage

- `algorithms`: The notebook `explore_factorizations.ipynb` can be opened via Colab. When running the code, you will be asked to upload a file containing the factorizations. Please select either of the compressed NumPy files `factorizations_r.npz` (containing algorithms in standard arithmetic) or `factorizations_f2.npz` (containing algorithms in arithmetic modulo 2).
- `benchmarking`: See the `README` in the subdirectory, and Supplement D of the paper.
- `nonequivalence`: The notebook `inspect_factorizations_notebook.ipynb` can be opened via Colab. When running the code, you will be asked to upload a file. Please select the compressed NumPy file `alphatensor_14236_factorizations.npz`. This will upload the factorizations found by AlphaTensor and then compute invariants certifying that they are all nonequivalent. For more details, see Supplement B of the paper.
- `recombination`: Execute `python3 -m alphatensor.recombination.example` on the command line, from the parent directory that contains the `alphatensor` repository as a subdirectory. For more details, see Supplement H of the paper.
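The simplest special case of building larger algorithms from smaller ones is plain recursion: applying Strassen's 7-multiplication algorithm for 2x2 matrices recursively multiplies 4x4 matrices with 7 × 7 = 49 scalar multiplications instead of the naive 64. The sketch below is only this illustrative baseline (the recombination code in this repository implements more general constructions); it counts the scalar multiplications and checks the result against NumPy:

```python
import numpy as np

mults = 0  # running count of scalar multiplications


def strassen(a, b):
    """Multiply two 2^k x 2^k matrices with Strassen's recursion."""
    global mults
    n = a.shape[0]
    if n == 1:
        mults += 1
        return a * b
    h = n // 2
    a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
    b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
    # Strassen's formulas never use commutativity, so they remain valid
    # when the entries are themselves matrix blocks.
    m1 = strassen(a11 + a22, b11 + b22)
    m2 = strassen(a21 + a22, b11)
    m3 = strassen(a11, b12 - b22)
    m4 = strassen(a22, b21 - b11)
    m5 = strassen(a11 + a12, b22)
    m6 = strassen(a21 - a11, b11 + b12)
    m7 = strassen(a12 - a22, b21 + b22)
    c = np.empty_like(a)
    c[:h, :h] = m1 + m4 - m5 + m7
    c[:h, h:] = m3 + m5
    c[h:, :h] = m2 + m4
    c[h:, h:] = m1 - m2 + m3 + m6
    return c


rng = np.random.default_rng(0)
a = rng.integers(-5, 5, (4, 4))
b = rng.integers(-5, 5, (4, 4))
c = strassen(a, b)
assert np.array_equal(c, a @ b)
print(mults)  # 49 scalar multiplications, vs 64 for the naive method
```

Recombination goes beyond this: it mixes factorizations of different sizes and arithmetics to decompose larger tensors, which is why the discovered small factorizations translate into improved ranks for larger matrix multiplication problems.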
## Citing this work

If you use the code or data in this package, please cite:
@Article{AlphaTensor2022,
  author  = {Fawzi, Alhussein and Balog, Matej and Huang, Aja and Hubert, Thomas and Romera-Paredes, Bernardino and Barekatain, Mohammadamin and Novikov, Alexander and Ruiz, Francisco J. R. and Schrittwieser, Julian and Swirszcz, Grzegorz and Silver, David and Hassabis, Demis and Kohli, Pushmeet},
  journal = {Nature},
  title   = {Discovering faster matrix multiplication algorithms with reinforcement learning},
  year    = {2022},
  volume  = {610},
  number  = {7930},
  pages   = {47--53},
  doi     = {10.1038/s41586-022-05172-4},
}

## License and disclaimer

Copyright 2022 DeepMind Technologies Limited
All software is licensed under the Apache License, Version 2.0 (Apache 2.0); you may not use this file except in compliance with the Apache 2.0 license. You may obtain a copy of the Apache 2.0 license at: https://www.apache.org/licenses/LICENSE-2.0
All other materials are licensed under the Creative Commons Attribution 4.0 International License (CC-BY). You may obtain a copy of the CC-BY license at: https://creativecommons.org/licenses/by/4.0/legalcode
Unless required by applicable law or agreed to in writing, all software and materials distributed here under the Apache 2.0 or CC-BY licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the licenses for the specific language governing permissions and limitations under those licenses.
This is not an official Google product.