
Commit bbbd99f

apsdehal authored and facebook-github-bot committed on Oct 14, 2021
[chores] Rename master to main everywhere (#190)
Summary:
Pull Request resolved: fairinternal/mmf-internal#190
Pull Request resolved: #1121

Renaming master references to main at most places as part of the terminology shift.

Reviewed By: ebsmothers

Differential Revision: D31623714

fbshipit-source-id: a8052d9606373f5bfb7c9ad6bb78628bbfa220e7
1 parent 582c719 · commit bbbd99f

39 files changed (+135 -129 lines)

‎.github/CONTRIBUTING.md

+2 -2

@@ -40,7 +40,7 @@ We take the following factors into consideration before accepting features and PRs
 ### Process
 
 1. Read the [PR guidelines](#guidelines) if you haven't.
-1. Fork the repo and create your branch from `master`.
+1. Fork the repo and create your branch from `main`.
 1. If your PR contains multiple orthogonal changes, split it to several PRs. Keep one PR focused on a single change while keeping it small.
 1. If you've added code that should be tested, add tests.
 1. Follow the [coding style guidelines](#coding-style) mentioned below.
@@ -64,7 +64,7 @@ pip install pre-commit && pre-commit install
 
 After this, pre-commit hooks will be run before every commit.
 
-* Read the [editorconfig](https://github.com/facebookresearch/mmf/blob/master/.editorconfig) file to understand the exact coding style preferences.
+* Read the [editorconfig](https://github.com/facebookresearch/mmf/blob/main/.editorconfig) file to understand the exact coding style preferences.
 
 * Ideally, black and isort should be run via pre-commit hooks.
   But if for some reason you want to run black and isort separately follow this:

‎.github/PULL_REQUEST_TEMPLATE.md

+1 -1

@@ -3,4 +3,4 @@ Thanks for your contribution!
 If you're sending a large PR (e.g., >50 lines), please open an issue first about
 the feature/bug, and indicate how you want to contribute.
 
-Use [contributing guidelines](https://github.com/facebookresearch/mmf/tree/master/.github/CONTRIBUTING.md) before opening up the PR to follow MMF style guidelines.
+Use [contributing guidelines](https://github.com/facebookresearch/mmf/tree/main/.github/CONTRIBUTING.md) before opening up the PR to follow MMF style guidelines.

‎.github/workflows/deploy_website.yaml

+6 -6

@@ -3,21 +3,21 @@ name: Deploy Website 🚀
 on:
   push:
     branches:
-      - master
+      - main
 jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - name: Checkout master 🛎️
+      - name: Checkout main 🛎️
         uses: actions/checkout@v2
         # If you're using actions/checkout@v2 you must set persist-credentials to false in most
         # cases for the deployment to work correctly.
         with:
           persist-credentials: false
           # 0 here indicates fetch all history for proper doc setup
           fetch-depth: 0
-          ref: master
-          path: mmf_master
+          ref: main
+          path: mmf_main
 
       - name: Setup Miniconda
         uses: conda-incubator/setup-miniconda@v2
@@ -53,7 +53,7 @@
         shell: bash -l {0}
         run: |
           conda activate mmf
-          cd ${GITHUB_WORKSPACE}/mmf_master
+          cd ${GITHUB_WORKSPACE}/mmf_main
           python setup.py install
           python -c 'import torch; print("Torch version:", torch.__version__)'
           python -m torch.utils.collect_env
@@ -74,4 +74,4 @@
         uses: peaceiris/actions-gh-pages@v3
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
-          publish_dir: ./mmf_master/website/build
+          publish_dir: ./mmf_main/website/build

‎.pre-commit-config.yaml

+2

@@ -10,6 +10,8 @@ repos:
   - id: trailing-whitespace
   - id: check-ast
   - id: check-merge-conflict
+  - id: no-commit-to-branch
+    args: ['--branch=main']
   - id: no-commit-to-branch
     args: ['--branch=master']
   - id: check-added-large-files

‎README.md

+1 -1

@@ -18,7 +18,7 @@
 
 MMF is a modular framework for vision and language multimodal research from Facebook AI Research. MMF contains reference implementations of state-of-the-art vision and language models and has powered multiple research projects at Facebook AI Research. See the full list of projects inside or built on MMF [here](https://mmf.sh/docs/notes/projects).
 
-MMF is powered by PyTorch, allows distributed training and is un-opinionated, scalable and fast. Use MMF to **_bootstrap_** your next vision and language multimodal research project by following the [installation instructions](https://mmf.sh/docs/). Take a look at the list of MMF features [here](https://mmf.sh/docs/getting_started/features).
+MMF is powered by PyTorch, allows distributed training and is un-opinionated, scalable and fast. Use MMF to **_bootstrap_** your next vision and language multimodal research project by following the [installation instructions](https://mmf.sh/docs/getting_started/installation). Take a look at the list of MMF features [here](https://mmf.sh/docs/getting_started/features).
 
 MMF also acts as **starter codebase** for challenges around vision and
 language datasets (The Hateful Memes, TextVQA, TextCaps and VQA challenges). MMF was formerly known as Pythia. The next video shows an overview of how datasets and models work inside MMF. Check out MMF's [video overview](https://mmf.sh/docs/getting_started/video_overview).

‎docs/source/_templates/theme_variables.jinja

+2 -2

@@ -1,8 +1,8 @@
 {%-
   set external_urls = {
     'github': 'https://github.com/facebookresearch/mmf',
-    'github_issues': 'https://github.com/facebookresearch/master/issues',
-    'contributing': 'https://github.com/facebookresearch/master/blob/master/CONTRIBUTING.md',
+    'github_issues': 'https://github.com/facebookresearch/mmf/issues',
+    'contributing': 'https://github.com/facebookresearch/mmf/blob/main/CONTRIBUTING.md',
     'api': 'https://mmf.sh/api',
     'docs': 'https://mmf.sh/docs',
     'previous_pytorch_versions': 'https://mmf.sh/previous-versions/',

‎docs/source/conf.py

+6 -6

@@ -65,8 +65,8 @@
 source_suffix = [".rst", ".md"]
 # source_suffix = '.rst'
 
-# The master toctree document.
-master_doc = "index"
+# The main toctree document.
+main_doc = "index"
 
 # General information about the project.
 project = "MMF"
@@ -167,15 +167,15 @@
 # (source start file, target name, title,
 # author, documentclass [howto, manual, or own class]).
 latex_documents = [
-    (master_doc, "mmf.tex", "MMF Documentation", "Facebook AI Research", "manual")
+    (main_doc, "mmf.tex", "MMF Documentation", "Facebook AI Research", "manual")
 ]
 
 
 # -- Options for manual page output ---------------------------------------
 
 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "mmf", "MMF Documentation", [author], 1)]
+man_pages = [(main_doc, "mmf", "MMF Documentation", [author], 1)]
 
 
 # -- Options for Texinfo output -------------------------------------------
@@ -185,7 +185,7 @@
 # dir menu entry, description, category)
 texinfo_documents = [
     (
-        master_doc,
+        main_doc,
         "mmf",
         "MMF Documentation",
         author,
@@ -195,7 +195,7 @@
     )
 ]
 
-github_doc_root = "https://github.com/facebookresearch/mmf/tree/master"
+github_doc_root = "https://github.com/facebookresearch/mmf/tree/main"
 
 
 # At the bottom of conf.py

‎docs/source/index.rst

+1 -1

@@ -1,4 +1,4 @@
-.. mmf documentation master file, created by
+.. mmf documentation main file, created by
    sphinx-quickstart on Tue Apr 23 10:42:55 2019.
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.

‎mmf/common/test_reporter.py

+2 -2

@@ -11,7 +11,7 @@
 from mmf.common.registry import registry
 from mmf.common.sample import convert_batch_to_sample_list
 from mmf.utils.configuration import get_mmf_env
-from mmf.utils.distributed import gather_tensor, is_master
+from mmf.utils.distributed import gather_tensor, is_main
 from mmf.utils.file_io import PathManager
 from mmf.utils.general import ckpt_name_from_core_args, foldername_from_config_override
 from mmf.utils.logger import log_class_usage
@@ -115,7 +115,7 @@ def next_dataset(self, flush_report=True):
         return True
 
     def flush_report(self):
-        if not is_master():
+        if not is_main():
             # Empty report in all processes to avoid any leaks
             self.report = []
             return

‎mmf/datasets/base_dataset_builder.py

+7 -8

@@ -32,14 +32,13 @@ def load(self, config, dataset_type, *args, **kwargs):
     def build(self, config, dataset_type, *args, **kwargs):
         ...
 
-.. _here: https://github.com/facebookresearch/mmf/blob/master/mmf/datasets/vqa/vqa2/builder.py
+.. _here: https://github.com/facebookresearch/mmf/blob/main/mmf/datasets/vqa/vqa2/builder.py
 """
 import uuid
 from typing import Optional
 
 import pytorch_lightning as pl
 from mmf.utils.build import build_dataloader_and_sampler
-from mmf.utils.distributed import is_master, synchronize
 from mmf.utils.logger import log_class_usage
 from omegaconf import DictConfig
 from torch.utils.data import Dataset
@@ -77,14 +76,14 @@ def dataset_name(self, dataset_name):
 
     def prepare_data(self, config, *args, **kwargs):
         """
-        NOTE: The caller to this function should only call this on the master process
+        NOTE: The caller to this function should only call this on the main process
         in a distributed setting so that downloads and builds only happen
-        on the master process and others can just load it. Make sure to call
+        on the main process and others can just load it. Make sure to call
         synchronize afterwards to bring all processes in sync.
 
         Lightning automatically wraps datamodule in a way that it is only
-        called on a master node, but for extra precaution as lightning
-        can introduce bugs, we should always call this under the master process
+        called on a main node, but for extra precaution as lightning
+        can introduce bugs, we should always call this under the main process
         with extra checks on our side as well.
         """
         self.config = config
@@ -129,9 +128,9 @@ def build_dataset(self, config, dataset_type="train", *args, **kwargs):
         time when it is not available. This internally calls 'build' function.
         Override that function in your child class.
 
-        NOTE: The caller to this function should only call this on the master process
+        NOTE: The caller to this function should only call this on the main process
         in a distributed setting so that downloads and builds only happen
-        on the master process and others can just load it. Make sure to call
+        on the main process and others can just load it. Make sure to call
         synchronize afterwards to bring all processes in sync.
 
         Args:
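The docstrings above codify a convention used throughout MMF's distributed code: downloads and builds run only on the main process while every other rank waits at a barrier before loading the results. A minimal sketch of that pattern, using the renamed helpers from `mmf.utils.distributed` (`prepare_datamodule` itself is a hypothetical wrapper, not MMF API):

```python
from mmf.utils.distributed import is_main, synchronize


def prepare_datamodule(datamodule, dataset_config):
    """Hypothetical wrapper illustrating the NOTE in prepare_data() above."""
    if is_main():
        # Only the main process downloads and builds dataset artifacts.
        datamodule.prepare_data(dataset_config)
    # Every rank blocks here until the main process finishes, after
    # which non-main ranks can safely load the prepared artifacts.
    synchronize()
```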

‎mmf/datasets/builders/clevr/dataset.py

+2 -3

@@ -3,10 +3,9 @@
 
 import numpy as np
 import torch
-from mmf.common.registry import registry
 from mmf.common.sample import Sample
 from mmf.datasets.base_dataset import BaseDataset
-from mmf.utils.distributed import is_master, synchronize
+from mmf.utils.distributed import is_main, synchronize
 from mmf.utils.general import get_mmf_root
 from mmf.utils.text import VocabFromText, tokenize
 from PIL import Image
@@ -81,7 +80,7 @@ def load(self):
             self.questions = json.load(f)[_CONSTANTS["questions_key"]]
 
         # Vocab should only be built in the main process, as it would otherwise be a repetition of the same task
-        if is_master():
+        if is_main():
             self._build_vocab(self.questions, _CONSTANTS["question_key"])
             self._build_vocab(self.questions, _CONSTANTS["answer_key"])
         synchronize()

‎mmf/datasets/builders/okvqa/dataset.py

-2

@@ -2,13 +2,11 @@
 from typing import Type, Union
 
 import torch
-import tqdm
 from mmf.common.sample import Sample
 from mmf.common.typings import MMFDatasetConfigType
 from mmf.datasets.builders.okvqa.database import OKVQAAnnotationDatabase
 from mmf.datasets.mmf_dataset import MMFDataset
 from mmf.datasets.processors import GraphVQAAnswerProcessor
-from mmf.utils.distributed import is_master
 
 
 class OKVQADataset(MMFDataset):

‎mmf/datasets/builders/vqa2/dataset.py

+2 -2

@@ -5,7 +5,7 @@
 import tqdm
 from mmf.common.sample import Sample
 from mmf.datasets.mmf_dataset import MMFDataset
-from mmf.utils.distributed import is_master
+from mmf.utils.distributed import is_main
 
 
 logger = logging.getLogger(__name__)
@@ -42,7 +42,7 @@ def try_fast_read(self):
         )
         self.cache = {}
         for idx in tqdm.tqdm(
-            range(len(self.annotation_db)), miniters=100, disable=not is_master()
+            range(len(self.annotation_db)), miniters=100, disable=not is_main()
        ):
             self.cache[idx] = self.load_item(idx)
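Several hunks in this commit touch the same progress-bar idiom: `tqdm` output is enabled only on the main process, so multi-rank jobs print a single bar instead of one per rank. A standalone sketch of the idiom, assuming only the renamed `is_main` helper shown above:

```python
import tqdm

from mmf.utils.distributed import is_main

# Only the main rank renders the bar; all other ranks iterate silently.
for idx in tqdm.tqdm(range(1000), miniters=100, disable=not is_main()):
    pass  # per-item work, e.g. cache[idx] = load_item(idx)
```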

‎mmf/datasets/databases/features_database.py

+2 -2

@@ -5,7 +5,7 @@
 import tqdm
 from mmf.datasets.databases.image_database import ImageDatabase
 from mmf.datasets.databases.readers.feature_readers import FeatureReader
-from mmf.utils.distributed import is_master
+from mmf.utils.distributed import is_main
 from mmf.utils.general import get_absolute_path
 
 
@@ -47,7 +47,7 @@ def _threaded_read(self):
         elements = [idx for idx in range(1, len(self.annotation_db))]
         pool = ThreadPool(processes=4)
 
-        with tqdm.tqdm(total=len(elements), disable=not is_master()) as pbar:
+        with tqdm.tqdm(total=len(elements), disable=not is_main()) as pbar:
             for i, _ in enumerate(pool.imap_unordered(self._fill_cache, elements)):
                 if i % 100 == 0:
                     pbar.update(100)

‎mmf/datasets/multi_dataset_loader.py

+3 -4

@@ -7,7 +7,6 @@
 import warnings
 from typing import Dict, Iterator
 
-import numpy as np
 import torch
 from mmf.common.sample import SampleList, convert_batch_to_sample_list
 from mmf.datasets import iteration_strategies
@@ -17,7 +16,7 @@
     broadcast_scalar,
     get_world_size,
     is_dist_initialized,
-    is_master,
+    is_main,
     is_xla,
 )
 from mmf.utils.general import get_batch_size, get_current_device
@@ -47,7 +46,7 @@ def __init__(
 
         self._iteration_strategy = iteration_strategy
         self._loaders = loaders
-        self._is_master = is_master()
+        self._is_main = is_main()
         self._num_datasets = len(self.loaders)
         self.dataset_list = list(loaders.keys())
         self._iterators = {}
@@ -230,7 +229,7 @@ def change_dataloader(self):
             self.current_index = choice
             return
 
-        if self._is_master:
+        if self._is_main:
             choice = self.iteration_strategy()
 
         # self._finished_iterators will always be empty in case of
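In `change_dataloader` above, only the main process samples which dataset to draw from next; the `broadcast_scalar` import in the same hunk is what shares that choice with the remaining ranks so every process steps the same dataloader. A minimal sketch of the coordination, assuming `broadcast_scalar(scalar, src, device)` behaves as its name suggests (`pick_next_dataset` is a hypothetical helper):

```python
from mmf.utils.distributed import broadcast_scalar, is_main


def pick_next_dataset(iteration_strategy, device):
    """Hypothetical helper: rank 0 draws the choice, all ranks receive it."""
    choice = 0
    if is_main():
        choice = iteration_strategy()
    return broadcast_scalar(choice, src=0, device=device)
```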

‎mmf/datasets/processors/processors.py

+9 -8

@@ -82,7 +82,7 @@ def __call__(self, item, *args, **kwargs):
 from mmf.common.registry import registry
 from mmf.common.typings import ProcessorConfigType
 from mmf.utils.configuration import get_mmf_cache_dir, get_mmf_env
-from mmf.utils.distributed import is_master, synchronize
+from mmf.utils.distributed import is_main, synchronize
 from mmf.utils.file_io import PathManager
 from mmf.utils.logger import log_class_usage
 from mmf.utils.text import VocabDict
@@ -421,15 +421,15 @@ def __init__(self, config, *args, **kwargs):
         self._try_download()
 
     def _try_download(self):
-        _is_master = is_master()
+        _is_main = is_main()
 
         if self._already_downloaded:
             return
 
         needs_download = False
 
         if not hasattr(self.config, "model_file"):
-            if _is_master:
+            if _is_main:
                 warnings.warn(
                     "'model_file' key is required but missing "
                     "from FastTextProcessor's config."
@@ -442,7 +442,7 @@ def _try_download(self):
         model_file = os.path.join(get_mmf_cache_dir(), model_file)
 
         if not PathManager.exists(model_file):
-            if _is_master:
+            if _is_main:
                 warnings.warn(f"No model file present at {model_file}.")
             needs_download = True
 
@@ -455,11 +455,11 @@ def _try_download(self):
         synchronize()
 
     def _download_model(self):
-        _is_master = is_master()
+        _is_main = is_main()
 
         model_file_path = os.path.join(get_mmf_cache_dir(), "wiki.en.bin")
 
-        if not _is_master:
+        if not _is_main:
             return model_file_path
 
         if PathManager.exists(model_file_path):
@@ -477,7 +477,7 @@ def _download_model(self):
         pbar = tqdm(
             total=int(response.headers["Content-Length"]) / 4096,
             miniters=50,
-            disable=not _is_master,
+            disable=not _is_main,
         )
 
         idx = 0
@@ -726,7 +726,8 @@ class GraphVQAAnswerProcessor(BaseProcessor):
     "answers" or "answers_tokens". "answers" are preprocessed to generate
     "answers_tokens" if passed.
 
-    This version also takes a graph vocab and predicts a main and graph stream simultaneously
+    This version also takes a graph vocab and predicts a main
+    and graph stream simultaneously
 
     Args:
         config (DictConfig): Configuration for the processor

‎mmf/trainers/core/evaluation_loop.py

+2 -2

@@ -10,7 +10,7 @@
 from mmf.common.meter import Meter
 from mmf.common.report import Report
 from mmf.common.sample import to_device
-from mmf.utils.distributed import gather_tensor, is_master, is_xla
+from mmf.utils.distributed import gather_tensor, is_main, is_xla
 
 
 logger = logging.getLogger(__name__)
@@ -28,7 +28,7 @@ def evaluation_loop(
 
         with torch.no_grad():
             self.model.eval()
-            disable_tqdm = not use_tqdm or not is_master()
+            disable_tqdm = not use_tqdm or not is_main()
             while reporter.next_dataset(flush_report=False):
                 dataloader = reporter.get_dataloader()
                 combined_report = None

‎mmf/utils/build.py

+4 -4

@@ -18,7 +18,7 @@
 )
 from mmf.datasets.processors.processors import Processor
 from mmf.utils.configuration import Configuration, get_global_config
-from mmf.utils.distributed import is_dist_initialized, is_master, is_xla, synchronize
+from mmf.utils.distributed import is_dist_initialized, is_main, is_xla, synchronize
 from mmf.utils.general import get_optimizer_parameters
 from omegaconf import DictConfig, OmegaConf
 from packaging import version
@@ -96,7 +96,7 @@ def build_lightning_model(
     https://github.com/PyTorchLightning/pytorch-lightning/issues/5410
     """
 
-    if is_master():
+    if is_main():
         model_class.load_requirements(model_class, config=config)
     model = model_class.load_from_checkpoint(
         checkpoint_path, config=config, strict=False
@@ -140,7 +140,7 @@ def build_model(
     now other cores can proceed to build the model
     using already downloaded checkpoint.
     """
-    if is_master():
+    if is_main():
         model_class.load_requirements(model_class, config=config)
     model.build()
     synchronize()
@@ -250,7 +250,7 @@ def build_multiple_datamodules(
         )
         dataset_config = OmegaConf.create()
 
-        if is_master():
+        if is_main():
             datamodule_instance.prepare_data(dataset_config)
 
         synchronize()

‎mmf/utils/checkpoint.py

+9 -9

@@ -12,7 +12,7 @@
 from mmf.common.registry import registry
 from mmf.utils.checkpoint_updater import get_pretrained_state_mapping_checkpoint
 from mmf.utils.configuration import get_mmf_env, load_yaml
-from mmf.utils.distributed import is_master, is_xla, open_if_master, synchronize
+from mmf.utils.distributed import is_main, is_xla, open_if_main, synchronize
 from mmf.utils.download import download_pretrained_model
 from mmf.utils.file_io import PathManager
 from mmf.utils.general import get_current_device, updir
@@ -208,7 +208,7 @@ def __init__(self, trainer):
         self.saved_iterations = []
 
     def save_config(self):
-        if not is_master():
+        if not is_main():
             return
 
         cfg_file = os.path.join(self.ckpt_foldername, "config.yaml")
@@ -498,7 +498,7 @@ def save(self, update, iteration=None, update_best=False):
         # Which ensures that actual checkpoint saving happens
         # only for the master node.
         # The method also takes care of all the necessary synchronization
-        if not is_master() and not is_xla():
+        if not is_main() and not is_xla():
             return
 
         logger.info("Checkpoint save operation started!")
@@ -560,23 +560,23 @@ def save(self, update, iteration=None, update_best=False):
         git_metadata_dict = self._get_vcs_fields()
         ckpt.update(git_metadata_dict)
 
-        with open_if_master(ckpt_filepath, "wb") as f:
+        with open_if_main(ckpt_filepath, "wb") as f:
             self.save_func(ckpt, f)
 
         if update_best:
             logger.info("Saving best checkpoint")
-            with open_if_master(best_ckpt_filepath, "wb") as f:
+            with open_if_main(best_ckpt_filepath, "wb") as f:
                 self.save_func(ckpt, f)
 
         # Save current always
 
         logger.info("Saving current checkpoint")
-        with open_if_master(current_ckpt_filepath, "wb") as f:
+        with open_if_main(current_ckpt_filepath, "wb") as f:
             self.save_func(ckpt, f)
 
         # Remove old checkpoints if max_to_keep is set
         # In XLA, only delete checkpoint files in main process
-        if self.max_to_keep > 0 and is_master():
+        if self.max_to_keep > 0 and is_main():
             if len(self.saved_iterations) == self.max_to_keep:
                 self.remove(self.saved_iterations.pop(0))
             self.saved_iterations.append(update)
@@ -597,6 +597,6 @@ def restore(self):
         self._load(best_path, force=True)
 
     def finalize(self):
-        if is_master() or is_xla():
-            with open_if_master(self.pth_filepath, "wb") as f:
+        if is_main() or is_xla():
+            with open_if_main(self.pth_filepath, "wb") as f:
                 self.save_func(self.trainer.model.state_dict(), f)
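`open_if_main` (introduced in the `mmf/utils/distributed.py` hunk further down) is what makes these `with` blocks safe to reach from any rank: it returns a real file handle on the main process and a `contextlib.nullcontext()` everywhere else. A sketch of the idiom, with `finalize_model` as a hypothetical standalone version of `finalize()` above:

```python
import torch

from mmf.utils.distributed import is_main, is_xla, open_if_main


def finalize_model(model, pth_filepath, save_func=torch.save):
    """Hypothetical standalone version of Checkpoint.finalize() above."""
    # Only the main process (or an XLA run) writes the final weights.
    if is_main() or is_xla():
        with open_if_main(pth_filepath, "wb") as f:
            save_func(model.state_dict(), f)
```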

‎mmf/utils/distributed.py

+14 -6

@@ -98,6 +98,10 @@ def get_rank():
     return dist.get_rank()
 
 
+def is_main():
+    return is_master()
+
+
 def is_master():
     return get_rank() == 0
 
@@ -380,20 +384,20 @@ def distributed_init(config):
         # perform a dummy all-reduce to initialize the NCCL communicator
         dist.all_reduce(torch.zeros(1).cuda())
 
-    suppress_output(is_master())
+    suppress_output(is_main())
     config.distributed.rank = dist.get_rank()
     return config.distributed.rank
 
 
-def suppress_output(is_master):
+def suppress_output(is_main):
     """Suppress printing on the current device. Force printing with `force=True`."""
     import builtins as __builtin__
 
     builtin_print = __builtin__.print
 
     def print(*args, **kwargs):
         force = kwargs.pop("force", False)
-        if is_master or force:
+        if is_main or force:
             builtin_print(*args, **kwargs)
 
     __builtin__.print = print
@@ -404,7 +408,7 @@ def print(*args, **kwargs):
 
     def warn(*args, **kwargs):
         force = kwargs.pop("force", False)
-        if is_master or force:
+        if is_main or force:
             builtin_warn(*args, **kwargs)
 
     # Log warnings only once
@@ -415,20 +419,24 @@ def warn(*args, **kwargs):
 def open_if_master(path, mode):
     from mmf.utils.file_io import PathManager
 
-    if is_master():
+    if is_main():
         return PathManager.open(path, mode)
     else:
         return contextlib.nullcontext()
 
 
+def open_if_main(*args):
+    return open_if_master(*args)
+
+
 def broadcast_xla_master_model_param(model):
     logger.info("Broadcasting XLA model parameters and buffers from master process ...")
 
     parameters_and_buffers = []
     for p in chain(model.parameters(), model.buffers()):
         # Set all params in non-master devices to zero so that all_reduce is equivalent
         # to broadcasting parameters from master to other devices.
-        if not is_master():
+        if not is_main():
             zero = torch.tensor(0, dtype=p.data.dtype, device=p.data.device)
             p.data.mul_(zero)
         parameters_and_buffers.append(p.data)
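Note the migration pattern in this file: instead of renaming `is_master` and `open_if_master` outright, the commit adds `is_main` and `open_if_main` as thin wrappers, so existing call sites keep working while the rest of the codebase moves to the new names. A sketch of the same aliasing pattern with an explicit deprecation signal; the warning is an illustration of where such a migration can go next, not something this commit adds:

```python
import warnings

from mmf.utils.distributed import get_rank


def is_main():
    # New canonical name: true only on rank 0.
    return get_rank() == 0


def is_master():
    # Legacy alias kept for backward compatibility; nudge callers
    # toward is_main(). (Hypothetical warning, for illustration only.)
    warnings.warn("is_master() is deprecated, use is_main()", DeprecationWarning)
    return is_main()
```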

‎mmf/utils/early_stopping.py

+2 -2

@@ -1,7 +1,7 @@
 # Copyright (c) Facebook, Inc. and its affiliates.
 import numpy as np
 import torch
-from mmf.utils.distributed import is_master, is_xla
+from mmf.utils.distributed import is_main, is_xla
 
 
 class EarlyStopping:
@@ -49,7 +49,7 @@ def __call__(self, update, iteration, meter):
         # There are operations involving synchronization downstream
         # For XLA those calls must be executed from all cores
         # Therefore we do return here in case of XLA
-        if not is_master() and not is_xla():
+        if not is_main() and not is_xla():
             return False
 
         value = meter.meters.get(self.early_stop_criteria, None)

‎mmf/utils/logger.py

+8 -8

@@ -13,7 +13,7 @@
 import torch
 from mmf.common.registry import registry
 from mmf.utils.configuration import get_mmf_env
-from mmf.utils.distributed import get_rank, is_master, is_xla
+from mmf.utils.distributed import get_rank, is_main, is_xla
 from mmf.utils.file_io import PathManager
 from mmf.utils.timer import Timer
 from termcolor import colored
@@ -222,7 +222,7 @@ def summarize_report(
 ):
     if extra is None:
         extra = {}
-    if not is_master() and not is_xla():
+    if not is_main() and not is_xla():
         return
 
     if tb_writer:
@@ -309,15 +309,15 @@ def log_class_usage(component_type, klass):
 
 def skip_if_tensorboard_inactive(fn: Callable) -> Callable:
     """
-    Checks whether summary writer is initialized and rank is 0 (master)
+    Checks whether summary writer is initialized and rank is 0 (main)
     Args:
         fn (Callable): Function which should be called based on whether
         tensorboard should log or not
     """
 
     @wraps(fn)
     def wrapped_fn(self, *args: Any, **kwargs: Any) -> Optional[Any]:
-        if self.summary_writer is None or not self._is_master:
+        if self.summary_writer is None or not self._is_main:
             return None
         else:
             return fn(self, *args, **kwargs)
@@ -344,7 +344,7 @@ def formatMessage(self, record):
 class TensorboardLogger:
     def __init__(self, log_folder="./logs", iteration=0):
         self._summary_writer = None
-        self._is_master = is_master()
+        self._is_main = is_main()
         self.timer = Timer()
         self.log_folder = log_folder
         self.time_format = "%Y-%m-%dT%H:%M:%S"
@@ -356,7 +356,7 @@ def __init__(self, log_folder="./logs", iteration=0):
     @property
     def summary_writer(self):
         # Only on rank zero
-        if not self._is_master:
+        if not self._is_main:
             return None
 
         if self._summary_writer is None:
@@ -431,7 +431,7 @@ def setup(self):
         """
         Setup `Weights and Biases` for logging.
         """
-        if is_master():
+        if is_main():
 
             if self._wandb.run is None:
                 self._wandb.init(**self._wandb_init)
@@ -448,7 +448,7 @@ def __del__(self):
         self._wandb.finish()
 
     def _should_log_wandb(self):
-        if self._wandb is None or not is_master():
+        if self._wandb is None or not is_main():
             return False
         else:
             return True
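The `skip_if_tensorboard_inactive` decorator above turns every TensorBoard logging method into a silent no-op on non-main ranks or when no writer exists, so call sites need no rank checks of their own. A usage sketch; the `RankAwareLogger` class and its `add_scalar` method are illustrative, not a claim about `TensorboardLogger`'s exact API:

```python
from mmf.utils.logger import skip_if_tensorboard_inactive


class RankAwareLogger:
    def __init__(self, writer, is_main_process):
        # Mirror the attributes the decorator inspects.
        self.summary_writer = writer if is_main_process else None
        self._is_main = is_main_process

    @skip_if_tensorboard_inactive
    def add_scalar(self, tag, value, iteration):
        # Executes only on the main rank with an active writer; on any
        # other rank the decorator returns None without touching TB.
        self.summary_writer.add_scalar(tag, value, iteration)
```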

‎mmf/utils/vocab.py

+3 -3

@@ -6,7 +6,7 @@
 import numpy as np
 import torch
 from mmf.utils.configuration import get_mmf_cache_dir
-from mmf.utils.distributed import is_master, synchronize
+from mmf.utils.distributed import is_main, synchronize
 from mmf.utils.file_io import PathManager
 from mmf.utils.general import get_absolute_path
 from torchtext import vocab
@@ -294,7 +294,7 @@ def __init__(self, vocab_file, embedding_name, *args, **kwargs):
 
         # First test loading the vectors in master so that everybody doesn't
         # download it in case it doesn't exist
-        if is_master():
+        if is_main():
             vocab.pretrained_aliases[embedding_name](cache=vector_cache)
         synchronize()
 
@@ -342,7 +342,7 @@ def __init__(self, embedding_name, *args, **kwargs):
 
         # First test loading the vectors in master so that everybody doesn't
         # download it in case it doesn't exist
-        if is_master():
+        if is_main():
             vocab.pretrained_aliases[embedding_name](cache=vector_cache)
         synchronize()

‎mmf/utils/xla.py

+3 -3

@@ -1,7 +1,7 @@
 # Copyright (c) Facebook, Inc. and its affiliates.
 
 import torch
-from mmf.utils.distributed import is_master
+from mmf.utils.distributed import is_main
 
 
 try:
@@ -16,10 +16,10 @@ def save_xla_ckpt(ckpt, file_or_path):
     checkpoint to CPU, since they hold PyTorch tensors. Other items like lr_scheduler
     often cannot be saved with xm.save due to its errors in handling mappingproxy.
 
-    Only save on the global master process (which is different from the default behavior
+    Only save on the global main process (which is different from the default behavior
     of xm.save that saves a checkpoint on each node).
     """
-    should_write_data = is_master()
+    should_write_data = is_main()
 
     is_full_ckpt = isinstance(ckpt, dict) and "model" in ckpt and "optimizer" in ckpt
     if is_full_ckpt:

‎projects/hateful_memes/README.md

+2 -2

@@ -14,7 +14,7 @@ Please cite the following paper if you use these models and the hateful memes dataset
   year={2020}
 }
 ```
-* [Citation for MMF](https://github.com/facebookresearch/mmf/tree/master/README.md#citation)
+* [Citation for MMF](https://github.com/facebookresearch/mmf/tree/main/README.md#citation)
 
 Links: [[arxiv]](https://arxiv.org/abs/2005.04790) [[challenge]](https://www.drivendata.org/competitions/70/hateful-memes-phase-2/) [[blog post]](https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
 
@@ -54,7 +54,7 @@ In the table, we provide configuration corresponding to each of the baselines in
 | Visual BERT COCO | visual_bert | visual_bert.finetuned.hateful_memes.from_coco | projects/hateful_memes/configs/visual_bert/from_coco.yaml |
 
 
-For individual baselines and their proper citation have a look at their project pages: [[Visual BERT]](https://github.com/facebookresearch/mmf/tree/master/projects/visual_bert) [[VilBERT]](https://github.com/facebookresearch/mmf/tree/master/projects/vilbert) [[MMBT]](https://github.com/facebookresearch/mmf/tree/master/projects/mmbt)
+For individual baselines and their proper citation have a look at their project pages: [[Visual BERT]](https://github.com/facebookresearch/mmf/tree/main/projects/visual_bert) [[VilBERT]](https://github.com/facebookresearch/mmf/tree/main/projects/vilbert) [[MMBT]](https://github.com/facebookresearch/mmf/tree/main/projects/mmbt)
 
 ## Training

‎projects/hateful_memes/fine_grained/README.md

+3 -3

@@ -6,13 +6,13 @@ This folder contains configs required to reproduce results and baselines for [20
 
 Install MMF following the [installation docs](https://mmf.sh/docs/getting_started/installation/).
 
-To acquire the hateful memes data, follow the instructions [here](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes).
+To acquire the hateful memes data, follow the instructions [here](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes).
 
-The additional fine grained labels can be found [here](https://github.com/facebookresearch/fine_grained_hateful_memes/tree/master/data).
+The additional fine grained labels can be found [here](https://github.com/facebookresearch/fine_grained_hateful_memes/tree/main/data).
 
 
 ## Reproducing Baselines
-We provide the configuration file to reproduce the baseline results we have in the [GitHub repo](https://github.com/facebookresearch/fine_grained_hateful_memes). The instructions for training and evaluation are the same as [hateful memes](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes). The output format is different and specified in the [GitHub repo](https://github.com/facebookresearch/fine_grained_hateful_memes).
+We provide the configuration file to reproduce the baseline results we have in the [GitHub repo](https://github.com/facebookresearch/fine_grained_hateful_memes). The instructions for training and evaluation are the same as [hateful memes](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes). The output format is different and specified in the [GitHub repo](https://github.com/facebookresearch/fine_grained_hateful_memes).
 
 The baselines are all based on VisualBert with image features.

‎website/docs/challenges/hateful_memes_challenge.md

+2 -2

@@ -6,13 +6,13 @@ sidebar_label: Hateful Memes Challenge
 
 The Hateful Memes challenge is available at [this link](https://www.drivendata.org/competitions/70/hateful-memes-phase-2/data/).
 
-In MMF, we provide the starter code and baseline pretrained models for this challenge and the configurations used for training the reported baselines. For more details check [this link](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes).
+In MMF, we provide the starter code and baseline pretrained models for this challenge and the configurations used for training the reported baselines. For more details check [this link](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes).
 
 In this tutorial, we provide steps for running training and evaluation with the MMBT model on the hateful memes dataset and generating a submission file for the challenge. The same steps can be used for your own models.
 
 ## Installation and Preparing the dataset
 
-Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes#prerequisites).
+Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes#prerequisites).
 
 ## Training and Evaluation

‎website/docs/getting_started/installation.mdx

+1 -1

@@ -84,4 +84,4 @@ pytest ./tests/
 
 ## Contributing to MMF
 
-We welcome all contributions to MMF. Have a look at our [contributing guidelines](https://github.com/facebookresearch/mmf/tree/master/.github/CONTRIBUTING.md) to get started.
+We welcome all contributions to MMF. Have a look at our [contributing guidelines](https://github.com/facebookresearch/mmf/tree/main/.github/CONTRIBUTING.md) to get started.

‎website/docs/getting_started/quickstart.md

+1 -1

@@ -4,7 +4,7 @@ title: Quickstart
 sidebar_label: Quickstart
 ---
 
-In this quickstart guide, we are going to train the [M4C](https://github.com/facebookresearch/mmf/tree/master/projects/m4c) model on the TextVQA dataset. TextVQA requires models to read and reason about text in images to answer questions about them. `M4C` is a recent SOTA model on TextVQA which consists of a multimodal transformer architecture accompanied by a rich representation for text in images.
+In this quickstart guide, we are going to train the [M4C](https://github.com/facebookresearch/mmf/tree/main/projects/m4c) model on the TextVQA dataset. TextVQA requires models to read and reason about text in images to answer questions about them. `M4C` is a recent SOTA model on TextVQA which consists of a multimodal transformer architecture accompanied by a rich representation for text in images.
 
 To train other models or understand more about MMF, follow Next Steps at the bottom of this tutorial.

‎website/docs/notes/concepts.md

+7 -7

@@ -17,9 +17,9 @@ To achieve this, MMF has few opinions about the architecture of your research project
 
 ## Datasets
 
-You can find all the latest datasets [here](https://github.com/facebookresearch/mmf/tree/master/mmf/configs/datasets).
+You can find all the latest datasets [here](https://github.com/facebookresearch/mmf/tree/main/mmf/configs/datasets).
 
-The dataset's key is available under the particular dataset's config, i.e., for vizwiz's key, you can look in vizwiz's config available [here](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/datasets/vizwiz/defaults.yaml)
+The dataset's key is available under the particular dataset's config, i.e., for vizwiz's key, you can look in vizwiz's config available [here](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/datasets/vizwiz/defaults.yaml)
 
 ```yaml
 dataset_config:
@@ -41,11 +41,11 @@ Reference implementations for state-of-the-art models have been included to act
 - [Towards VQA Models That Can Read (LoRRA model)](https://arxiv.org/abs/1904.08920)
 - [VQA 2018 Challenge winner](https://arxiv.org/abs/1807.09956)
 - [VizWiz 2018 Challenge winner](https://vizwiz.org/wp-content/uploads/2019/06/workshop2018_slides_FAIR_A-STAR.pdf)
-- [VQA 2020 Challenge winner](https://github.com/facebookresearch/mmf/tree/master/projects/movie_mcan)
+- [VQA 2020 Challenge winner](https://github.com/facebookresearch/mmf/tree/main/projects/movie_mcan)
 
-Similar to datasets, each model has been registered with a unique key for easy reference in configuration and command line arguments. For a more complete list of models, please see [here](https://github.com/facebookresearch/mmf/tree/master/mmf/configs/models)
+Similar to datasets, each model has been registered with a unique key for easy reference in configuration and command line arguments. For a more complete list of models, please see [here](https://github.com/facebookresearch/mmf/tree/main/mmf/configs/models)
 
-The model's key is available under the particular model's config, i.e., for mmf_transformer, the model's config file is available [here](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/models/mmf_transformer/defaults.yaml)
+The model's key is available under the particular model's config, i.e., for mmf_transformer, the model's config file is available [here](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/models/mmf_transformer/defaults.yaml)
 
 ```yaml
 model_config:
@@ -76,11 +76,11 @@ Find more details about the Registry class in its documentation [common/registry](ht
 
 ## Configuration
 
-As is necessary with research, most of the parameters/settings in MMF are configurable. MMF specific default values (`training`) are present in [mmf/configs/defaults.yaml](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/defaults.yaml) with detailed comments delineating the usage of each parameter.
+As is necessary with research, most of the parameters/settings in MMF are configurable. MMF specific default values (`training`) are present in [mmf/configs/defaults.yaml](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/defaults.yaml) with detailed comments delineating the usage of each parameter.
 
 For ease of usage and modularity, configuration for each dataset is kept separately in `mmf/configs/datasets/[dataset]/[variants].yaml` where you can get the `[dataset]` value for the dataset from the tables in the [Datasets](#datasets) section.
 
-The most dynamic part, model configurations, are also kept separate and are the ones which need to be defined by the user if they are creating their own models. We include configurations for the models included in the model zoo of MMF. You can find the model configurations [here](https://github.com/facebookresearch/mmf/tree/master/mmf/configs/models)
+The most dynamic part, model configurations, are also kept separate and are the ones which need to be defined by the user if they are creating their own models. We include configurations for the models included in the model zoo of MMF. You can find the model configurations [here](https://github.com/facebookresearch/mmf/tree/main/mmf/configs/models)
 
 
 It is possible to include other configs into your config using the `includes` directive. Thus, in the MMF config above you can include `lxmert`'s config like this:

‎website/docs/notes/configuration.md

+3 -3

@@ -9,7 +9,7 @@ MMF relies on [OmegaConf](https://omegaconf.readthedocs.io/en/latest/) for its configuration
 **TL;DR**
 
 - MMF uses OmegaConf for its configuration system with some sugar on top.
-- MMF defines a [base defaults config](#base-defaults-config) containing all MMF specific parameters, and then each dataset and model define their own configs (example configs: [[model]](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/models/mmbt/defaults.yaml) [[dataset]](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/datasets/hateful_memes/defaults.yaml)).
+- MMF defines a [base defaults config](#base-defaults-config) containing all MMF specific parameters, and then each dataset and model define their own configs (example configs: [[model]](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/models/mmbt/defaults.yaml) [[dataset]](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/datasets/hateful_memes/defaults.yaml)).
 - The user can define their own config, specified by `config=<x>` at the command line, for each unique experiment or training setup. This has higher priority than the base, model and dataset default configs and can override anything in those.
 - Finally, the user can override (highest priority) the final config generated by the merge of all the above configs by specifying config parameters as a [dotlist](https://omegaconf.readthedocs.io/en/latest/usage.html#from-a-dot-list) in their command. This is the **recommended** way of overriding the config parameters in MMF.
 - How does MMF know which config to pick for dataset and model? The user needs to specify those in their command as `model=x` and `dataset=y`.
@@ -100,7 +100,7 @@ model_config:
 
 ## User Config
 
-The user can specify a configuration specific to an experiment or training setup by adding the `config=<config_path>` argument to their command. A user config can specify, e.g., training parameters according to their experiment, such as batch size using `training.batch_size`. The most common use case for a user config is to specify optimizer, scheduler and training parameters. Other than that, a user config can also include configs for variations of the models and datasets they want to test on. Have a look at an example user config [here](https://github.com/facebookresearch/mmf/blob/master/projects/hateful_memes/configs/mmbt/defaults.yaml).
+The user can specify a configuration specific to an experiment or training setup by adding the `config=<config_path>` argument to their command. A user config can specify, e.g., training parameters according to their experiment, such as batch size using `training.batch_size`. The most common use case for a user config is to specify optimizer, scheduler and training parameters. Other than that, a user config can also include configs for variations of the models and datasets they want to test on. Have a look at an example user config [here](https://github.com/facebookresearch/mmf/blob/main/projects/hateful_memes/configs/mmbt/defaults.yaml).
 
 ## Command Line Dot List Override
 
@@ -210,4 +210,4 @@ MMF supports overriding some of the config parameters through environment variables
 
 ## Base Defaults Config
 
-Have a look at the [defaults config of MMF](https://github.com/facebookresearch/mmf/blob/master/mmf/configs/defaults.yaml) along with the description of parameters from which you may need to override parameters for your experiments.
+Have a look at the [defaults config of MMF](https://github.com/facebookresearch/mmf/blob/main/mmf/configs/defaults.yaml) along with the description of parameters from which you may need to override parameters for your experiments.

‎website/docs/notes/projects.md

+9 -9

@@ -6,13 +6,13 @@ sidebar_label: MMF Projects
 
 MMF contains reference implementations of, or has been used to develop, the following projects (in no particular order):
 
-- Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA [[arXiv](https://arxiv.org/abs/1911.06258)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/m4c)]
-- ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks [[arXiv](https://arxiv.org/abs/1908.02265)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/vilbert)]
+- Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA [[arXiv](https://arxiv.org/abs/1911.06258)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/m4c)]
+- ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks [[arXiv](https://arxiv.org/abs/1908.02265)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/vilbert)]
 - Visualbert: A simple and performant baseline for vision and language [[arXiv](https://arxiv.org/abs/1908.03557)] [[project](https://arxiv.org/abs/1908.03557)]
-- The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes [[arXiv](https://arxiv.org/abs/2005.04790)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes)]
-- Towards VQA Models That Can Read [[arXiv](https://arxiv.org/abs/1904.08920)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/lorra)]
-- TextCaps: a Dataset for Image Captioning with Reading Comprehension [[arXiv](https://arxiv.org/abs/2003.12462)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/m4c_captioner)]
-- Pythia v0.1: the winning entry to the VQA challenge 2018 [[arXiv](https://arxiv.org/abs/1807.09956)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/pythia)]
-- Bottom-up and top-down attention for image captioning and visual question answering [[arXiv](https://arxiv.org/abs/1707.07998)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/butd)]
-- Supervised Multimodal Bitransformers for Classifying Images and Text [[arXiv](https://arxiv.org/abs/1909.02950)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/mmbt)]
-- Are we pretraining it right? Digging deeper into visio-linguistic pretraining [[arXiv](https://arxiv.org/abs/2004.08744)] [[project](https://github.com/facebookresearch/mmf/tree/master/projects/pretrain_vl_right)]
+- The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes [[arXiv](https://arxiv.org/abs/2005.04790)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes)]
+- Towards VQA Models That Can Read [[arXiv](https://arxiv.org/abs/1904.08920)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/lorra)]
+- TextCaps: a Dataset for Image Captioning with Reading Comprehension [[arXiv](https://arxiv.org/abs/2003.12462)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/m4c_captioner)]
+- Pythia v0.1: the winning entry to the VQA challenge 2018 [[arXiv](https://arxiv.org/abs/1807.09956)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/pythia)]
+- Bottom-up and top-down attention for image captioning and visual question answering [[arXiv](https://arxiv.org/abs/1707.07998)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/butd)]
+- Supervised Multimodal Bitransformers for Classifying Images and Text [[arXiv](https://arxiv.org/abs/1909.02950)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/mmbt)]
+- Are we pretraining it right? Digging deeper into visio-linguistic pretraining [[arXiv](https://arxiv.org/abs/2004.08744)] [[project](https://github.com/facebookresearch/mmf/tree/main/projects/pretrain_vl_right)]

‎website/docs/projects/m4c.md

+1 -1

@@ -35,7 +35,7 @@ The released imdbs contain OCR results and normalized bounding boxes (i.e. in the
 
 For the TextVQA dataset, the downloaded file contains both imdbs with the Rosetta-en OCRs (better performance) and imdbs with Rosetta-ml OCRs (same OCR results as in the previous [LoRRA](http://openaccess.thecvf.com/content_CVPR_2019/papers/Singh_Towards_VQA_Models_That_Can_Read_CVPR_2019_paper.pdf) model). Please download the corresponding OCR feature files.
 
-Note that the object Faster R-CNN features are extracted with [`extract_features_vmb.py`](https://github.com/facebookresearch/mmf/blob/master/tools/scripts/features/extract_features_vmb.py) and the OCR Faster R-CNN features are extracted with [`extract_ocr_frcn_feature.py`](https://github.com/facebookresearch/mmf/blob/master/projects/m4c/scripts/extract_ocr_frcn_feature.py).
+Note that the object Faster R-CNN features are extracted with [`extract_features_vmb.py`](https://github.com/facebookresearch/mmf/blob/main/tools/scripts/features/extract_features_vmb.py) and the OCR Faster R-CNN features are extracted with [`extract_ocr_frcn_feature.py`](https://github.com/facebookresearch/mmf/blob/main/projects/m4c/scripts/extract_ocr_frcn_feature.py).
 
 ## Pretrained M4C Models

‎website/docs/tutorials/checkpointing.md

+1 -1

@@ -8,7 +8,7 @@ In this tutorial, we will learn about the different details around finetuning from
 
 ## Pre-requisites and installation
 
-Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes#prerequisites).
+Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes#prerequisites).
 
 ## Finetuning from a pretrained model

‎website/docs/tutorials/concat_bert_tutorial.md

+2 -2

@@ -4,13 +4,13 @@ title: 'Tutorial: Adding a model - Concat BERT'
 sidebar_label: Adding a model - Concat BERT
 ---
 
-In this tutorial, we will go through the step-by-step process of creating a new model using MMF. In this case, we will create a fusion model and train it on the [Hateful Memes dataset](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes).
+In this tutorial, we will go through the step-by-step process of creating a new model using MMF. In this case, we will create a fusion model and train it on the [Hateful Memes dataset](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes).
 
 The fusion model that we will create concatenates embeddings from a text encoder and an image encoder and passes them through a two-layer classifier. MMF provides standard image and text encoders out of the box. For the image encoder, we will use the ResNet152 image encoder and for the text encoder, we will use the BERT-Base encoder.
 
 ## Prerequisites
 
-Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/master/projects/hateful_memes#prerequisites).
+Follow the prerequisites for installation and dataset [here](https://github.com/facebookresearch/mmf/tree/main/projects/hateful_memes#prerequisites).
 
 ## Using MMF to build the model

‎website/docs/tutorials/losses.md

+3 -3

@@ -10,8 +10,8 @@ This is a tutorial on how to add a new loss function to MMF.
 
 MMF is agnostic to the kind of losses that can be added to it.
 Adding a loss requires adding a loss class and adding your new loss to your config yaml.
-For example, the [ConcatBERT](https://github.com/facebookresearch/mmf/blob/master/website/docs/tutorials/concat_bert_tutorial.md) model uses the `cross_entropy` loss when training on the hateful memes dataset.
-The loss class is `CrossEntropyLoss` defined in [mmf/modules/losses.py](https://github.com/facebookresearch/mmf/blob/master/mmf/modules/losses.py)
+For example, the [ConcatBERT](https://github.com/facebookresearch/mmf/blob/main/website/docs/tutorials/concat_bert_tutorial.md) model uses the `cross_entropy` loss when training on the hateful memes dataset.
+The loss class is `CrossEntropyLoss` defined in [mmf/modules/losses.py](https://github.com/facebookresearch/mmf/blob/main/mmf/modules/losses.py)
 The loss key `cross_entropy` is added to the list of losses in the config yaml at [mmf/projects/hateful_memes/configs/concat_bert/defaults.yaml](https://github.com/facebookresearch/mmf/blob/15fa63071bfaed56db43deba871cfec76439c66f/projects/others/concat_bert/hateful_memes/defaults.yaml#L11).
 
 
@@ -62,7 +62,7 @@ For losses with params you can do,
 
 If a loss class is responsible for calculating multiple losses, for example, maybe due to shared calculations, you can return a dictionary of tensors.
 The resulting loss that is optimized is the sum of all losses configured for the model.
-For an example, take a look at the `BCEAndKLLoss` class in [mmf/modules/losses.py](https://github.com/facebookresearch/mmf/blob/master/mmf/modules/losses.py)
+For an example, take a look at the `BCEAndKLLoss` class in [mmf/modules/losses.py](https://github.com/facebookresearch/mmf/blob/main/mmf/modules/losses.py)
 
 ```python
 @registry.register_loss("bce_kl")

‎website/docs/tutorials/metrics.md

+5 -5

@@ -10,8 +10,8 @@ This is a tutorial on how to add a new metric to MMF.
 
 MMF is agnostic to the kind of metrics that can be added to it.
 Adding a metric requires adding a metric class and adding your new metric to your config yaml.
-For example, the [ConcatBERT](https://github.com/facebookresearch/mmf/blob/master/website/docs/tutorials/concat_bert_tutorial.md) model uses the `binary_f1` metric when evaluating on the hateful memes dataset.
-The metric class is `BinaryF1` defined in [mmf/modules/metrics.py](https://github.com/facebookresearch/mmf/blob/master/mmf/modules/metrics.py)
+For example, the [ConcatBERT](https://github.com/facebookresearch/mmf/blob/main/website/docs/tutorials/concat_bert_tutorial.md) model uses the `binary_f1` metric when evaluating on the hateful memes dataset.
+The metric class is `BinaryF1` defined in [mmf/modules/metrics.py](https://github.com/facebookresearch/mmf/blob/main/mmf/modules/metrics.py)
 The metric key `binary_f1` is added to the list of metrics in the config yaml at [mmf/projects/hateful_memes/configs/concat_bert/defaults.yaml](https://github.com/facebookresearch/mmf/blob/15fa63071bfaed56db43deba871cfec76439c66f/projects/others/concat_bert/hateful_memes/defaults.yaml#L28).
 
 
@@ -88,7 +88,7 @@ evaluation:
   - binary_f1
 ```
 
-For metrics that take parameters, your yaml config will specify params. You can also specify a custom key to be assigned to the metric. For [example](https://github.com/facebookresearch/mmf/blob/master/projects/unit/configs/vg/single_task.yaml),
+For metrics that take parameters, your yaml config will specify params. You can also specify a custom key to be assigned to the metric. For [example](https://github.com/facebookresearch/mmf/blob/main/projects/unit/configs/vg/single_task.yaml),
 
 ```yaml
 evaluation:
@@ -104,7 +104,7 @@ evaluation:
 
 ```
 
-If your model uses early stopping, make sure that the early_stop.criteria is added as an evaluation metric. For example the [vizwiz](https://github.com/facebookresearch/mmf/blob/master/projects/ban/configs/vizwiz/defaults.yaml) config,
+If your model uses early stopping, make sure that the early_stop.criteria is added as an evaluation metric. For example the [vizwiz](https://github.com/facebookresearch/mmf/blob/main/projects/ban/configs/vizwiz/defaults.yaml) config,
 
 ```yaml
 evaluation:
@@ -121,7 +121,7 @@ training:
 
 If a metric class is responsible for calculating multiple metrics, for example, maybe due to shared calculations, you can return a dictionary of tensors.
 
-For an example, take a look at the `BinaryF1PrecisionRecall` class in [mmf/modules/metrics.py](https://github.com/facebookresearch/mmf/blob/master/mmf/modules/metrics.py)
+For an example, take a look at the `BinaryF1PrecisionRecall` class in [mmf/modules/metrics.py](https://github.com/facebookresearch/mmf/blob/main/mmf/modules/metrics.py)
 
 ```python
 @registry.register_metric("f1_precision_recall")

‎website/docs/tutorials/slurm.md

+1 -1

@@ -82,7 +82,7 @@ Add `--dry_run` argument to first print out what exactly is going to be run with
 
 :::
 
-An actual complex sweep config for visual bert with more options can be found at [./tools/sweeps/sweep_visual_bert.py](https://github.com/facebookresearch/mmf/blob/master/tools/sweeps/sweep_visual_bert.py). Command following the above command to run it:
+An actual complex sweep config for visual bert with more options can be found at [./tools/sweeps/sweep_visual_bert.py](https://github.com/facebookresearch/mmf/blob/main/tools/sweeps/sweep_visual_bert.py). Command following the above command to run it:
 
 ```sh
 python tools/sweeps/sweep_visual_bert.py \

‎website/docusaurus.config.js

+1 -1

@@ -111,7 +111,7 @@ module.exports = {
         showLastUpdateTime: true,
         editUrl: fbContent({
           internal: 'https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/faim/mmf/website',
-          external: 'https://github.com/facebookresearch/mmf/edit/master/website/'
+          external: 'https://github.com/facebookresearch/mmf/edit/main/website/'
         }),
       },
       theme: {
