I am running out of memory running the following code. Would anyone be willing to provide insight into which parameters affect compute resources the most when using TorchIO? Specifically wondering about `queue_length`, `samples_per_volume`, `num_workers`, and `batch_size`.

```python
import torch
import torchio as tio
from torch.utils.data import DataLoader

# dataset contains 500 subjects
patch_size = (128, 128, 64)
queue_length = 300
samples_per_volume = 10
sampler = tio.data.UniformSampler(patch_size)
patches_queue = tio.Queue(
    dataset,
    queue_length,
    samples_per_volume,
    sampler,
    num_workers=4,
)
patches_loader = DataLoader(patches_queue, batch_size=16)

num_epochs = 2
model = torch.nn.Identity()
# model = torch.hub.load('fepegar/highresnet', 'highres3dnet', pretrained=True)
for epoch_index in range(num_epochs):
    for patches_batch in patches_loader:
        inputs = patches_batch['image'][tio.DATA]
        targets = patches_batch['label']
        logits = model(inputs)
torch.save(model.state_dict(), './saved_model')
```
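A back-of-envelope calculation with the numbers above (a sketch assuming one float32 image tensor per patch; label patches, metadata, and worker copies add more) shows why `queue_length` and `patch_size` dominate the queue's RAM footprint:

```python
# Rough RAM estimate for the patches queue above.
# Assumption: each queued patch stores one float32 image tensor.
patch_size = (128, 128, 64)
queue_length = 300

voxels_per_patch = patch_size[0] * patch_size[1] * patch_size[2]
bytes_per_patch = voxels_per_patch * 4        # float32 = 4 bytes/voxel
queue_bytes = bytes_per_patch * queue_length  # full queue, images only

print(f'{bytes_per_patch / 2**20:.0f} MiB per patch')   # 4 MiB
print(f'{queue_bytes / 2**30:.2f} GiB for a full queue')  # ~1.17 GiB
```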
Answered by fepegar, Sep 3, 2021:
Hi, @laynr. It might also be related to the way you build your dataset. First, what is the output of `patches_queue.get_max_memory_pretty()` (docs)? How much RAM do you have?
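For context, peak host memory can be sketched as three terms: each loader worker holds roughly one full volume while sampling patches from it, the queue holds up to `queue_length` patches, and the loader assembles `batch_size` patches into a batch. The helper below is illustrative only; the volume shape and float32 dtype are assumptions, not measurements of the asker's data:

```python
# Hypothetical helper showing how each parameter scales RAM.
# The 256x256x160 average volume shape is an assumption for illustration.
def estimate_peak_bytes(
    volume_shape=(256, 256, 160),  # assumed average volume size
    patch_size=(128, 128, 64),
    queue_length=300,
    num_workers=4,
    batch_size=16,
    bytes_per_voxel=4,  # float32
):
    volume_bytes = bytes_per_voxel
    for dim in volume_shape:
        volume_bytes *= dim
    patch_bytes = bytes_per_voxel
    for dim in patch_size:
        patch_bytes *= dim
    workers = num_workers * volume_bytes   # volumes being sampled by workers
    queue = queue_length * patch_bytes     # patches waiting in the queue
    batch = batch_size * patch_bytes       # batch handed to the model
    return workers + queue + batch

print(f'{estimate_peak_bytes() / 2**30:.2f} GiB')
```

Under these assumptions the queue term dominates, so halving `queue_length` (or shrinking `patch_size`) gives the biggest savings; `num_workers` scales the volume term, `batch_size` scales the smallest term, and `samples_per_volume` mostly affects how often workers reload volumes rather than peak RAM.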