Specify amount of memory (RAM) to be used #233
When given data and parameters, the memory usage is fixed. The program detects total RAM but won't make a tradeoff between RAM and runtime.
Thank you for the answer. I am setting up wtdbg2 on our HPC cluster. The processing is submitted as a job, and each job should specify how many resources it needs. For example: However, when the job is submitted, wtdbg2 detects all the resources on the compute node and plans accordingly. A user has circumvented this by occupying the whole node with all of its resources, which results in the monitoring scripts reporting enormous resource wastage. I am trying to find a solution for this, as your program seems to be the only realistic option for his PacBio reads.

In addition, wtdbg2 writes to disk very frequently. I see that this may be to avoid exceeding RAM limitations. At the same time, on some of our nodes with about 3 TB of RAM, we would prefer that the user do more work in RAM and access the disk less. Could you help me set this up so I can help solve these limitations? I would gladly provide any assistance and also contribute the findings back.
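A minimal sketch of the kind of per-job resource request being described, assuming a SLURM scheduler; the resource values, input file name, and output prefix are illustrative, not taken from the thread:

```bash
#!/bin/bash
#SBATCH --job-name=wtdbg2-asm
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G            # RAM granted by the scheduler
#SBATCH --time=48:00:00

# Match the thread count to the allocation; note that wtdbg2 still
# detects the node's total RAM internally, while the scheduler limit
# only caps what the job is allowed to use.
wtdbg2 -t ${SLURM_CPUS_PER_TASK} -x rs -g 32g -L 5000 \
    -i reads.fa.gz -fo assembly
```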
Please ignore the message about RAM and cores; the only option affected is
Thank you very much, I will try this.
This is the comparison when using:

wtdbg2 -t 8 -x rs -X 32 -g 32g -L 5000 -i ${INPUT_FILE} -fo axolotl
wtdbg2 -t 8 --minimal-output -x rs -X 32 -g 32g -L 5000 -i ${INPUT_FILE} -fo axolotl
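If it helps to quantify the difference, one simple check (assuming the two commands are re-run with distinct -fo prefixes, e.g. axolotl_full and axolotl_min, which are illustrative names) is to compare the on-disk footprint of the files each run leaves behind:

```bash
# Total size of the output files produced by each run
du -ch axolotl_full.* | tail -n 1   # default output
du -ch axolotl_min.*  | tail -n 1   # with --minimal-output

# List the individual files to see which intermediates were skipped
ls -lh axolotl_full.* axolotl_min.*
```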
I forked your repo; any recommendations on how I can test changing the disk write frequency?
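For measuring write frequency while experimenting with the fork, one generic approach (standard Linux tooling, nothing wtdbg2-specific; the input file and output prefix below are placeholders) is to count write activity per run and compare builds:

```bash
# Block-output operations reported by GNU time (the binary, not the shell builtin)
/usr/bin/time -v -o time.log \
    wtdbg2 -t 8 -x rs -g 32g -L 5000 -i reads.fa.gz -fo test_run
grep "File system outputs" time.log

# Or count write-related syscalls across all threads with strace
# (adds noticeable overhead, so compare runs traced the same way)
strace -o strace.log -f -c -e trace=write,pwrite64 \
    wtdbg2 -t 8 -x rs -g 32g -L 5000 -i reads.fa.gz -fo test_run
```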
Thanks for the information. With
Please have a look at the usage of wtdbg2 --minimal-output:

    Will generate as less output files (<--prefix>.*) as it can
I was able to use your software to make optimal use of our HPC setup with a sample of axolotl data, thank you for that help. However, now when handling the real genome with "-x sq -X 80 -g 7.5g -L 5000" and an input size of 1.7 TB, it is going to take about 80 days on a single node. So I was wondering whether wtdbg2 can use multiple nodes (MPI)?
Try
Is it possible to specify the amount of memory (RAM) to be used, instead of automatically detecting the amount of RAM available?