How to modify the train_cache_rate parameter through the configuration file #1131

@wojiazaiyugang

Description

I am using Auto3DSeg for segmentation experiments. My run script is:

```python
from monai.apps.auto3dseg import AutoRunner

if __name__ == '__main__':
    runner = AutoRunner(input={"name": "Task500_XXX",
                               "task": "segmentation",
                               "modality": "CT",
                               "datalist": "/home/XXX/Projects/MONAI/dataset.json",
                               "dataroot": "/media/DATA2/XXX/20221124/",
                               "class_names": ["upper", "lower"]},
                        work_dir="/home/XXX/Projects/MONAI/work_dir",
                        analyze=True,
                        algo_gen=True)
    runner.run()
```

First, the dataset analysis step generates dataset.yaml without any problem, and the algorithm generation step also succeeds. Training then starts by running work_dir/dints_0/scripts/search.py.
After a while the machine runs out of memory and the process is killed (my machine has 128 GB of RAM).
After reading the code, I suspect the cause is data caching:
https://github.com/Project-MONAI/research-contributions/blob/1dde7a112c69785b4fb5cd58d5b73f3fa5bdc577/auto3dseg/algorithm_templates/dints/scripts/search.py#L168
My dataset consists of about 300 CT volumes, and they cannot all be cached in memory.
I want to know how to modify this parameter. I tried modifying input.yaml and calling runner.set_train_param(), but neither worked.
