Different behaviour adding new jobs to queue in >5.27 #724
Description
Snakemake version
5.27.4
Slurm 19.05.7
Describe the bug
When I invoke a pipeline with the -c and -j options (for instance: snakemake -c "sbatch -c1" -j 5), snakemake runs at most 5 simultaneous jobs on the cluster (Slurm assigns each job one CPU here). In versions older than 5.27.3 (for instance 5.26.1), a new job would be scheduled immediately after one of the running jobs finished. Now jobs appear to be scheduled in batches: a new batch of 5 (-j) jobs is only submitted to the cluster once all previously submitted jobs are done.
I'm not sure whether this behaviour was changed intentionally, but if so, is there a way to get the old behaviour back?
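The difference between the two behaviours can be illustrated with a toy simulation (this is not Snakemake's scheduler code; the function names and job durations are invented for illustration):

```python
# Toy comparison of batch scheduling vs. refilling slots as jobs finish.
# Durations are hypothetical; functions are illustrative only.
import heapq

def batch_schedule(durations, slots):
    """Submit jobs in batches of `slots`; wait for the whole batch to finish."""
    total = 0.0
    for i in range(0, len(durations), slots):
        total += max(durations[i:i + slots])  # a batch ends with its slowest job
    return total

def greedy_schedule(durations, slots):
    """Refill a slot as soon as any running job finishes (pre-5.27 behaviour)."""
    heap = list(durations[:slots])  # finish times of the currently running jobs
    heapq.heapify(heap)
    for d in durations[slots:]:
        freed = heapq.heappop(heap)       # earliest finisher frees a slot
        heapq.heappush(heap, freed + d)   # next job starts in that slot
    return max(heap) if heap else 0.0

durations = [9, 1, 1, 1, 1, 9, 1, 1, 1, 1]
print(batch_schedule(durations, 5))   # 18.0: each batch waits on a 9s job
print(greedy_schedule(durations, 5))  # 10.0: short jobs free slots early
```

With many short jobs mixed with a few long ones, the batch behaviour leaves most of the 5 slots idle while it waits for the slowest job in each batch, which matches the reported drop in cluster occupation.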
Logs
None
Minimal example
Any pipeline that results in multiple parallel jobs will do, but the following minimal pipeline should reproduce it:
rule all:
    input:
        expand("{n}.txt", n=range(100))

rule test:
    output:
        "{n}.txt"
    shell:
        "sleep {wildcards.n} && touch {wildcards.n}.txt"
When the above pipeline is invoked with:
snakemake -c "sbatch -c1" -j 5
In 5.27.4 this results in batches of 5 jobs, with each new batch being scheduled only once all jobs from the previous batch are done, while in 5.26.2 it results in (more or less) constant occupation of 5 slots on the cluster.
Additional context
None