GNU Make: How many jobs is optimal for the '-j' option?
The most common recommendation for the '-j' option is the number of CPUs, including virtual CPUs, plus one. Ninja, another build system similar to Make, likewise defaults to more jobs than CPUs (CPU count plus two on most machines). However, I recently read an interesting article recommending that this value should be just the number of CPUs. Although the article is now a little dated, it makes some sense: if each job fully utilizes a CPU, then once all CPUs are busy, adding further jobs should only slow down the existing ones.

So where does the plus one come from? Perhaps it's better to ask the more general question of why it may be optimal to set this value to anything more than the number of CPUs. This really boils down to what the jobs are doing. If the jobs are I/O bound, for example, they will likely not fully utilize the CPUs, and a higher value may be optimal. In a normal compilation cycle, a newly started job first has to read its source files before it can start compiling, so there is a short initial period during which the job does not fully utilize a core, and it may therefore be advantageous to have an additional job ready to fill that gap.

There is a trade-off here, however. If the jobs take a long time to complete, the scheduler will most likely preempt some of them to run the extra waiting jobs. Those context switches are costly and may outweigh the benefit of the additional parallelism.
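Since the optimum depends so heavily on the workload, the only reliable way to pick a value is to time your own build. A minimal sketch of such a comparison, assuming a POSIX shell, the nproc utility from GNU coreutils, and a 'clean' target in your Makefile:

    # Time a full rebuild at a few candidate job counts.
    cpus=$(nproc)
    for jobs in "$cpus" "$((cpus + 1))" "$((cpus * 2))"; do
        make clean >/dev/null
        echo "== make -j$jobs =="
        time make -j"$jobs" >/dev/null
    done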
Having said all that, I personally stick with the number of cores plus one. For me this works pretty well in the average case.
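On Linux that translates to something like the following (nproc is from GNU coreutils; on other systems getconf _NPROCESSORS_ONLN may work instead):

    make -j"$(($(nproc) + 1))"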