How to use GNU make --max-load on a multicore Linux machine?

Submitted by 偶尔善良 on 2019-12-30 04:01:05

Question


From the documentation for GNU make: http://www.gnu.org/software/make/manual/make.html#Parallel

When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the ‘-l’ option to tell make to limit the number of jobs to run at once, based on the load average. The ‘-l’ or ‘--max-load’ option is followed by a floating-point number. For example,

 -l 2.5

will not let make start more than one job if the load average is above 2.5. The ‘-l’ option with no following number removes the load limit, if one was given with a previous ‘-l’ option.

More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with ‘-l’, make waits until the load average goes below that limit, or until all the other jobs finish.
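
For concreteness, a combined invocation (the numbers are illustrative) looks like this:

    # run at most 8 jobs, but hold back new jobs while the 1-minute load
    # average is not below 4.0
    make -j8 -l 4.0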

From the Linux man page for uptime: http://www.unix.com/man-page/Linux/1/uptime/

System load averages is the average number of processes that are either in a runnable or uninterruptable state. A process in a runnable state is either using the CPU or waiting to use the CPU. A process in uninterruptable state is waiting for some I/O access, eg waiting for disk. The averages are taken over the three time intervals. Load averages are not normalized for the number of CPUs in a system, so a load average of 1 means a single CPU system is loaded all the time while on a 4 CPU system it means it was idle 75% of the time.
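
On Linux the same three averages can be read directly from /proc/loadavg, which is a quick way to see the numbers make compares against (the values below are illustrative):

    $ cat /proc/loadavg
    3.02 2.87 2.71 1/412 12345
    # 1-, 5-, and 15-minute load averages, running/total tasks, last PID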

I have a parallel makefile and I want to do the obvious thing: have make keep adding processes until I get full CPU usage without inducing thrashing.

Many (all?) machines today are multicore, so that means that the load average is not the number make should be checking, as that number needs to be adjusted for the number of cores.

Does this mean that the --max-load (aka -l) flag to GNU make is now useless? What are people doing who are running parallel makefiles on multicore machines?
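
One common adjustment (an illustration on my part, not something prescribed by the answers below) is to scale both flags to the core count at invocation time:

    # cap the job count at the number of cores and use the same number as the load ceiling
    make -j"$(nproc)" -l"$(nproc)"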


Answer 1:


My short answer: --max-load is useful if you're willing to invest the time it takes to make good use of it. With its current implementation there's no simple formula to pick good values, or a pre-fab tool for discovering them.


The build I maintain is fairly large. Before I started maintaining it, the build took 6 hours. With -j64 on a ramdisk, it now finishes in 5 minutes (30 on an NFS mount with -j12). My goal here was to find reasonable caps for -j and -l that allow our developers to build quickly without making the server (build server or NFS server) unusable for everyone else.

To begin with:

  • If you choose a reasonable -jN value (on your machine) and find a reasonable upper bound for load average (on your machine), they work nicely together to keep things balanced.
  • If you use a very large -jN value (or leave it unspecified, e.g. -j with no number) and limit the load average, gmake will:
    • continue spawning processes (gmake 3.81 added a throttling mechanism, but that only helps mitigate the problem a little) until the max # of jobs is reached or until the load average goes above your threshold
    • while the load average is over your threshold:
      • do nothing until all sub-processes are finished
      • spawn one job at a time
    • do it all over again
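
To make the second case concrete, an invocation like the following (illustrative numbers) tends to go through exactly that spike-and-stall cycle:

    # unbounded job count with only a load cap
    make -j -l 8.0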

On Linux at least (and probably other *nix variants), the load average is an exponential moving average (UNIX Load Average Reweighed, Neil J. Gunther) of the number of processes using or waiting for the CPU (which can be driven up by too many runnable processes, processes waiting for I/O, page faults, etc.). Because it's an exponential moving average, newer samples have a stronger influence on the current value than older samples.
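
Roughly, and as a paraphrase on my part rather than something from the answer, the 1-minute average is updated every 5 seconds as

    load(t) = load(t - 5s) * e^(-5/60) + n(t) * (1 - e^(-5/60))

where n(t) is the current number of runnable plus uninterruptible tasks; the 5- and 15-minute averages use e^(-5/300) and e^(-5/900), so older samples decay more slowly.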

If you can identify a good "sweet spot" for the max load and number of parallel jobs (through a combination of educated guesses and empirical testing), and assuming you have a long-running build, your 1-minute load average will hit an equilibrium point (it won't fluctuate much). However, if your -jN number is too high for a given max load average, it will fluctuate quite a bit.

Finding that sweet spot is essentially equivalent to finding optimal parameters to a differential equation. Since it will be subject to initial conditions, the focus is on finding parameters that get the system to stay at equilibrium as opposed to coming up with a "target" load average. By "at equilibrium" I mean: 1m load avg doesn't fluctuate much.

Assuming you're not bottlenecked by limitations in gmake: when you've found a -jN -lM combination that gives a minimum build time, that combination will be pushing your machine to its limits. If the machine needs to be used for other purposes, you may want to scale it back a bit once you're finished optimizing.

Without regard to load average, the improvements I saw in build time with increasing -jN appeared to be [roughly] logarithmic. That is to say, I saw a larger difference between -j8 and -j12 than between -j12 and -j16.

Things peaked for me somewhere between -j48 and -j64 (on the Solaris machine it was about -j56) because the initial gmake process is single-threaded; at some point that thread cannot start new jobs faster than they finish.

My tests were performed on:

  • A non-recursive build
    • recursive builds may see different results; they won't run into the bottleneck I did around -j64
    • I've done my best to minimize the amount of make-isms (variable expansions, macros, etc.) in recipes, because recipe parsing occurs in the same thread that spawns parallel jobs. The more complicated recipes are, the more time make spends in the parser instead of spawning/reaping jobs. For example (see the Makefile sketch after this list):
      • No $(shell ...) macros are used in recipes; those are run during the first parsing pass and cached
      • Most variables are assigned with := to avoid recursive expansion
  • Solaris 10/sparc
    • 256 cores
    • no virtualization/logical domains
    • the build ran on a ramdisk
  • x86_64 linux
    • 32-core (4x hyper threaded)
    • no virtualization
    • the build ran on a fast local drive
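
As a concrete illustration of the recipe guidelines above (a hypothetical fragment; the file names and variables are mine, not from the real build), the assignments below are expanded once at parse time rather than on every use:

    # ':=' expands immediately, once, at parse time
    CC       := gcc
    SRCS     := $(wildcard src/*.c)
    OBJS     := $(SRCS:src/%.c=obj/%.o)
    # $(shell ...) kept out of recipes: it runs once during the first parse and the result is reused
    BUILD_ID := $(shell date +%s)

    all: $(OBJS)

    # recipe kept plain so the parsing/spawning thread does minimal expansion work
    # (the recipe line must begin with a literal tab)
    obj/%.o: src/%.c
    	$(CC) -DBUILD_ID=$(BUILD_ID) -c $< -o $@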



Answer 2:


Many (all?) machines today are multicore, so that means that the load average is not the number make should be checking, as that number needs to be adjusted for the number of cores.

Does this mean that the --max-load (aka -l) flag to GNU make is now useless?

No. Imagine jobs with demanding disk I/O. If you started as many jobs as you had CPUs, you still wouldn't utilize the CPU very well.
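
A hedged illustration of that point (my numbers, not the answerer's setup): for I/O-heavy jobs you might deliberately oversubscribe relative to the core count so the CPUs stay busy while some jobs sit in disk wait:

    make -j"$(( $(nproc) * 2 ))"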

Personally, I simply use -j because so far it has worked well enough for me.




Answer 3:


Even for a build where the CPU is the bottleneck, -l is not ideal. I use -jN, where N is the number of cores that exist or that I want to spend on the build. Choosing a bigger number doesn't speed up the build in my situation. It doesn't slow it down either, as long as you don't go overboard (such as by specifying infinite through -j).

Using -lN is broadly equivalent to -jN, and can work better if the machine has other independent work to do, but there are two quirks (apart from the one you mentioned, that the number of cores is not accounted for):

  • Initial spike: when the build starts, make launches a lot of jobs, many more than N. The system load number doesn't immediately increase when a process is forked. That's not a problem in my situation.
  • Starvation: when some build jobs take a long time compared to others, at the moment the first M quick jobs have ended, the system load is still >N. Soon the system load drops to N - M, but as long as those few slow jobs are dragging on, no new jobs are launched, and cores are left hungry. Make only thinks about launching new jobs when an old job ends, and at the start. It doesn't notice the system load dropping in between.



Answer 4:


Does this mean that the --max-load (aka -l) flag to GNU make is now useless? What are people doing who are running parallel makefiles on multicore machines?

One example is running jobs in a test suite where each test has to compile and link a program. Linking sometimes loads the system too much, and the result is: fatal error: ld terminated with signal 9 [Killed]. In my case it was not memory overhead but CPU usage, so the usually suggested swap file didn't help.

With the option -l 1, execution is still parallel but linking is almost sequential.
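
The original command isn't reproduced above; a hedged sketch of that kind of invocation (the check target is an assumption of mine) would be:

    # run the test suite in parallel, but stop launching new link-heavy jobs
    # whenever the 1-minute load average reaches 1
    make -j"$(nproc)" -l 1 check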



Source: https://stackoverflow.com/questions/13909355/how-to-use-gnu-make-max-load-on-a-multicore-linux-machine
