Speed up compile time with SSD


Question


I want to try to speed up the compile time of our C++ projects. They have about 3M lines of code.

Of course, I don't need to always compile every project, but sometimes a lot of source files are modified by others, and I need to recompile all of them (for example, when someone updates an ASN.1 source file).

I've measured that compiling a mid-sized project (one that does not involve all the source files) takes about three minutes. I know that's not too much, but sometimes it's really tedious waiting for a compile.

I've tried moving the source code to an SSD (an old OCZ Vertex 3 60 GB) which, in benchmarks, is 5 to 60 times faster than the HDD (especially in random reads/writes). Anyway, the compile time is almost the same (maybe 2-3 seconds faster, but that could just be chance).

Would moving the Visual Studio binaries to the SSD give an additional performance boost?

Just to complete the question: I have a Xeon W3520 @ 2.67 GHz and 12 GB of DDR3 ECC RAM.


Answer 1:


C++ compilation/linking is limited by processing speed, not HDD I/O. That's why you're not seeing any increase in compilation speed. (Moving the compiler/linker binaries to the SSD will do nothing. When you compile a big project, the compiler/linker and the necessary libraries are read into memory once and stay there.)
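
One way to sanity-check that claim on your own project (a minimal sketch, assuming a Linux box with GNU make; the target is a placeholder):

    # Drop the page cache first so the build actually has to touch the disk
    # (Linux only, requires root)
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Time a serial rebuild: if user+sys comes out close to real,
    # the build is CPU-bound and a faster disk will not help much
    time make -j1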

I have seen some minor speedups from moving the working directory to an SSD or ramdisk when compiling C projects (which are a lot less time-consuming than C++ projects that make heavy use of templates, etc.), but not enough to make it worth it.
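
The ramdisk experiment is cheap to reproduce; a sketch assuming Linux with tmpfs (the size and project path are placeholders):

    # Mount a 4 GiB ramdisk; pick a size larger than your build tree
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk

    # Copy the project in and build entirely from RAM (path is hypothetical)
    cp -r ~/myproject /mnt/ramdisk
    cd /mnt/ramdisk/myproject && make -j8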




Answer 2:


This all depends greatly on your build environment and other setup. For example, on my main compile server, I have 96 GiB of RAM and 16 cores. The HDD is rather slow, but that doesn't really matter, as just about everything is cached in RAM.

On my desktop (where I also compile sometimes) I only have 8 GiB of RAM and six cores. Doing the same parallel build there could be sped up greatly, because six compilers running in parallel eat up enough memory for the SSD speed difference to be very noticeable.

There are many things that influence build times, including the ratio of CPU- to I/O-boundedness. In my experience (GCC on Linux) they include:

  • Complexity of code. Lots of metatemplates make it use more CPU time; more C-like code can make the I/O of the generated objects (more) dominant.
  • Compiler settings for temporary files, like -pipe for GCC (see the sketch after this list).
  • Optimization being used. Usually, the more optimization, the more the CPU work dominates.
  • Parallel builds. Compiling a single file at a time will likely never produce enough I/O to push even today's slowest hard disk to any limit. Compiling with eight cores (or more) at once, however, might.
  • OS/filesystem being used. It seems that some filesystems in the past have choked on the access pattern for many files built in parallel, essentially putting the I/O bottleneck into the filesystem code, rather than the underlying hardware.
  • Available RAM for buffering. The more aggressively an OS can buffer your I/O, the less important HDD speed gets. This is why sometimes a make -j6 can be slower than a make -j4 despite having enough idle cores.
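
To illustrate the -pipe and parallel-build points above, a minimal sketch (GNU make and GCC assumed; the flags and job count are illustrative):

    # Makefile fragment: -pipe makes GCC pass data between compilation
    # stages through pipes instead of temporary files on disk, trading
    # a little memory for less I/O
    CXXFLAGS += -O2 -pipe

    # Then build in parallel, roughly one job per core, e.g.:
    #   make -j8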

To make it short: it depends on enough things that any "yes, it will help you" or "no, it won't" is pure speculation, so if you have the possibility to try it out, do it. But don't spend too much time on it: for every hour you spend trying to cut your compile times in half, estimate how often you (or your coworkers, if you have any) could have rebuilt the project, and how that relates to the possible time saved.
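
For example, with made-up numbers: if a full rebuild takes 3 minutes and happens 20 times a day across the team, halving it saves 30 minutes a day, so an hour of tuning pays for itself within two working days. If the rebuild happens only twice a day, the same hour of tuning needs about 20 working days to pay back.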




Answer 3:


I found that compiling a project of around 1 million lines of C++ sped up by about a factor of two when the code was on an SSD (system with an eight-core Core i7, 12 GB RAM). Actually, the best possible performance we got was with one SSD for the system and a second one for the source -- it wasn't that the build was much faster, but the OS was much more responsive while a big build was underway.

The other thing that made a huge difference was enabling parallel building. Note that there are two separate options that both need to be enabled:

  • Menu Tools → Options → Projects and Solutions → Build and Run → "maximum number of parallel project builds"
  • Project properties → C/C++ → General → Multi-processor Compilation

Multi-processor compilation is incompatible with a couple of other flags (including minimal rebuild, I think), so check the output window for warnings. I found that with the /MP compilation flag set, all cores were hitting close to 100% load, so you can at least see that the CPU is being used aggressively.
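
For reference, the second setting corresponds to the MultiProcessorCompilation property in the project file; a minimal .vcxproj sketch (the exact placement within your project file is an assumption):

    <ItemDefinitionGroup>
      <ClCompile>
        <!-- Passes /MP to cl.exe so one project compiles its files in parallel -->
        <MultiProcessorCompilation>true</MultiProcessorCompilation>
        <!-- /MP conflicts with minimal rebuild (/Gm), so switch it off -->
        <MinimalRebuild>false</MinimalRebuild>
      </ClCompile>
    </ItemDefinitionGroup>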




Answer 4:


One point not mentioned yet: when using ccache and a highly parallel build, you'll see benefits from using an SSD.
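
A minimal sketch of wiring that up (the cache path on the SSD, the cache size, and the job count are assumptions):

    # Keep the ccache cache on the SSD (path is hypothetical)
    export CCACHE_DIR=/ssd/ccache
    ccache -M 20G    # cap the cache size

    # Prefix the compilers with ccache and build in parallel
    make -j8 CC="ccache gcc" CXX="ccache g++"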




Answer 5:


I replaced my hard disk drive with an SSD hoping it would reduce the compilation time of my C++ project. Simply replacing the HDD with an SSD did not solve the problem; compilation time with both was almost the same.

However, after the initial disappointment, I succeeded in speeding up compilation by approximately a factor of six.

I took the following steps to increase the compilation speed:

  1. Turned off hibernation: "powercfg -h off" at a command prompt

  2. Turned off drive indexing on the C drive

  3. Shrunk the page file to 800 MB min / 1024 MB max (it was initially set to a system-managed size of 8092 MB).
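
For reference, steps 1 and 2 can be done from an elevated command prompt; a sketch (disabling the Windows Search service is one way to stop indexing, and whether the answerer did it this way is an assumption; the page-file change in step 3 is normally made through the System Properties dialog):

    :: Step 1: turn off hibernation (removes hiberfil.sys)
    powercfg -h off

    :: Step 2: stop and disable the Windows Search indexing service
    sc stop WSearch
    sc config WSearch start= disabled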



Source: https://stackoverflow.com/questions/15199356/speed-up-compile-time-with-ssd
