How to profile my C++ application on Linux


Question


I would like to profile my C++ application on Linux. I would like to find out how much time my application spends on CPU processing versus time spent blocked on I/O or being idle.

I know there is a profiling tool called Valgrind on Linux, but it breaks down the time spent in each method, and it doesn't give me an overall picture of how much time was spent on CPU processing versus being idle. Or is there a way to do that with Valgrind?


Answer 1:


I can recommend valgrind's callgrind tool in conjunction with KCacheGrind for visualization. KCacheGrind makes it pretty easy to see where the hotspots are.
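
For example, a minimal Callgrind session might look like this (the binary name is a placeholder, and the output file is named after the process ID):

valgrind --tool=callgrind ./myapp        # writes a callgrind.out.<pid> file in the current directory
kcachegrind callgrind.out.<pid>          # open the recorded profile in KCacheGrind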

Note: It's been too long since I used it, so I'm not sure if you'll be able to get I/O Wait time out of that. Perhaps in conjunction with iostat or pidstat you'll be able to see where all the time was spent.
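
For instance, a rough sketch (assuming the binary is called myapp and the sysstat tools are installed):

pidstat -u -d -p $(pidof myapp) 1     # per-process CPU and disk I/O statistics at 1-second intervals
iostat -x 1                           # system-wide device utilization, refreshed every second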




Answer 2:


Check out OProfile. Also, for more system-level diagnostics, try SystemTap.




Answer 3:


You might want to check out Zoom, which is a lot more polished and full-featured than OProfile et al. It costs money ($199), but you can get a free 30-day evaluation licence.




Answer 4:


LTTng is a good tool to use for full system profiling.
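
A minimal kernel-tracing session might look roughly like this (the session name and event list are examples; it needs lttng-tools, the lttng-modules kernel tracer and root privileges):

lttng create mysession
lttng enable-event --kernel sched_switch,block_rq_issue,block_rq_complete
lttng start
./myapp                    # run the application while tracing
lttng stop
lttng view | less          # inspect scheduling and block-I/O events
lttng destroy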




Answer 5:


If your app simply runs "flat out" (i.e. it's either using CPU or waiting for I/O) until it exits, and there aren't other processes competing, just do time myapp (or maybe /usr/bin/time myapp, which produces slightly different output from the shell builtin).

This will get you something like:

real    0m1.412s
user    0m1.288s
sys     0m0.056s

In this case, user + sys (kernel) time accounts for almost all the real time, and there's just 0.068s unaccounted for (probably time spent initially loading the app and its supporting libs).

However, if you were to see:

real    0m5.732s
user    0m1.144s
sys     0m0.078s

then your app spent 4.51s not consuming CPU and presumably blocked on I/O, which is the information I think you're looking for.

However, where this simple analysis technique breaks down is:

  • Apps which wait on a timer/clock or another external stimulus (e.g. event-driven GUI apps). It can't distinguish time spent waiting on the clock from time spent waiting on disk/network.
  • Multithreaded apps, which need a bit more thought to interpret the numbers.
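
If you want a little more detail than the shell builtin provides, GNU time's verbose mode is a simple next step (a sketch; myapp is a placeholder). A large count of voluntary context switches is usually a hint that the process spent time blocked waiting for I/O:

/usr/bin/time -v ./myapp     # also reports max RSS, CPU percentage and context-switch counts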



Answer 6:


callgrind is a very good tool, but I found OProfile to be more 'complete'. Also, it is the only one that lets you specify module and/or kernel source to allow deeper insight into your bottlenecks. The output is supposed to interface with KCacheGrind, but I had trouble with that, so I used Gprof2Dot instead. You can export your call graph to a .png.

Edit:

OProfile looks at the overall system so the process will just be:

[set up OProfile]

opcontrol --init
opcontrol --vmlinux=/path/to/vmlinux     (or --no-vmlinux)
opcontrol --start

[run your app here]

opcontrol --stop   (or opcontrol --shutdown; see the man page for the difference)

Then, to start looking at the results, see the man page for opreport.
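
As a sketch of the reporting step (the binary path is a placeholder, and the exact opreport/gprof2dot flags may differ between versions, so check the man pages):

opreport -l ./path/to/myapp                  # per-symbol breakdown for your binary
opreport -cgf | gprof2dot.py -f oprofile | dot -Tpng -o callgraph.png    # call graph as a .png (needs Graphviz)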




Answer 7:


The lackey and/or helgrind tools in valgrind should allow you to do this.




Answer 8:


google-perf-tools: a much faster alternative to callgrind (and it can generate output in the same format as callgrind, so you can use KCacheGrind).
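
A rough sketch of that workflow (file and binary names are placeholders; on some distributions the pprof script is installed as google-pprof):

g++ -O2 -g myapp.cpp -o myapp -lprofiler     # link against gperftools' CPU profiler
CPUPROFILE=myapp.prof ./myapp                # run with sampling enabled
pprof --callgrind ./myapp myapp.prof > callgrind.out.myapp
kcachegrind callgrind.out.myapp              # browse the result in KCacheGrind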




Answer 9:


See this post.

And this post.

Basically, between the time the program starts and when it finishes, it has a call stack. During I/O, the stack terminates in a system call. During computation, it terminates in a typical instruction.

Either way, if you can sample the stack at random wall-clock times, you can see exactly why it's spending that time.

The only remaining point is that thousands of samples might give a sense of confidence, but they won't tell you much more than 10 or 20 samples will.
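
One crude but effective way to take such a sample, assuming the process is already running and is called myapp, is to attach gdb and dump all the stacks; repeat this a handful of times at random moments and look for frames that keep showing up:

gdb -batch -p $(pidof myapp) -ex "thread apply all bt"    # print every thread's stack, then detach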



Source: https://stackoverflow.com/questions/2822357/how-to-profile-my-c-application-on-linux
