How to observe CUDA events and metrics for a subsection of an executable (e.g. only during a kernel's execution)?

Posted by 假如想象 on 2019-12-19 09:03:22

Question


I'm familiar with using nvprof to access the events and metrics of a benchmark, e.g.,

nvprof --system-profiling on --print-gpu-trace -o (file name) --events inst_issued1 ./benchmarkname

The

system-profiling on --print-gpu-trace -o (filename)    

command gives timestamps for kernel start and end times, plus power and temperature, and saves the info to an nvvp file so we can view it in the Visual Profiler. This lets us see what's happening in any section of the code, in particular while a specific kernel is running. My question is this:

Is there a way to isolate the events counted for only a section of the benchmark run, for example during a kernel execution? In the command above,

--events inst_issued1    

just gives the instructions tallied for the whole executable. Thanks!


Answer 1:


You may want to read the profiler documentation.

You can turn profiling on and off within an executable. The CUDA runtime API calls for this are:

cudaProfilerStart() 
cudaProfilerStop() 

So, if you wanted to collect profile information only for a specific kernel, you could do:

#include <cuda_profiler_api.h>
...

cudaProfilerStart();
myKernel<<<...>>>(...);
cudaProfilerStop();

and excerpting from the documentation:

When using the start and stop functions, you also need to instruct the profiling tool to disable profiling at the start of the application. For nvprof you do this with the --profile-from-start off flag. For the Visual Profiler you use the Start execution with profiling enabled checkbox in the Settings View.
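
Putting this together, here is a minimal sketch of the pattern, assuming a toy vecAdd kernel and launch configuration that are purely illustrative (they are not from the original question):

#include <cuda_profiler_api.h>
#include <cuda_runtime.h>

// Illustrative kernel standing in for "myKernel" above.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));

    // With --profile-from-start off, nothing before cudaProfilerStart() is counted.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // not profiled

    cudaProfilerStart();
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // profiled
    cudaDeviceSynchronize();  // kernel launches are asynchronous; let it finish first
    cudaProfilerStop();

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

and run it with profiling disabled at startup so only the bracketed region is counted:

nvprof --profile-from-start off --events inst_issued1 ./benchmarkname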

Also from the documentation for nvprof specifically, you can limit event/metric tabulation to a single kernel with a command line switch:

 --kernels <kernel name>

The documentation gives additional usage possibilities.
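
For instance, to restrict the inst_issued1 count to a single kernel by name (the kernel name here is illustrative):

nvprof --kernels vecAdd --events inst_issued1 ./benchmarkname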




Answer 2:


After looking into this a bit more, it turns out that kernel-level information is also given for all kernels (without using --kernels to specify them explicitly) by using

nvprof --events <event names> --metrics <metric names> ./<cuda benchmark>   

In fact, it gives output of the form

"Device","Kernel","Invocations","Event Name","Min","Max","Avg"

If a kernel is called multiple times in the benchmark, this lets you see the Min, Max, and Avg of the desired events across those kernel runs. Evidently the --kernels option in the CUDA 7.5 profiler also allows each individual invocation of a kernel to be specified.
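
If I read the nvprof documentation correctly, the --kernels switch takes a filter of the form <context-id>:<stream-id>:<kernel name>:<invocation>, so limiting collection to, say, the second invocation of a kernel would look something like this (kernel name illustrative; check your toolkit's documentation for the exact syntax):

nvprof --kernels ::vecAdd:2 --events inst_issued1 ./benchmarkname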



Source: https://stackoverflow.com/questions/32636261/how-to-observe-cuda-events-and-metrics-for-a-subsection-of-an-executable-e-g-o
