Profiling

Logging the data in Trace.axd to a text/xml file

末鹿安然 submitted on 2019-12-20 02:44:06
Question: In trying to track down a performance issue that occurs only in our production environment, we have enabled tracing within the app to see method calls and page load times. This works well and provides lots of information that helps track down issues. However, the only way to view this information is to browse to Trace.axd and view each request individually. It is also only possible to track the first X requests this way, and X has a maximum limit of 10,000. Is …
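One common route past the Trace.axd request limit (a sketch, not necessarily what this app needs) is to forward page-trace messages into System.Diagnostics and attach a file-based listener there. The file path and limits below are placeholders:

```xml
<configuration>
  <system.web>
    <!-- writeToDiagnosticsTrace forwards Trace.axd messages to System.Diagnostics.Trace -->
    <trace enabled="true" writeToDiagnosticsTrace="true"
           requestLimit="10000" mostRecent="true" />
  </system.web>
  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <!-- hypothetical output path; any TraceListener type works here -->
        <add name="fileLog" type="System.Diagnostics.TextWriterTraceListener"
             initializeData="C:\logs\trace.log" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```

With this in place every request's trace messages are appended to the log file, so the 10,000-request cap only limits what the Trace.axd viewer shows, not what gets recorded.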

What profiler should I use to measure _real_ time (including waiting for syscalls) spent in this function, not _CPU_ time

こ雲淡風輕ζ submitted on 2019-12-20 01:46:59
Question: The application does not compute things; it does I/O: it reads files and uses the network. I want the profiler to show that. I expect something like callgrind calling clock_gettime at each probe, or like oprofile interrupting my application (while it is sleeping or waiting on a socket/file/whatever) to see what it is doing. I want things like "read", "connect", "nanosleep", "send" and especially "fsync" (and all their callers) to be bold, not things like string or number functions …
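The distinction the question turns on can be demonstrated directly: wall-clock time includes time spent blocked in syscalls, while CPU time does not, which is why a CPU-sampling profiler never shows the blocking calls. A minimal sketch using only the standard library:

```python
import time

def io_bound_work():
    # Stand-in for blocking I/O: the process sleeps, consuming no CPU.
    time.sleep(0.2)

wall_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()    # CPU-time timer (user + system)

io_bound_work()

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# Wall time covers the sleep; CPU time stays near zero. A profiler that
# samples only on-CPU activity attributes ~nothing to io_bound_work().
print(f"wall: {wall_elapsed:.3f}s, cpu: {cpu_elapsed:.3f}s")
```

A wall-clock (off-CPU) profiler is one that samples on a real-time clock rather than on CPU cycles, so functions like the sketch above show up in proportion to the time callers actually waited on them.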

Time Sampling Problems with gprof

会有一股神秘感。 submitted on 2019-12-19 22:26:07
Question: I am attempting to profile some C++ code, compiled with g++ using the -pg option, with gprof. However, despite the fact that the program takes 10-15 minutes to run on my computer (with the CPU maxed out), the % time, cumulative seconds, and self seconds columns of the table produced by gprof are entirely 0.00! The calls column contains correct-looking data, for example over 150,000 calls to a basic function. Here is a sample of the data collected: % cumulative self self total time …

PHP profiling delay before shutdown function

拈花ヽ惹草 submitted on 2019-12-19 19:58:07
Question:

// VERY BEGIN OF SCRIPT
$_SERVER['HX_startTime'] = microtime(true);
...
// MY SHUTDOWN FUNCTION
register_shutdown_function('HX_shutdownFn');
function HX_shutdownFn() {
    // formatTimeSpan is a simple time-to-string conversion function
    var_dump(formatTimeSpan(microtime(true) - $_SERVER['HX_startTime']));
}
...
// VERY END OF SCRIPT
var_dump(formatTimeSpan(microtime(true) - $_SERVER['HX_startTime']));

I get 0.0005s at the end of the script and 1.1s in the shutdown function. Is this normal? Where 1 second …

When would you use reduce() instead of sum()?

泄露秘密 submitted on 2019-12-19 16:38:21
Question: I began learning functional programming recently and came up with this example while calculating my quiz average for a class:

scores = [90, 91, 92, 94, 95, 96, 97, 99, 100]

def add(num1, num2):
    '''returns the sum of the parameters'''
    return num1 + num2

import operator
timeit reduce(add, scores) / len(scores)  #--> 1000000 loops, best of 3: 799 ns per loop
timeit sum(scores) / len(scores)          #--> 1000000 loops, best of 3: 207 ns per loop
timeit reduce …
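The excerpt uses IPython's %timeit magic; reproduced as a plain script it also needs the functools import, since reduce() moved out of the builtins in Python 3. A sketch (the printed timings will vary by machine, so none are claimed here):

```python
from functools import reduce  # required in Python 3
import operator
import timeit

scores = [90, 91, 92, 94, 95, 96, 97, 99, 100]

def add(num1, num2):
    '''Returns the sum of the parameters.'''
    return num1 + num2

# All three spellings compute the same total; sum() is the idiomatic
# choice for addition, while reduce() generalizes to any binary operation.
total = sum(scores)
assert reduce(add, scores) == total == reduce(operator.add, scores)

t_reduce = timeit.timeit(lambda: reduce(add, scores), number=100_000)
t_sum = timeit.timeit(lambda: sum(scores), number=100_000)
print(f"reduce: {t_reduce:.4f}s, sum: {t_sum:.4f}s")
```

sum() is usually faster for plain addition because the loop runs in C without a Python-level function call per element; reduce() earns its keep when the folding operation is something sum() cannot express, such as operator.mul.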

Profiling Mule Container and Application using JProfiler

一笑奈何 submitted on 2019-12-19 10:49:10
Question: I am trying to profile Mule ESB apps deployed on a Mule container (CE v3.4) using JProfiler, but have been unsuccessful so far. My Mule server runs remotely on a 64-bit Linux server and JProfiler runs on my local Windows machine. Every attempt so far to connect the locally running JProfiler to the remote Mule server has failed. Has …

Profiling _mm_setzero_ps and {0.0f,0.0f,0.0f,0.0f}

风格不统一 submitted on 2019-12-19 10:12:06
Question: EDIT: As Cody Gray pointed out in his comment, profiling with optimization disabled is a complete waste of time. How then should I approach this test? Microsoft's XMVectorZero uses _mm_setzero_ps when _XM_SSE_INTRINSICS_ is defined and {0.0f,0.0f,0.0f,0.0f} when it is not. I decided to check how big the win is, so I used the following program in Release x86 with Configuration Properties > C/C++ > Optimization > Optimization set to Disabled (/Od):

constexpr __int64 loops = 1e9;
inline void fooSSE() …

How to observe CUDA events and metrics for a subsection of an executable (e.g. only during a kernel execution time)?

假如想象 submitted on 2019-12-19 09:03:22
Question: I'm familiar with using nvprof to access the events and metrics of a benchmark, e.g.,

nvprof --system-profiling on --print-gpu-trace -o (file name) --events inst_issued1 ./benchmarkname

The --system-profiling on --print-gpu-trace -o (filename) options give timestamps for kernel start and end times, power, and temperature, and save the info to an nvvp file so we can view it in the Visual Profiler. This allows us to see what's happening in any section of the code, in particular when a specific kernel is …

How reliable is windows task manager for determining memory usage of programs?

戏子无情 submitted on 2019-12-19 08:53:18
Question: Can I use Task Manager to detect huge memory leaks? I have a small text-parsing program that shows memory usage of around 640K when I launch it. When I parse and index a file, the memory usage grows with the size of the file. When I then "clear" the index, memory usage drops to around 1400K. After this point, I can add as many files as I want, and each time I clear the index, memory usage drops back to this 1400K level, plus or minus about 5%. This is after I made a change in my program.
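Part of the gap between launch and post-clear figures is that Task Manager reports what the OS has granted the process, while language runtimes typically keep freed pages around for reuse instead of returning them. A sketch of observing the runtime-level view instead, using Python's standard tracemalloc module (build_index is a hypothetical stand-in for the parser's index):

```python
import tracemalloc

def build_index():
    # Stand-in for the parser's index: a large throwaway structure.
    return [str(i) * 10 for i in range(100_000)]

tracemalloc.start()
index = build_index()
during, _ = tracemalloc.get_traced_memory()  # bytes currently allocated

index = None                                 # "clear" the index
after, _ = tracemalloc.get_traced_memory()

# The runtime-level count drops on clear even when the OS-level figure
# shown by Task Manager stays higher, because the allocator keeps pages.
print(f"during: {during} bytes, after clear: {after} bytes")
```

For leak detection the pattern matters more than the absolute number: if repeated build-and-clear cycles leave the post-clear figure flat (as in the question), that is evidence against a leak; a figure that climbs on every cycle is the signature to worry about.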
