profiling

ruby fast reading from std

半城伤御伤魂 submitted on 2020-01-14 06:41:30
Question: What is the fastest way to read 1,000,000 characters (digits) from STDIN and split them into an array of one-character integers (not strings)? 123456 > [1,2,3,4,5,6]

Answer 1: This should be reasonably fast:

    a = []
    STDIN.each_char do |c|
      a << c.to_i
    end

although some rough benchmarking shows this hackish version is considerably faster:

    a = STDIN.bytes.map { |c| c - 48 }

Answer 2: The quickest method I have found so far is:

    gets.unpack("c*").map { |c| c - 48 }

Here are some results
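The approaches from both answers can be compared side by side. Below is a minimal, self-contained sketch; the input string is illustrative, a StringIO stands in for STDIN, and each_byte replaces the IO#bytes used in the answer (deprecated and removed in Ruby 3.0):

```ruby
require "stringio"

# Illustrative input; a StringIO stands in for STDIN so the sketch is
# self-contained.
input = "123456"

# Answer 1, straightforward version: iterate characters, convert each.
a1 = []
StringIO.new(input).each_char { |c| a1 << c.to_i }

# Answer 1, byte version: an ASCII digit minus 48 (the code of "0")
# yields its numeric value. each_byte replaces the removed IO#bytes.
a2 = StringIO.new(input).each_byte.map { |b| b - 48 }

# Answer 2: unpack the string into signed 8-bit integers, subtract 48.
a3 = input.unpack("c*").map { |c| c - 48 }
```

All three produce [1, 2, 3, 4, 5, 6] for this input; the byte-based variants skip the per-character string allocation that each_char incurs, which is why the answers find them faster.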

My (huge) application throws an OutOfMemoryException, now what?

≯℡__Kan透↙ submitted on 2020-01-13 09:26:08
Question: This is by far the most complex piece of software I've built, and now it seems to be running out of memory at some point. I haven't done extensive testing yet, because I'm a bit lost as to how I should approach the problem at hand.

    HandleCount: 277
    NonpagedSystemMemorySize: 48136
    PagedMemorySize: 1898590208
    PagedSystemMemorySize: 189036
    PeakPagedMemorySize: 1938321408
    VirtualMemorySize: 2016473088
    PeakVirtualMemory: 2053062656
    WorkingSet: 177774592
    PeakWorkingSet: 883834880
    PrivateMemorySize: 1898590208

A change in my library made it much slower. Profiling isn't helping me. What might be the reason for the slow-down?

怎甘沉沦 submitted on 2020-01-13 08:28:08
Question: My Problem, Briefly: I made a change to my library; now it's much slower, but I can't figure out where it spends all that additional time. Profiling reports are not helping. Please help me figure out what the reason might be. Some Context: I made a Redis client library called Hedis and have a benchmark program for it. Now, I made some internal changes to the library to clean up the architecture. This caused performance (in Redis requests per second, as measured by said benchmark) to drop by a

Is there any way to profile performance of a WCF Application?

試著忘記壹切 submitted on 2020-01-13 06:13:50
Question: We're trying to measure the performance of our system, which is a .NET 3.5 application that uses WCF calls. The problem is that, until now, we weren't able to profile the methods inside these calls. A WinForms client application was coded to test our system. We tried using ANTS 4 Profiler and the VS2008 built-in Performance Analyzer, but we only got the total time of the WCF call. We would like to be able to measure all the calls that are being made inside the WCF call. Does anybody know if that's possible?

Making PHP performance profiling predictable

生来就可爱ヽ(ⅴ<●) submitted on 2020-01-13 03:25:07
Question: I'm using Xdebug with PHP to do some performance profiling, but when I run the same script more than once, I often get very different times, so it's hard to know how much faith to put in the results. Obviously there's a lot happening on a machine that can affect PHP's performance, but is there anything I can do to reduce the number of variables, so that multiple tests are more consistent? I'm running PHP under Apache, on Mac OS X.

Answer 1: Reduce the number of unrelated services on the box as much as

Read and parse perf.data

混江龙づ霸主 submitted on 2020-01-13 02:47:37
Question: I am recording performance counters from Linux using the command perf record. I want to use the resulting perf.data as input to other programs. Do you know how I should read and parse the data in perf.data? Is there a way to transform it to a .txt or .csv file?

Answer 1: An example command definition that redirects service-check performance data to a text file for later processing by another application is shown below:

    define command{
      command_name store-service-perfdata
      command_line /bin
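On the conversion question itself: `perf script -i perf.data` dumps the recorded samples as plain text, which any scripting language can then turn into CSV. The sketch below is hedged: the sample lines and the field layout the regex assumes (comm, pid, [cpu], timestamp, period, event) are illustrative, since perf's textual output varies with version and record options.

```ruby
require "csv"

# Hypothetical excerpt of `perf script` output; real output depends on
# the perf version and the options passed to `perf record`.
sample = <<~LINES
  firefox  1234 [002]  100.001: 250000 cycles:  ffffffff8105e4c4 do_idle
  firefox  1234 [002]  100.002: 250000 cycles:  ffffffff8105e4c8 do_idle
LINES

rows = sample.each_line.map do |line|
  # Assumed layout: comm  pid  [cpu]  timestamp:  period  event:
  m = line.match(/^\s*(\S+)\s+(\d+)\s+\[(\d+)\]\s+([\d.]+):\s+(\d+)\s+(\S+):/)
  next unless m
  { comm: m[1], pid: m[2].to_i, cpu: m[3].to_i,
    time: m[4].to_f, period: m[5].to_i, event: m[6] }
end.compact

# Emit a CSV string: one header row, then one row per parsed sample.
csv = CSV.generate do |out|
  out << rows.first.keys
  rows.each { |r| out << r.values }
end
```

In practice you would pipe `perf script` into such a parser rather than hard-coding the sample, and adjust the regex to whatever fields your perf build actually prints.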

PHP Profiler for a live system on top of Apache

二次信任 submitted on 2020-01-12 20:54:34
Question: I have a PHP website on an Apache server, and I would like to know whether there are tools or other ways to profile it to find bottlenecks in the code. What I need to know is which functions are taking long to process, etc. Something like gprof, except for PHP on a live Apache server. What are other ways to find bottlenecks in a PHP system?

Answer 1: You can use Xdebug: once installed you can trigger profiling of requests in a variety of ways, and you wind up with a valgrind-format profile for each
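As a concrete starting point, a minimal php.ini fragment for the trigger-based profiling the answer describes might look like this. The setting names below are for Xdebug 2.x; Xdebug 3 renamed them (xdebug.mode=profile, xdebug.start_with_request=trigger, xdebug.output_dir), so adjust to your installed version.

```ini
; Profile only requests that carry the XDEBUG_PROFILE trigger
; (GET/POST parameter or cookie), not every request on the live server.
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = /tmp
xdebug.profiler_output_name = cachegrind.out.%t.%p
```

A request such as http://example.com/page.php?XDEBUG_PROFILE=1 (the URL is illustrative) then writes a cachegrind file to the output directory, which tools like KCachegrind, QCachegrind, or Webgrind can open to show per-function inclusive and exclusive times.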

how to profile sequential launched multiple OpenCL kernels by one clFinish?

≡放荡痞女 submitted on 2020-01-12 08:36:06
Question: I have multiple kernels, and they are launched sequentially like this:

    clEnqueueNDRangeKernel(..., kernel1, ...);
    clEnqueueNDRangeKernel(..., kernel2, ...);
    clEnqueueNDRangeKernel(..., kernel3, ...);

and the kernels share one global buffer. Now, I profile every kernel execution and sum them up to get the total execution time, by adding this code block after each clEnqueueNDRangeKernel:

    clFinish(cmdQueue);
    status = clGetEventProfilingInfo(..., &starttime, ...);
    clGetEventProfilingInfo(...,