cpu

Why does an unintended infinite loop increase CPU usage?

安稳与你 submitted on 2019-12-04 01:54:26
I know an infinite loop of the unintended kind usually causes high CPU usage. But I don't quite understand why. Can anyone explain that to me? The CPU cannot do anything else while it's executing that loop (which never ends). Even if you're using a pre-emptive multi-tasking system (so the infinite loop will only clog its own process or thread forever), the loop will "eat" its time slice each time the OS's pre-emptive scheduler hands it the CPU for the next slice -- doing nothing, but consuming one slice's worth of CPU time each and every time, so that much CPU is lost to all other threads
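To make the answer concrete, here is a minimal sketch (class and thread names are illustrative, not from the question) contrasting a never-blocking loop with one that sleeps: the first keeps its thread permanently runnable, so the scheduler keeps granting it full time slices and one core sits near 100%; the second hands the CPU back almost immediately.

```csharp
using System;
using System.Threading;

class BusyVsSleep
{
    static void Main()
    {
        // "Unintended infinite loop": the thread never blocks, so the
        // scheduler keeps handing it whole time slices and one core
        // runs at ~100% doing no useful work.
        var busy = new Thread(() => { while (true) { } }) { IsBackground = true };

        // A loop that blocks: Sleep yields the rest of each slice back
        // to the OS, so this thread consumes almost no CPU time.
        var idle = new Thread(() => { while (true) Thread.Sleep(1000); }) { IsBackground = true };

        busy.Start();
        idle.Start();

        Console.WriteLine("Watch CPU usage in Task Manager or top; press Enter to exit.");
        Console.ReadLine();
    }
}
```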

How many SHA256 hashes can a modern computer compute?

∥☆過路亽.° submitted on 2019-12-04 00:53:44
I want to know the mathematical time required for cracking hashes based on different sets of characters. For example, using only 7-character strings of US-ASCII alphabetic characters, we know that there are 26^7 possible sequences. Knowing how many of these could be generated by a computer each minute would give me an idea of how long it would take to generate all possible hashes and crack a certain 7-character hash (birthday attacks aside). For example, taking the number above, if a modern quad core could generate 1 million hashes each minute it would take 8031810176 / 1000000 / 60 =
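As a rough illustration of how such a rate could be measured (a hedged single-threaded sketch; the buffer contents and iteration count are arbitrary, and real cracking tools running on GPUs are orders of magnitude faster):

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;
using System.Text;

class HashRate
{
    static void Main()
    {
        // One fixed 7-letter candidate; a real search would enumerate all 26^7.
        byte[] candidate = Encoding.ASCII.GetBytes("aaaaaaa");
        const int iterations = 1000000;

        using (SHA256 sha = SHA256.Create())
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                sha.ComputeHash(candidate);
            sw.Stop();

            double perMinute = iterations / sw.Elapsed.TotalMinutes;
            double keyspace = Math.Pow(26, 7); // 26^7 = 8,031,810,176 candidates
            Console.WriteLine($"~{perMinute:E2} hashes per minute on one core");
            Console.WriteLine($"Full keyspace: ~{keyspace / perMinute / 60:F1} hours");
        }
    }
}
```

At the rate assumed in the question (1 million hashes per minute), the full 26^7 = 8,031,810,176 keyspace works out to about 8,032 minutes, i.e. roughly 134 hours or 5.6 days.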

Theano CNN on CPU: AbstractConv2d Theano optimization failed

…衆ロ難τιáo~ submitted on 2019-12-04 00:03:45
I'm trying to train a CNN for object detection on images with the CIFAR10 dataset for a seminar at my university, but I get the following error: AssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against? I am running Anaconda 2.7 within a Jupyter notebook (CNN training on CPU) on a Windows 10 machine. As I already have updated

CPU Utilization high for sleeping processes

可紊 submitted on 2019-12-03 23:38:42
I have a process that appears to be deadlocked:
# strace -p 5075
Process 5075 attached - interrupt to quit
futex(0x419cf9d0, FUTEX_WAIT, 5095, NULL
It is sitting on the "futex" system call, and seems to be indefinitely waiting on a lock. The process is shown to be consuming a large amount of CPU when "top" is run:
# top -b -n 1
top - 23:13:18 up 113 days, 4:19, 1 user, load average: 1.69, 1.74, 1.72
Tasks: 269 total, 1 running, 268 sleeping, 0 stopped, 0 zombie
Cpu(s): 8.1%us, 0.1%sy, 0.0%ni, 91.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12165696k total, 3810476k used, 8355220k free, 29440k

Java limiting resource usage

99封情书 submitted on 2019-12-03 23:24:28
Is there a way to limit the number of cores that Java uses? And in the same vein, is it possible to limit how much of those cores is being used? You can use taskset on Linux. You can also lower the priority of a process, but unless the CPU(s) are busy, a process will get as much CPU as it can use. I have a library for dedicating a thread to a core, called Java Thread Affinity, but it may have a different purpose from what you have in mind. Can you clarify why you want to do this? I don't think there are built-in JVM options for this kind of tweak, however you can limit CPU usage by
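taskset and the Java Thread Affinity library both work by restricting the set of cores the scheduler may place a process or thread on. As a hedged, cross-language sketch of that same idea (the core mask and priority values are arbitrary; on Linux the affinity setter goes through sched_setaffinity, which is also what taskset uses):

```csharp
using System;
using System.Diagnostics;

class LimitResources
{
    static void Main()
    {
        Process self = Process.GetCurrentProcess();

        // Bitmask of allowed cores: 0b0011 = core 0 and core 1 only.
        self.ProcessorAffinity = (IntPtr)0b0011;

        // Lowering priority is independent of the core mask: the process
        // still uses whole cores when they are otherwise idle.
        self.PriorityClass = ProcessPriorityClass.BelowNormal;

        Console.WriteLine($"Limited to 2 cores at {self.PriorityClass} priority");
    }
}
```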

CPU Usage using WMI & C#

て烟熏妆下的殇ゞ submitted on 2019-12-03 22:36:10
Question: How can I retrieve the current CPU usage in C# using WMI? I've seen plenty of posts using performance counters, but I need a solution that can work with remote machines. I've also found a VB solution here, but I'd prefer to accomplish this in C# if possible. Answer 1: Performance with WMI is messy, to say the least. Performance counters work OK with remote machines. Use the System.Diagnostics.PerformanceCounterXxx classes; the constructors have overloads which take a machineName argument. Answer 2: the
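A minimal sketch of the performance-counter route suggested in Answer 1, assuming the remote machine is reachable and the caller has permission to read its counters (the machine name below is a placeholder):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class RemoteCpuUsage
{
    static void Main(string[] args)
    {
        // Placeholder host name; pass the real remote machine as an argument.
        string machine = args.Length > 0 ? args[0] : "REMOTE-PC";

        // Total CPU usage counter read from the remote machine.
        using (var cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total", machine))
        {
            cpu.NextValue();        // first sample is always 0: prime the counter
            Thread.Sleep(1000);     // sample interval
            Console.WriteLine($"{machine} CPU: {cpu.NextValue():F1}%");
        }
    }
}
```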

How many CPU cores does a Heroku dyno have?

房东的猫 submitted on 2019-12-03 22:26:02
I'm using Django with Celery 3.0.17 and am now trying to figure out how many Celery workers are run by default. From this link I understand that (not having modified this config) the number of workers must currently be equal to the number of CPU cores, which is why I need to know the core count. I wasn't able to find an official answer by googling or searching Heroku's Dev Center. I think it's 4 cores, as I'm seeing 4 concurrent connections to my AMQP server, but I wanted to confirm that. Thanks, J The number of CPUs is not published and is subject to change, but you can find out at runtime by running grep
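The excerpt breaks off mid-command, but the idea is simply to ask the dyno itself at runtime how many logical CPUs the kernel exposes. A hedged sketch of such a runtime check (reading /proc/cpuinfo directly, which is presumably what the grep in the answer counts):

```csharp
using System;
using System.IO;
using System.Linq;

class DynoCores
{
    static void Main()
    {
        // Logical CPU count as reported by the runtime.
        Console.WriteLine($"Environment.ProcessorCount: {Environment.ProcessorCount}");

        // The same number, counted directly from /proc/cpuinfo (Linux only).
        int visible = File.ReadLines("/proc/cpuinfo")
                          .Count(line => line.StartsWith("processor"));
        Console.WriteLine($"/proc/cpuinfo processors: {visible}");
    }
}
```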

LINQ to SQL: Too much CPU usage: what happens when there are multiple users?

廉价感情. submitted on 2019-12-03 21:47:30
I am using LINQ to SQL and seeing my CPU usage skyrocket. See the screenshot below (not reproduced in this excerpt). I have three questions. What can I do to reduce this CPU usage? I have done profiling and basically removed everything. Will making every LINQ to SQL statement into a compiled query help? I also find that even with compiled queries, simple statements like ByID() can take 3 milliseconds on a server with 3.25 GB RAM and a 3.17 GHz CPU; this will just become slower on a less powerful computer. Or will the compiled query get faster the more it is used? The CPU usage (on the local server it goes to 12-15%) for a single user will
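For the compiled-query part of the question, a minimal sketch of CompiledQuery.Compile (the Customer entity, its mapping, and the ByID shape below are hypothetical stand-ins, not the asker's actual model): the expression tree is translated to SQL once and the delegate is reused, which removes the per-call translation overhead but not the database round trip, so it reduces rather than eliminates the CPU cost.

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical mapping for illustration; a real project would use its own
// .dbml-generated entities and DataContext.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
}

public class MyDataContext : DataContext
{
    public MyDataContext(string connection) : base(connection) { }
    public Table<Customer> Customers => GetTable<Customer>();
}

public static class Queries
{
    // Translated to SQL once; every later call reuses the compiled delegate
    // instead of rebuilding and re-translating the expression tree.
    public static readonly Func<MyDataContext, int, IQueryable<Customer>> ByID =
        CompiledQuery.Compile((MyDataContext db, int id) =>
            db.Customers.Where(c => c.Id == id));
}

// Usage: var customer = Queries.ByID(db, 42).SingleOrDefault();
```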

Why doesn't the instruction reordering issue occur on a single CPU core?

倖福魔咒の submitted on 2019-12-03 21:15:34
From this post: Two threads being timesliced on a single CPU core won't run into a reordering problem. A single core always knows about its own reordering and will properly resolve all its own memory accesses. Multiple cores however operate independently in this regard and thus won't really know about each other's reordering. Why can't the instruction reordering issue occur on a single CPU core? This article doesn't explain it. EXAMPLE: The following pictures are taken from Memory Reordering Caught in the Act (images not reproduced in this excerpt). I think the recorded instructions can also cause the issue on a
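A hedged sketch of the experiment the quoted article describes (names are illustrative, and whether reordering is actually observed depends on the hardware, the JIT, and timing): each thread stores to one variable and then loads the other. Time-sliced on one core, whichever thread runs second always sees the first thread's store, because a core observes its own earlier writes; spread across two cores, each core's store buffer can delay its write becoming visible to the other, so both loads can return 0.

```csharp
using System;
using System.Threading.Tasks;

class ReorderingDemo
{
    // Plain (non-volatile) fields: no memory barriers are requested anywhere.
    static int x, y, r1, r2;

    static void Main()
    {
        int observed = 0;
        for (int i = 0; i < 1000000 && observed == 0; i++)
        {
            x = y = r1 = r2 = 0;
            Task t1 = Task.Run(() => { x = 1; r1 = y; });
            Task t2 = Task.Run(() => { y = 1; r2 = x; });
            Task.WaitAll(t1, t2);

            // Per the quoted post: this outcome cannot come from CPU
            // reordering when both threads share one core, but it can
            // appear when the threads run on different cores.
            if (r1 == 0 && r2 == 0) observed++;
        }
        Console.WriteLine(observed > 0
            ? "Saw r1 == 0 && r2 == 0 (store-load reordering)"
            : "No reordering observed in this run");
    }
}
```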

RenderScript speedup 10x when forcing default CPU implementation

旧时模样 submitted on 2019-12-03 20:49:26
I have implemented a CNN in RenderScript, described in a previous question which spawned this one. Basically, when running adb shell setprop debug.rs.default-CPU-driver 1 there is a 10x speedup on both the Nvidia Shield and the Nexus 7. The average computation time goes from around 50 ms to 5 ms, and the test app goes from around 50 fps to 130 or more. There are two convolution algorithms: (1) moving kernel, (2) im2col and GEMM from RenderScriptIntrinsicsBLAS. Both experience a similar speedup. The question is: why is this happening, and can this effect be triggered from the code in a predictable way? And is