CPU

How do you detect the CPU architecture type during run-time with GCC and inline asm?

Submitted by 故事扮演 on 2019-12-18 07:07:04

Question: I need to find the architecture type of a CPU. I do not have access to /proc/cpuinfo, as the machine is running syslinux. I know there is a way to do it with inline asm; however, I believe my syntax is incorrect, as my variable iedx is not being set properly. I'm drudging along with asm and am by no means an expert. If anyone has any tips or can point me in the right direction, I would be much obliged. static int is64Bit(void) { int iedx = 0; asm("mov %eax, 0x80000001"); asm("cpuid"); asm("mov %0 …

Is there hardware support for 128-bit integers in modern processors?

Submitted by 廉价感情. on 2019-12-18 05:42:54

Question: Do we still need to emulate 128-bit integers in software, or is there hardware support for them in your average desktop processor these days? Answer 1: The x86-64 instruction set can do 64-bit × 64-bit to 128-bit multiplication using one instruction (mul for unsigned, imul for signed, each with one operand), so I would argue that to some degree the x86 instruction set does include some support for 128-bit integers. If your instruction set does not have an instruction to do 64-bit × 64-bit to 128-bit, then you need …

CPU and Memory Cap for an AppDomain

Submitted by 耗尽温柔 on 2019-12-18 05:39:26

Question: I want to host an exe in an AppDomain and assign a CPU and memory cap to it so that it does not use more than the assigned processing power. Is this possible, and how? Answer 1: You can't cap the maximum memory directly, as far as I know. However, from .NET 4 on, the memory currently allocated by an AppDomain is available in the AppDomain.MonitoringSurvivedMemorySize property if AppDomain.MonitoringIsEnabled is set to true. You can spin up a watchdog thread to monitor allocations. Answer 2: Looks …

What has a better performance: multiplication or division?

Submitted by 三世轮回 on 2019-12-18 05:11:32

Question: Which version is faster: x * 0.5 or x / 2? I had a course at university called Computer Systems some time ago. From back then I remember that multiplying two values can be achieved with comparatively "simple" logic gates, but division is not a "native" operation and requires a sum register that is increased in a loop by the divisor and compared to the dividend. Now I have to optimise an algorithm with a lot of divisions. Unfortunately it's not just dividing by two, so binary shifting is no …

Xcode 4.3.2 and 100% CPU constantly in the idle time

Submitted by 放肆的年华 on 2019-12-18 04:42:14

Question: My Xcode started to behave very sluggishly yesterday while working on a medium-size project (around 200 source files). The project compiles correctly and runs on both simulator and device. I do not use any 3rd-party libraries, except a few widely used includes (like JSON or the Facebook iOS SDK). It constantly uses the CPU(s) at full speed, even in the idle state (no indexing, no compiling, no editing). RAM usage is relatively normal (300-50MB). My machine: Core 2 Duo 3.04GHz CPU, 8GB of …

Python multiprocessing.cpu_count() returns '1' on 4-core Nvidia Jetson TK1

Submitted by 喜你入骨 on 2019-12-18 04:40:22

Question: Can anyone tell me why Python's multiprocessing.cpu_count() function would return 1 when called on a Jetson TK1 with four ARMv7 processors? >>> import multiprocessing >>> multiprocessing.cpu_count() 1 The Jetson TK1 board is more or less straight out of the box, and no one has messed with cpusets. From within the same Python shell I can print the contents of /proc/self/status, and it tells me that the process should have access to all four cores: >>> print open('/proc/self/status').read() …

Cache bandwidth per tick for modern CPUs

Submitted by 安稳与你 on 2019-12-18 02:20:33

Question: What is the speed of cache access for modern CPUs? How many bytes can be read or written from memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD? Please answer with both theoretical numbers (width of the ld/st unit with its throughput in uops/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any. P.S. This question relates to the maximal rate of load/store instructions in assembler. There can be a theoretical rate of loading (all instructions per tick are the widest …

What are the common causes for high CPU usage?

Submitted by 与世无争的帅哥 on 2019-12-17 22:34:36

Question: Background: in my application, written in C++, I have created 3 threads: AnalysisThread (or Producer): it reads an input file, parses it, generates patterns, and enqueues them into a std::queue. PatternIdRequestThread (or Consumer): it dequeues patterns from the queue and sends them, one by one, to the database through a client (written in C++), which returns a pattern uid that is then assigned to the corresponding pattern. ResultPersistenceThread: it does a few more things, talks to the database, …

How is CPU usage calculated?

Submitted by 只愿长相守 on 2019-12-17 22:05:00

Question: On my desktop I have a little widget that tells me my current CPU usage. It also shows the usage for each of my two cores. I always wondered: how does the CPU calculate how much of its processing power is being used? Also, if the CPU is hung up doing some intense calculations, how can it (or whatever handles this activity) examine the usage without getting hung up as well? Answer 1: The CPU doesn't do the usage calculations by itself. It may have hardware features to make that task easier, but …