cpu

Throttling CPU/Memory usage of a Thread in Java?

Submitted by 左心房为你撑大大i on 2019-11-26 23:59:44
Question: I'm writing an application that will have multiple threads running, and I want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to avoid using C++ and JNI if possible. I realize this might not be possible in a higher-level language, but I'm curious to see if anyone has any ideas. EDIT: Added a bounty; I'd like some really good, well-thought-out ideas on this. EDIT 2: The situation I need this for is executing other people's code on my …
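
A common JVM-only approach (not from the original question) is cooperative throttling: the worker periodically compares its own CPU time, via ThreadMXBean, against elapsed wall-clock time and sleeps whenever it exceeds a budget. Note this only constrains code that cooperates, so it may not fit the "executing other people's code" scenario above. A minimal sketch, assuming a hypothetical doUnitOfWork() and a 50% budget:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThrottledWorker implements Runnable {
        // Hypothetical budget: at most 50% of one core, averaged over the run.
        private static final double MAX_CPU_FRACTION = 0.5;

        @Override
        public void run() {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            if (!bean.isCurrentThreadCpuTimeSupported()) return; // not all JVMs support this
            long wallStart = System.nanoTime();
            while (!Thread.currentThread().isInterrupted()) {
                doUnitOfWork();
                long cpu = bean.getCurrentThreadCpuTime();   // ns of CPU consumed by this thread
                long wall = System.nanoTime() - wallStart;   // ns of wall-clock time elapsed
                if (wall > 0 && (double) cpu / wall > MAX_CPU_FRACTION) {
                    try {
                        Thread.sleep(10); // back off until the ratio falls under the budget
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }

        private void doUnitOfWork() {
            // hypothetical placeholder for the thread's real work
        }
    }

Memory is harder to throttle this way: the JVM offers no per-thread allocation cap, so per-thread memory limits generally require running the untrusted code in a separate process with OS-level limits.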

Accurate calculation of CPU usage given in percentage in Linux?

Submitted by 拜拜、爱过 on 2019-11-26 23:56:34
Question: This question has been asked many times, yet there is no well-supported answer I could find. Many people suggest using the top command, but if you run top once (for example, from a script collecting CPU usage every second), it will always give the same CPU usage result (example 1, example 2). A more accurate way to calculate CPU usage is to read the values from /proc/stat, but most of the answers use only the first 4 fields from /proc/stat to calculate it (one …
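
A sketch of the /proc/stat approach (not from the original post): sample the aggregate cpu line twice and compute usage as (Δtotal − Δidle − Δiowait) / Δtotal, summing every field rather than only the first four. It assumes the usual Linux layout where field 4 is idle and field 5 is iowait.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class CpuUsage {
        // Parse the aggregate "cpu" line of /proc/stat into its numeric fields.
        static long[] readCpuLine() throws IOException {
            String line = Files.readAllLines(Paths.get("/proc/stat")).get(0);
            String[] parts = line.trim().split("\\s+");
            long[] fields = new long[parts.length - 1];
            for (int i = 1; i < parts.length; i++) {
                fields[i - 1] = Long.parseLong(parts[i]);
            }
            return fields;
        }

        public static void main(String[] args) throws Exception {
            long[] a = readCpuLine();
            Thread.sleep(1000);                      // sample interval
            long[] b = readCpuLine();

            long totalDelta = 0;
            for (int i = 0; i < b.length; i++) totalDelta += b[i] - a[i];
            // fields[3] = idle, fields[4] = iowait; both count as "not busy"
            long idleDelta = (b[3] - a[3]) + (b[4] - a[4]);

            double usage = 100.0 * (totalDelta - idleDelta) / totalDelta;
            System.out.printf("CPU usage over the last second: %.1f%%%n", usage);
        }
    }

Taking the delta between two samples is the key point: the counters in /proc/stat are cumulative since boot, which is why a single one-shot read (like running top once) always yields the same figure.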

When are x86 LFENCE, SFENCE and MFENCE instructions required?

Submitted by 核能气质少年 on 2019-11-26 23:54:34
Question: OK, I have been reading the following questions from SO regarding x86 CPU fences (LFENCE, SFENCE, and MFENCE): Does it make any sense to use the LFENCE instruction on x86/x86_64 processors? What is the impact of SFENCE and LFENCE on the caches of neighboring cores? Is the MESI protocol enough, or are memory barriers still required? (Intel CPUs) and: http://www.puppetmastertrading.com/images/hwViewForSwHackers.pdf https://onedrive.live.com/view.aspx?resid=4E86B0CF20EF15AD!24884&app=WordPdf&authkey=!AMtj_EflYn2507c …

Linux Process States

Submitted by 女生的网名这么多〃 on 2019-11-26 23:49:52
Question: In Linux, what happens to the state of a process when it needs to read blocks from a disk? Is it blocked? If so, how is another process chosen to execute? Answer 1: While waiting for a read() or write() on a file descriptor to return, the process is put into a special kind of sleep, known as "D" or "disk sleep". This is special because the process cannot be killed or interrupted while it is in that state. A process waiting for a return from ioctl() is also put to sleep in this manner. An …
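
To observe this state yourself, you can read the state field from /proc/<pid>/stat; the following is a minimal sketch (not from the original answer), assuming a Linux /proc layout:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProcState {
        public static void main(String[] args) throws Exception {
            String pid = args.length > 0 ? args[0] : "self";
            String stat = new String(Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
            // Field 3 (after "pid (comm)") is the state: R, S, D, Z, T, ...
            // The comm field may itself contain parentheses, so parse from the last ')'.
            char state = stat.charAt(stat.lastIndexOf(')') + 2);
            System.out.println("Process " + pid + " is in state " + state
                    + (state == 'D' ? " (uninterruptible disk sleep)" : ""));
        }
    }

Run it with a PID as the argument while that process is blocked on disk I/O, and the D state should show up; with no argument it inspects itself via /proc/self.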

How to write super-fast file-streaming code in C#?

Submitted by 与世无争的帅哥 on 2019-11-26 23:41:37
I have to split a huge file into many smaller files. Each of the destination files is defined by an offset and a length given as a number of bytes. I'm using the following code:

    private void copy(string srcFile, string dstFile, int offset, int length)
    {
        // The using blocks ensure both file handles are released on every call.
        using (BinaryReader reader = new BinaryReader(File.OpenRead(srcFile)))
        using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(dstFile)))
        {
            reader.BaseStream.Seek(offset, SeekOrigin.Begin);
            byte[] buffer = reader.ReadBytes(length);
            writer.Write(buffer);
        }
    }

Considering that I have to call this function about 100,000 times, it is remarkably slow. Is there a way to …

Detect CPU Speed/Memory/Internet Speed using Java?

Submitted by 妖精的绣舞 on 2019-11-26 23:35:18
Question: Is it possible within Java to identify the total CPU speed available, as well as the total system memory? Network connection speed to the web would also be awesome. Answer 1: This really depends on your OS, since Java will tell you little about the underlying machine. Unfortunately, you have to use different approaches depending on your OS. If you're on Linux, take a look at the /proc/cpuinfo file for CPU info; /proc generally has a wealth of information. Network (IO) will be reflected via the …
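
For the parts the JVM does expose, a short sketch (not from the original answer): core count and heap limits come from the standard Runtime API, while total physical RAM needs the JDK-specific com.sun.management bean (an assumption that only holds on HotSpot-style JVMs), and CPU clock speed has no portable API at all, hence the /proc/cpuinfo suggestion above.

    import java.lang.management.ManagementFactory;

    public class SystemInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("Logical processors: " + rt.availableProcessors());
            System.out.println("Max JVM heap (bytes): " + rt.maxMemory());

            // Total physical RAM is only available via the JDK-specific
            // com.sun.management bean, so check before casting.
            java.lang.management.OperatingSystemMXBean os =
                    ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.OperatingSystemMXBean) {
                com.sun.management.OperatingSystemMXBean sunOs =
                        (com.sun.management.OperatingSystemMXBean) os;
                System.out.println("Physical memory (bytes): "
                        + sunOs.getTotalPhysicalMemorySize());
            }
        }
    }

Note that availableProcessors() reports logical processors visible to the JVM (which container limits can reduce), and maxMemory() is the heap ceiling, not machine RAM.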

Threading vs single thread

Submitted by 心不动则不痛 on 2019-11-26 22:36:10
Question: Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g., from two different database tables). The single-threaded version of the application seems to run faster than the version with two threads. Why would that be? When I look at the performance monitor, both CPUs are very spiky; is this due to context switching? What are the best practices …
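
One way to make such a comparison concrete is to time both versions over identical work. A hypothetical harness, where loadTableA() and loadTableB() stand in for the asker's two reads; if both contend for the same disk or database connection, the threaded version can easily lose to the sequential one:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class LoadComparison {
        public static void main(String[] args) throws Exception {
            long t0 = System.nanoTime();
            loadTableA();
            loadTableB();
            System.out.println("Sequential: " + (System.nanoTime() - t0) / 1_000_000 + " ms");

            ExecutorService pool = Executors.newFixedThreadPool(2);
            long t1 = System.nanoTime();
            Future<?> a = pool.submit(LoadComparison::loadTableA);
            Future<?> b = pool.submit(LoadComparison::loadTableB);
            a.get(); // wait for both loads to finish before stopping the clock
            b.get();
            System.out.println("Two threads: " + (System.nanoTime() - t1) / 1_000_000 + " ms");
            pool.shutdown();
        }

        // Hypothetical stand-ins for the asker's two data-source reads.
        static void loadTableA() { /* ... */ }
        static void loadTableB() { /* ... */ }
    }

If the threaded run is slower, the bottleneck is likely the shared I/O resource rather than the CPU; threads only help when the work can actually proceed in parallel.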

GCC's reordering of read/write instructions

Submitted by 混江龙づ霸主 on 2019-11-26 22:16:52
Question: Linux's synchronization primitives (spinlocks, mutexes, RCU) use memory barrier instructions to prevent memory access instructions from being reordered, and this reordering can be done either by the CPU itself or by the compiler. Can someone show some examples of GCC-produced code where such reordering happens? I am mainly interested in x86. The reason I am asking is to understand how GCC decides which instructions can be reordered. Different x86 microarchitectures (for example, Sandy …

What happens after a L2 TLB miss?

Submitted by 江枫思渺然 on 2019-11-26 22:16:34
I'm struggling to understand what happens when the first two levels of the Translation Lookaside Buffer (TLB) result in misses. I am unsure whether "page walking" occurs in special hardware circuitry, whether the page tables are stored in the L2/L3 cache, or whether they reside only in main memory. Modern x86 microarchitectures have dedicated page-walk hardware. They can even speculatively perform page walks to load TLB entries before a TLB miss actually happens. Skylake can even have two page walks in flight at once; see Section 2.1.3 of Intel's optimization manual. This may be related to the page …

Timing the CPU time of a Python program?

Submitted by 核能气质少年 on 2019-11-26 21:50:56
Question: I would like to time a snippet of my code, and I want just the CPU execution time (ignoring operating-system processes, etc.). I've tried time.clock(); it appears too imprecise and gives a different answer each time. (In theory, surely running it again on the same code snippet should return the same value?) I've played with timeit for about an hour. Essentially, what kills it for me is the "setup" process: I end up having to import around 20 functions, which is impractical as I'm …