processor

Floating point versus fixed point: what are the pros/cons?

末鹿安然 submitted on 2020-05-09 19:32:28
Question: A floating-point type represents a number by storing its significant digits and its exponent in separate binary fields, so it fits in 16, 32, 64 or 128 bits. A fixed-point type stores numbers with two words, one representing the integer part, the other representing the part past the radix point as negative powers of two: 2^-1, 2^-2, 2^-3, etc. Floats are better because they have a wider range in an exponent sense, but not if one wants to store a number with more precision for a certain range, for example …
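The tradeoff the asker describes can be sketched with a toy Q16.16 fixed-point format (an illustrative choice, not anything from the question itself): 16 bits of integer part, 16 bits of fraction, so the gap between representable values is a constant 2^-16, unlike a float, whose gap grows with magnitude.

```python
# Minimal sketch of the fixed- vs floating-point tradeoff using a
# Q16.16 format: a number is stored as an integer scaled by 2**16.
FRACBITS = 16
SCALE = 1 << FRACBITS  # 65536

def to_fixed(x: float) -> int:
    """Encode x as a Q16.16 integer (rounded to nearest step)."""
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    """Decode a Q16.16 integer back to a float."""
    return f / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values: the raw product carries 32 fraction
    bits, so shift back down by FRACBITS to renormalize."""
    return (a * b) >> FRACBITS

a = to_fixed(3.25)
b = to_fixed(0.5)
print(from_fixed(fixed_mul(a, b)))  # 1.625
```

Within its fixed range (here roughly ±32768), every value is spaced exactly 2^-16 apart, which is the "more precision for a certain range" the question alludes to; outside that range the format simply overflows, whereas a float would lose precision gradually instead.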

Why is the actual runtime for a larger search value smaller than for a lower search value in a sorted array?

 ̄綄美尐妖づ submitted on 2020-04-13 16:59:50
Question: I executed a linear search on an array containing all unique elements in the range [1, 10000], sorted in increasing order, with every search value from 1 to 10000, and plotted the runtime vs. search value graph. Upon closely analysing a zoomed-in version of the plot, I found that the runtime for some larger search values is smaller than for lower search values, and vice versa. My best guess for this phenomenon is that it is related to how data is processed by the CPU using …
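The shape of this experiment can be reproduced as follows (the names are illustrative, not the asker's code). The usual explanation for the inversions in such plots is measurement noise from the scheduler, caches, and frequency scaling, which is why timing folklore recommends taking the minimum of several repeats:

```python
# Sketch: time a linear search for each target. A single wall-clock
# measurement is noisy, so a larger target can occasionally appear
# "faster" than a smaller one even though more elements are inspected;
# best-of-N suppresses much of that noise.
import time

def linear_search(arr, target):
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

arr = list(range(1, 10001))

def timed_search(target, repeats=5):
    """Return the best-of-N wall time for searching `target`."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        linear_search(arr, target)
        best = min(best, time.perf_counter() - t0)
    return best
```

With `repeats=1` the plot of `timed_search` over all targets shows the same local inversions the question describes; raising `repeats` makes the expected monotonic trend much cleaner.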

Does the code block of a method live on the stack or the heap at the moment of execution?

限于喜欢 submitted on 2020-02-25 02:16:05
Question: I'm relatively new to learning programming languages, and I feel I have 20 to 25% understanding of object-oriented programming, more specifically the C# language. So I state this question without knowing the actual significance of its answer, if any, to my process of learning the language, but I really felt I needed to ask it. When a method is called for execution, I know that all its local variables, its parameters and its return value are present in stack memory. …
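The distinction the asker is circling can be shown in Python rather than C# (an analogy, not the .NET mechanism itself): a method's compiled code is one shared object, while each call gets its own fresh frame holding locals and arguments. The same separation holds in .NET, where JIT-compiled method code lives in runtime-managed memory, not on the call stack.

```python
# Sketch: code is shared across calls; locals are per-call.
def square(x):
    y = x * x
    return y

# Every invocation runs the same single code object:
code_a = square.__code__
code_b = square.__code__
print(code_a is code_b)      # True: the code block is not per-call

# Locals exist only inside each call's own frame:
print(square(3), square(4))  # 9 16 — each call had its own 'y'
```

So the stack holds per-call bookkeeping (arguments, locals, return address), while the instructions being executed live elsewhere and are merely pointed at by each frame.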

Is it possible to run 64-bit code on a machine with a 32-bit processor?

余生颓废 submitted on 2020-01-24 06:24:55
Question: I have searched around for answers to these questions, but without much luck. Is it possible to run 32-bit code on a machine with a 64-bit processor? The answer seems to be yes, but there is debate about performance, since 32 bits are left unused on the processor. Now my question is the reverse: is it possible to run 64-bit code on a machine with a 32-bit processor? From my limited understanding, the answer is no, because code designed to run on 64-bit will be using 64-process …
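A small sketch of the distinction involved (it cannot make 64-bit code run natively on a 32-bit CPU, which lacks the 64-bit registers and instruction encodings, but it shows how to check what word size the running environment reports):

```python
# Sketch: query the pointer width of the running interpreter and the
# machine type the OS reports. A 64-bit binary on a 64-bit CPU shows
# 64 here; a 32-bit build would show 32 even on 64-bit hardware.
import platform
import struct

pointer_bits = struct.calcsize("P") * 8  # size of a native pointer, in bits
print(pointer_bits)                      # typically 64 on a modern system
print(platform.machine())                # e.g. 'x86_64', 'i686', 'arm64'
```

The only ways to run 64-bit code on a genuinely 32-bit CPU are software emulation (e.g. QEMU interpreting or translating the instructions), at a large performance cost.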

How do I know if my server has NUMA?

两盒软妹~` submitted on 2020-01-22 05:21:10
Question: Hopping from Java garbage collection, I came across JVM settings for NUMA. Curious, I wanted to check whether my CentOS server has NUMA capabilities. Is there a *nix command or utility that could grab this info?

Answer 1: I'm no expert here, but here's something:

Box 1, no NUMA:

~$ dmesg | grep -i numa
[ 0.000000] No NUMA configuration found

Box 2, some NUMA:

~$ dmesg | grep -i numa
[ 0.000000] NUMA: Initialized distance table, cnt=8
[ 0.000000] NUMA: Node 4 [0,80000000) + [100000000,280000000) …
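Besides `dmesg`, on Linux the sysfs tree exposes one `nodeN` directory per NUMA node, which tools like `numactl --hardware` and `lscpu` also read. A small helper sketch (the function name is illustrative):

```python
# Sketch (Linux-specific): count NUMA nodes via sysfs. The dmesg ring
# buffer can have rotated away by the time you look; sysfs is stable.
import glob
import os

def numa_nodes(sysfs="/sys/devices/system/node"):
    """Return the NUMA node directories, or [] if the sysfs tree is
    absent (non-Linux, or a kernel built without NUMA support)."""
    if not os.path.isdir(sysfs):
        return []
    return sorted(glob.glob(os.path.join(sysfs, "node[0-9]*")))

nodes = numa_nodes()
# 0 or 1 entries -> effectively no NUMA; 2+ -> real NUMA topology.
print(len(nodes))
```

This has the advantage over grepping `dmesg` that it works long after boot and doesn't depend on kernel log formatting.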

Inclusive or exclusive? L1, L2 cache in the Intel Ivy Bridge processor

帅比萌擦擦* submitted on 2020-01-21 02:13:06
Question: I have an Intel Ivy Bridge processor, Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (L1: 32KB, L2: 256KB, L3: 8MB). I know L3 is inclusive and shared among multiple cores. I want to know the following with respect to my system. Part 1: Is L1 inclusive or exclusive? Is L2 inclusive or exclusive? Part 2: If L1 and L2 are both inclusive, then to find the access time of L2 we first declare an array (1MB) larger than the L2 cache (256KB), then start accessing the whole array to load it into the L2 cache. …
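The sweep experiment the question describes looks roughly like the sketch below. This is only the shape of the method: Python's interpreter overhead dwarfs cache effects, so a real measurement would be done in C with a high-resolution timer and pointer-chasing to defeat the prefetcher.

```python
# Very rough sketch of a cache-sweep timing experiment: touch a buffer
# at cache-line stride (64 bytes) and time one full pass. A buffer that
# spills past a cache level should cost measurably more per access in
# native code; in Python, treat this only as an illustration.
import array
import time

def sweep_time(size_bytes, stride=64, repeats=3):
    """Time one pass over size_bytes of data, stepping one cache line
    at a time. Returns best-of-N seconds."""
    buf = array.array("b", bytes(size_bytes))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        s = 0
        for i in range(0, size_bytes, stride):
            s += buf[i]
        best = min(best, time.perf_counter() - t0)
    return best

small = sweep_time(128 * 1024)       # fits within a 256KB L2 once warm
large = sweep_time(4 * 1024 * 1024)  # spills past L2 into L3 / memory
```

(For the record on the question itself: on Ivy Bridge, L1 and L2 are neither strictly inclusive nor exclusive of each other; only L3 is inclusive of both.)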

What are some tricks that a processor does to optimize code?

与世无争的帅哥 submitted on 2020-01-20 18:11:40
Question: I am looking for things like reordering of code that could even break the code in the case of multiple processors.

Answer 1: Wikipedia has a fairly comprehensive list of optimization techniques here.

Answer 2: The most important one would be memory access reordering. Absent memory fences or serializing instructions, the processor is free to reorder memory accesses. Some processor architectures restrict how much they can reorder; Alpha is known for being the weakest (i.e., the one which can …
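The reordering hazard in Answer 2 is usually demonstrated with the classic store/load litmus test, sketched below. Note this is only the structure of the test: CPython's GIL serializes bytecode, so you will not observe hardware reordering here; in native code on x86 (which permits store-load reordering), both threads reading 0 is a real outcome unless a fence sits between the store and the load.

```python
# Sketch of the store/load litmus test. Each thread stores to one
# variable, then loads the other. On weakly ordered hardware (and even
# on x86's TSO model, for this store->load pattern) native code can end
# with r1 == r2 == 0, which no sequential interleaving allows.
import threading

x = y = 0
r1 = r2 = 0

def t1():
    global x, r1
    x = 1        # store
    r1 = y       # load: hardware may order this before the store

def t2():
    global y, r2
    y = 1        # store
    r2 = x       # load

a = threading.Thread(target=t1)
b = threading.Thread(target=t2)
a.start(); b.start()
a.join(); b.join()
print(r1, r2)
```

This is exactly why lock-free native code needs memory fences (e.g. `mfence` on x86) or acquire/release atomics: they forbid the processor from completing the load ahead of the earlier store.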