long-integer

high bits of long multiplication in Java?

心已入冬, submitted on 2019-12-01 09:06:46
Is there any way to get the high half of the multiplication of two longs in Java? I.e. the part that vanishes due to overflow (so the upper 64 bits of the 128-bit result). I'm used to writing OpenCL code, where the mul_hi command does exactly this: http://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/mul_hi.html Since OpenCL can do it efficiently on my CPU, Java should be able to do so as well, but I can't find how to do this (or even mimic its behaviour efficiently) in Java. Is this possible in Java, and if so, how? The accepted solution is wrong most of the time (66%), though the
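For reference, Java 9 added Math.multiplyHigh, which does exactly this. On older JVMs the same result can be computed portably by splitting each operand into 32-bit halves (the classic Hacker's Delight technique); the mulHi name below is chosen here for illustration, not a standard API:

```java
public class MulHi {
    // High 64 bits of the signed 128-bit product x * y.
    // Equivalent to OpenCL's mul_hi and to Math.multiplyHigh (Java 9+).
    static long mulHi(long x, long y) {
        long x0 = x & 0xFFFFFFFFL, x1 = x >> 32;   // low/high 32-bit halves (signed high)
        long y0 = y & 0xFFFFFFFFL, y1 = y >> 32;
        long p00 = x0 * y0;                        // low x low
        long t   = x1 * y0 + (p00 >>> 32);         // carry the top of p00 into the cross term
        long p01 = (t & 0xFFFFFFFFL) + x0 * y1;    // both cross terms, middle 32 bits
        long p10 = t >> 32;
        return x1 * y1 + p10 + (p01 >> 32);        // high x high plus accumulated carries
    }

    public static void main(String[] args) {
        System.out.println(mulHi(Long.MAX_VALUE, Long.MAX_VALUE)); // 4611686018427387903
    }
}
```

On a JVM with Math.multiplyHigh available, the intrinsic version should be preferred; this sketch is mainly useful for pre-Java-9 code.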

Bitwise XOR java long

这一生的挚爱, submitted on 2019-12-01 05:47:59
Question: I am using Oracle Java 7.51 on Ubuntu 12.04 and trying to do this: long a = 0x0000000080000001 ^ 0x4065DE839A6F89EEL; System.out.println("result " + Long.toHexString(a)); Output: result bf9a217c1a6f89ef. But I was expecting the result to be 4065de831a6f89ef, since ^ is the bitwise XOR operator in Java. Which part of the Java specification am I reading wrong?

Answer: You need an L at the end of the first integer literal: long a = 0x0000000080000001L ^ 0x4065DE839A6F89EEL; Otherwise it is an int literal, not a long (the leading zeroes being ignored). The ^ operator then promotes the first operand value from
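The promotion is easy to see by putting the two forms side by side. Without the L suffix, 0x80000001 is an int with value -2147483647, which is sign-extended to 0xFFFFFFFF80000001L before the XOR; a minimal reproduction:

```java
public class XorLiteral {
    public static void main(String[] args) {
        // int literal: sign-extended to 0xFFFFFFFF80000001L before the XOR.
        long wrong = 0x0000000080000001 ^ 0x4065DE839A6F89EEL;
        // long literal: the high 32 bits really are zero.
        long right = 0x0000000080000001L ^ 0x4065DE839A6F89EEL;
        System.out.println(Long.toHexString(wrong)); // bf9a217c1a6f89ef
        System.out.println(Long.toHexString(right)); // 4065de831a6f89ef
    }
}
```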

What is the historical context for long and int often being the same size?

喜你入骨, submitted on 2019-12-01 05:23:28
According to numerous answers here, long and int are both 32 bits in size on common platforms in C and C++ (Windows and Linux, 32- and 64-bit). (I'm aware that there is no standard requirement, but in practice these are the observed sizes.) So my question is: how did this come about? Why do we have two types that are the same size? I had always assumed long would be 64 bits most of the time, and int 32. I'm not saying it "should" be one way or the other; I'm just curious how we got here. From the C99 rationale (PDF) on section 6.2.5: [...] In the 1970s, 16-bit C (for the PDP-11) first represented

double to long without conversion in Java

旧时模样, submitted on 2019-12-01 05:23:21
Question: I need to turn a double into a long while preserving its binary structure, not its numeric value. Just change the type, but leave the bit pattern as it is. Is there a native way to do it?

Answer: Yes, the Double class provides doubleToLongBits and doubleToRawLongBits. Javadoc is your friend.

Source: https://stackoverflow.com/questions/15065869/double-to-long-without-conversion-in-java
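A minimal illustration of both directions (doubleToLongBits and its inverse, longBitsToDouble):

```java
public class BitsDemo {
    public static void main(String[] args) {
        double d = 1.0;
        // Reinterpret the 64-bit IEEE 754 pattern as a long; no numeric conversion.
        long bits = Double.doubleToLongBits(d);
        System.out.println(Long.toHexString(bits)); // 3ff0000000000000
        // longBitsToDouble reverses the operation exactly.
        double back = Double.longBitsToDouble(bits);
        System.out.println(back); // 1.0
    }
}
```

The Raw variant (doubleToRawLongBits) differs only in that it preserves the exact bit pattern of NaN values instead of collapsing them to the canonical NaN.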

What does 'Natural Size' really mean in C++?

江枫思渺然, submitted on 2019-12-01 05:16:32
I understand that the 'natural size' is the width of integer that is processed most efficiently by particular hardware. When using short in an array or in arithmetic operations, the short integer must first be converted into an int. Q: What exactly determines this 'natural size'? I am not looking for simple answers such as "if it has a 32-bit architecture, its natural size is 32 bits". I want to understand why this is most efficient, and why a short must be converted before arithmetic operations are performed on it. Bonus Q: What happens when arithmetic operations are conducted on a long integer? the

(a * b) / c MulDiv and dealing with overflow from intermediate multiplication

旧街凉风, submitted on 2019-12-01 03:48:38
I need to do the following arithmetic: long a, b, c; long result = a * b / c; While the result is guaranteed to fit in a long, the multiplication is not, so it can overflow. I tried to do it step by step (first multiply, then divide) while handling the overflow by splitting the intermediate result of a*b into an int array of size at most 4 (much like BigInteger does with its int[] mag field). Here I got stuck on the division: I cannot get my head around the bitwise shifts required to do a precise division. All I need is the quotient (I don't need the remainder). The hypothetical method
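For completeness, one exact (if not the fastest) way to get the quotient is to route the 128-bit intermediate through BigInteger rather than hand-rolling the division; mulDiv is a name chosen here for illustration, and the sketch assumes, as the question does, that the final quotient fits in a long:

```java
import java.math.BigInteger;

public class MulDiv {
    // Exact a * b / c with a 128-bit intermediate product.
    // longValueExact throws ArithmeticException if the quotient overflows a long.
    static long mulDiv(long a, long b, long c) {
        return BigInteger.valueOf(a)
                .multiply(BigInteger.valueOf(b))
                .divide(BigInteger.valueOf(c))
                .longValueExact();
    }

    public static void main(String[] args) {
        // a * b overflows long here, but the quotient fits.
        System.out.println(mulDiv(Long.MAX_VALUE, 1_000_000L, 2_000_000L));
    }
}
```

The per-call allocation makes this slower than a dedicated 128-bit division, but it is a useful correctness baseline to test a hand-written version against.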

How big is the precision loss converting long to double?

爷，独闯天下, submitted on 2019-12-01 03:16:48
I have read in different posts on Stack Overflow and in the C# documentation that converting a long (or any other numeric type) to double loses precision. This is quite obvious given the representation of floating-point numbers. My question is: how big is the loss of precision if I convert a larger number to double? Do I have to expect differences larger than +/- X? The reason I would like to know this is that I have to deal with a continuous counter which is a long. This value is read by my application as a string, needs to be cast, and has to be divided by e.g. 10 or some
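Although the question is about C#, the bounds are a property of IEEE 754 64-bit doubles and so apply in any language: every integer with magnitude up to 2^53 converts exactly, and beyond that the gap between adjacent doubles grows with magnitude. A quick Java sketch of the same behaviour:

```java
public class LongToDouble {
    public static void main(String[] args) {
        // A double has a 53-bit significand, so longs up to 2^53 convert exactly.
        long exact = 1L << 53;        // 9007199254740992
        long lossy = (1L << 53) + 1;  // 9007199254740993 has no double representation
        System.out.println((long) (double) exact == exact);  // true
        System.out.println((long) (double) lossy == lossy);  // false (rounds back to 2^53)
        // Near the top of the long range the spacing between adjacent
        // doubles has grown to 1024, so errors of up to +/-512 can occur.
        System.out.println(Math.ulp((double) (1L << 62)));   // 1024.0
    }
}
```

So for a counter below 2^53 (about 9 * 10^15) the conversion is lossless; above that, the worst-case error grows in steps up to roughly 512 near Long.MAX_VALUE.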

Making 'long' 4 bytes in gcc on a 64-bit Linux machine

泪湿孤枕, submitted on 2019-12-01 03:11:14
I am working on porting an application to 64-bit on the Linux platform. The application is currently supported on Linux, Windows and Mac in 32-bit, and on Windows in 64-bit. One of the issues we frequently encounter is the use of long for int and vice versa. This wasn't a problem until now, since long and int are interchangeable (both are 4 bytes) on the platforms the application currently supports. The codebase is a huge one, with lots of legacy code and #defines for many data types, which makes it cumbersome to search for every use of long and replace it appropriately with int. As a short term solution,