128-bit

Fastest way to convert binary to decimal?

Submitted by 北战南征 on 2019-11-27 16:38:53
Question: I've got four unsigned 32-bit integers representing an unsigned 128-bit integer, in little-endian order:

typedef struct { unsigned int part[4]; } bigint_t;

I'd like to convert this number into its decimal string representation and output it to a file. Right now I'm using a bigint_divmod10 function to divide the number by 10, keeping track of the remainder. I call this function repeatedly, outputting the remainder as a digit, until the number is zero. It's pretty slow. Is this the fastest way?
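
The usual fix is to peel off more than one digit per division pass. Below is a minimal sketch, assuming the bigint_t layout from the question; bigint_divmod1e9 and bigint_to_decimal are illustrative names, not from the original post. Dividing by 10^9 per pass yields nine digits at roughly the cost of one divmod10 call:

    #include <stdio.h>

    typedef struct { unsigned int part[4]; } bigint_t; /* little-endian limbs */

    /* Divide n by 10^9 in place and return the remainder.
       One pass peels off nine decimal digits instead of one. */
    static unsigned int bigint_divmod1e9(bigint_t *n)
    {
        unsigned long long rem = 0;
        for (int i = 3; i >= 0; i--) {
            unsigned long long cur = (rem << 32) | n->part[i];
            n->part[i] = (unsigned int)(cur / 1000000000);
            rem = cur % 1000000000;
        }
        return (unsigned int)rem;
    }

    /* buf must hold at least 40 bytes: 2^128 - 1 has 39 digits. */
    void bigint_to_decimal(bigint_t n, char *buf)
    {
        unsigned int chunk[5]; /* ceil(39 / 9) = 5 chunks */
        int k = 0, pos;
        do {
            chunk[k++] = bigint_divmod1e9(&n);
        } while (n.part[0] | n.part[1] | n.part[2] | n.part[3]);

        /* Leading chunk unpadded, the rest zero-padded to 9 digits. */
        pos = sprintf(buf, "%u", chunk[--k]);
        while (k > 0)
            pos += sprintf(buf + pos, "%09u", chunk[--k]);
    }

This cuts the number of multi-precision divisions from as many as 39 down to at most 5.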

128 bit integer on cuda?

Submitted by 魔方 西西 on 2019-11-27 06:30:53
Question: I just managed to install the CUDA SDK under Linux Ubuntu 10.04. My graphics card is an NVIDIA GeForce GT 425M, and I'd like to use it for some heavy computational problem. What I wonder is: is there any way to use an unsigned 128-bit int variable? When using gcc to run my program on the CPU, I was using the __uint128_t type, but using it with CUDA doesn't seem to work. Is there anything I can do to have 128-bit integers on CUDA? Thank you very much. Matteo Monti, Msoft Programming

Answer 1: For best performance, one would want to map the 128-bit type on top of a suitable CUDA vector type, such as uint4, and…
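
To make that suggestion concrete, here is a minimal sketch in plain C of what the mapping implies; uint128 and u128_add are illustrative names. The struct mirrors the layout of CUDA's uint4; in a .cu file the function would carry a __device__ qualifier, and hand-tuned code could replace the carry arithmetic with PTX add.cc.u32/addc.u32 instructions:

    /* A 128-bit value as four 32-bit limbs, least significant first,
       matching the layout of CUDA's uint4 (x = low word, w = high word). */
    typedef struct { unsigned int x, y, z, w; } uint128;

    /* 128-bit addition with explicit carry propagation. */
    uint128 u128_add(uint128 a, uint128 b)
    {
        uint128 r;
        unsigned long long s;  /* 64-bit partial sums capture the carry */
        unsigned int c;

        s = (unsigned long long)a.x + b.x;     r.x = (unsigned int)s; c = (unsigned int)(s >> 32);
        s = (unsigned long long)a.y + b.y + c; r.y = (unsigned int)s; c = (unsigned int)(s >> 32);
        s = (unsigned long long)a.z + b.z + c; r.z = (unsigned int)s; c = (unsigned int)(s >> 32);
        s = (unsigned long long)a.w + b.w + c; r.w = (unsigned int)s;
        return r;
    }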

128-bit division intrinsic in Visual C++

Submitted by 偶尔善良 on 2019-11-27 03:29:06
Question: I'm wondering if there really is no 128-bit division intrinsic function in Visual C++? There is a 64x64=128 bit multiplication intrinsic called _umul128(), which nicely matches the MUL x64 assembler instruction. Naturally, I assumed there would be a 128/64=64 bit division intrinsic as well (modelling the DIV instruction), but to my amazement neither Visual C++ nor Intel C++ seem to have it; at least it isn't listed in intrin.h. Can someone confirm that? I tried grep'ing for the…
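
In the absence of the intrinsic, the 128/64 division can be written portably, if slowly. Below is a sketch (udiv128_64 is an illustrative name) using bit-at-a-time restoring division, with the same precondition as the hardware DIV instruction. For the record, much later Visual C++ releases (Visual Studio 2019 and newer) did add a _udiv128 intrinsic, but that postdates this question.

    #include <stdint.h>

    /* Portable stand-in for the missing 128/64 intrinsic: divide the
       128-bit value (hi:lo) by d, returning the quotient and storing the
       remainder. Requires d != 0 and hi < d, just like the x64 DIV
       instruction (otherwise the quotient would overflow 64 bits). */
    uint64_t udiv128_64(uint64_t hi, uint64_t lo, uint64_t d, uint64_t *rem)
    {
        uint64_t q = 0, r = hi;
        for (int i = 63; i >= 0; i--) {
            /* Capture the bit that falls off the top of r before it is
               lost, then shift in the next dividend bit. */
            unsigned carry = (unsigned)(r >> 63);
            r = (r << 1) | ((lo >> i) & 1);
            if (carry || r >= d) {
                r -= d;
                q |= 1ULL << i;
            }
        }
        *rem = r;
        return q;
    }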

How can I add and subtract 128 bit integers in C or C++ if my compiler does not support them?

Submitted by 左心房为你撑大大i on 2019-11-26 22:23:00
Question: I'm writing a compressor for a long stream of 128-bit numbers. I would like to store the numbers as differences, keeping only the difference between consecutive numbers rather than the numbers themselves, because the differences are smaller and pack into fewer bytes. However, for compression I then need to subtract these 128-bit values, and for decompression I need to add them. The maximum integer size for my compiler is 64 bits wide. Anyone have any ideas for doing this efficiently?

Answer 1: If all you need is addition and subtraction, and you already have your 128-bit values in binary form,…
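
The textbook approach, sketched below under the assumption that each value lives as two 64-bit halves (the u128 type and function names are illustrative): add the low halves, then fold the carry into the high halves; subtraction borrows symmetrically.

    #include <stdint.h>

    /* A 128-bit value as two 64-bit halves. */
    typedef struct { uint64_t lo, hi; } u128;

    /* a + b: the low sum wrapped around exactly when a carry occurred. */
    u128 u128_add(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo); /* carry in */
        return r;
    }

    /* a - b: a borrow occurred exactly when a.lo < b.lo. */
    u128 u128_sub(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo - b.lo;
        r.hi = a.hi - b.hi - (a.lo < b.lo); /* borrow out */
        return r;
    }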

Is there any way to do 128-bit shifts on gcc <4.4?

Submitted by 爱⌒轻易说出口 on 2019-11-26 21:37:43
Question: gcc 4.4 seems to be the first version in which int128_t was added. I need to use bit shifting and I have run out of room in some bit fields. Edit: It might be because I'm on a 32-bit computer; there's no way to have it on a 32-bit computer (Intel Atom), is there? I wouldn't care if it generated tricky, slow machine code, as long as it worked as expected with bit shifting.

Answer 1: I'm pretty sure that __int128_t is available on earlier versions of gcc. Just checked on 4.2.1 and FreeBSD, and sizeof(__int128_t)…
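
If __int128_t turns out to be unavailable on the 32-bit target, the shifts can be composed from two 64-bit halves. A sketch (u128 and u128_shl are illustrative names); the right shift is the mirror image:

    #include <stdint.h>

    /* A 128-bit value as two 64-bit halves, low half first. */
    typedef struct { uint64_t lo, hi; } u128;

    /* Left shift by n, 0 <= n < 128. The n == 0 and n >= 64 cases are
       handled separately because shifting a 64-bit value by 64 or more
       is undefined behaviour in C. */
    u128 u128_shl(u128 a, unsigned n)
    {
        u128 r;
        if (n == 0) {
            r = a;
        } else if (n < 64) {
            r.hi = (a.hi << n) | (a.lo >> (64 - n));
            r.lo = a.lo << n;
        } else {
            r.hi = a.lo << (n - 64);
            r.lo = 0;
        }
        return r;
    }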

Does gcc support 128-bit int on amd64? [duplicate]

Submitted by 一笑奈何 on 2019-11-26 21:28:35
Question: This question already has an answer here: Is there a 128 bit integer in gcc? (3 answers) Does gcc support 128-bit int on amd64? How do I define it? How do I use scanf/printf to read and write it?

Answer 1: GCC supports built-in __int128 and unsigned __int128 types (on 64-bit platforms only), but it looks like formatting support for 128-bit integers is less common in libc. Note: <stdint.h> defines __int128_t and __uint128_t on versions before gcc 4.6. See also Is there a 128 bit integer in gcc? for a table of…
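
Since libc formatting support is the missing piece, printing is usually done by splitting the value into decimal chunks that fit in 64 bits. A sketch, guarded by gcc's __SIZEOF_INT128__ detection macro (print_u128 is an illustrative name):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #ifdef __SIZEOF_INT128__
    /* printf has no conversion specifier for __int128, so print it in
       chunks of 19 decimal digits, each of which fits in a uint64_t.
       10^19 is the largest power of ten below 2^64. */
    static void print_u128(unsigned __int128 v)
    {
        const uint64_t TEN19 = UINT64_C(10000000000000000000);
        if (v > UINT64_MAX) {
            print_u128(v / TEN19);                        /* higher digits */
            printf("%019" PRIu64, (uint64_t)(v % TEN19)); /* padded chunk */
        } else {
            printf("%" PRIu64, (uint64_t)v);              /* leading chunk */
        }
    }
    #endif

scanf is similarly unsupported; reading is typically done by parsing the digits yourself and accumulating with v = v * 10 + digit.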

Is there a 128 bit integer in gcc?

Submitted by 喜夏-厌秋 on 2019-11-26 17:58:40
Question: I want a 128-bit integer because I want to store the result of multiplying two 64-bit numbers. Is there any such thing in gcc 4.4 and above?

Answer 1: A 128-bit integer type is only ever available on 64-bit targets, so you need to check for availability even if you have already detected a recent GCC version. In theory gcc could support TImode integers on machines where it would take 4x 32-bit registers to hold one, but I don't think there are any cases where it does. GCC 4.6 and later has __int128 / unsigned __int128 defined as a built-in type. Use #ifdef __SIZEOF_INT128__ to detect it. GCC 4.1…
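
Putting the detection advice together with the asker's use case, a minimal sketch (mul64x64 is an illustrative name) of a guarded 64x64 -> 128 multiply:

    #include <stdint.h>

    #ifdef __SIZEOF_INT128__
    /* Full 64x64 -> 128-bit product; on x86-64, gcc compiles the
       widening multiply to a single MUL instruction. */
    static inline void mul64x64(uint64_t a, uint64_t b,
                                uint64_t *hi, uint64_t *lo)
    {
        unsigned __int128 p = (unsigned __int128)a * b;
        *hi = (uint64_t)(p >> 64);
        *lo = (uint64_t)p;
    }
    #else
    #error "no 128-bit integer type on this target"
    #endif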
