128-bit

128-bit integer with C on Windows?

拟墨画扇 submitted on 2019-12-03 13:53:50
Is there any C compiler on Windows able to use 128-bit integers natively? For example, on Linux you can use GCC with __uint128_t... any other option on Windows? (It would be great if 128-bit worked on 32-bit computers as well! :D)

Matteo Kerrek SB

In GCC you can try __attribute__((mode(...))), e.g.

    typedef unsigned int myU128 __attribute__((mode(TI)));

The results depend on your platform, though. You could also try the SSE intrinsics built into Visual C++ (look at the __m128 type): http://msdn.microsoft.com/en-us/library/5eawz414%28v=VS.80%29.aspx

Source: https://stackoverflow
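
As a quick illustration (my sketch, not from the thread), the mode(TI) typedef in use under GCC on a 64-bit target:

    #include <stdio.h>

    /* GCC-only: mode(TI) requests a 128-bit ("tetra-integer") type;
       it is accepted only on targets where GCC supports TImode. */
    typedef unsigned int myU128 __attribute__((mode(TI)));

    int main(void) {
        myU128 x = (myU128)1 << 100;   /* a value that needs more than 64 bits */
        /* printf has no 128-bit conversion, so print the two halves */
        printf("high=%llu low=%llu\n",
               (unsigned long long)(x >> 64),
               (unsigned long long)(x & 0xFFFFFFFFFFFFFFFFULL));
        return 0;
    }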

Just want to decode the encoded text into plain text

怎甘沉沦 submitted on 2019-12-02 20:53:16
Question: We would like to know more about the functions get_rnd_iv() and md5_encrypt(); these functions are used for 128-bit encryption. Now we just want to know how to decode that output back into plain text... Here are all my code lines:

    function get_rnd_iv($iv_len)
    {
        $iv = '';
        while ($iv_len-- > 0) {
            $iv .= chr(mt_rand() & 0xff);
        }
        return $iv;
    }

    function md5_encrypt($plain_text, $password, $iv_len = 16)
    {
        $plain_text .= "\x13";
        $n = strlen($plain_text);
        if ($n % 16)
            $plain_text .= str_repeat("\0", 16 - ($n % 16))

How to use GCC 4.6.0 libquadmath and __float128 on x86 and x86_64

☆樱花仙子☆ submitted on 2019-12-02 19:33:17
I have a medium-sized C99 program which uses the long double type (80-bit) for floating-point computation. I want to improve precision with the new GCC 4.6 extension __float128. As I understand it, this is software-emulated 128-bit precision math. How should I convert my program from the classic 80-bit long double to 128-bit quad floats with software emulation of full precision? What do I need to change: compiler flags, sources? My program reads full-precision values with strtod, does a lot of different operations on them (like + - * /, sin, cos, exp and others from <math.h>) and printf-s them. PS:
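
A minimal sketch of what the converted code might look like, assuming GCC 4.6+ with libquadmath installed; strtoflt128, quadmath_snprintf, and the q-suffixed math functions are the real libquadmath replacements for strtod, printf, and <math.h>:

    #include <stdio.h>
    #include <quadmath.h>   /* __float128 support library shipped with GCC */

    int main(void) {
        /* strtod -> strtoflt128 for full-precision input */
        __float128 x = strtoflt128("1.12345678901234567890123456789012345", NULL);

        /* <math.h> calls gain a 'q' suffix: sin -> sinq, exp -> expq, ... */
        __float128 y = sinq(x) + expq(x);

        /* printf cannot format __float128; use quadmath_snprintf instead */
        char buf[128];
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", y);
        printf("%s\n", buf);   /* build with: gcc -std=gnu99 prog.c -lquadmath */
        return 0;
    }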

Cannot use 128-bit float in Python on 64-bit architecture

对着背影说爱祢 submitted on 2019-12-01 03:18:55
I checked the size of a pointer in my Python terminal (in the Enthought Canopy IDE) via

    import ctypes
    print(ctypes.sizeof(ctypes.c_voidp) * 8)

I have a 64-bit architecture, and working with numpy.float64 is just fine. But I cannot use np.float128? Either np.array([1,1,1], dtype=np.float128) or np.float128(1) results in:

    AttributeError: 'module' object has no attribute 'float128'

I'm running the following version:

    sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)

Update: From the comments, it seems pointless to even have a 128-bit float on a 64-bit system. I am using anaconda on a
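
As an aside (my sketch, not from the thread): where numpy does expose float128, it typically just wraps the platform's long double, which on x86 is usually 80-bit extended precision padded to 16 bytes rather than true IEEE quad. A small C check of what the local long double actually provides:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
        /* 64 mantissa bits on x87 extended; 53 if long double == double */
        printf("mantissa bits       = %d\n", LDBL_MANT_DIG);
        return 0;
    }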

128-bit shifts using assembly language?

守給你的承諾、 submitted on 2019-11-30 03:25:29
Question: What is the most efficient way to do a 128-bit shift on a modern Intel CPU (Core i7, Sandy Bridge)? Similar code is in my innermost loop:

    u128 a[N];

    void xor() {
        for (int i = 0; i < N; ++i) {
            a[i] = a[i] ^ (a[i] >> 1) ^ (a[i] >> 2);
        }
    }

The data in a[N] is almost random.

Answer 1: Use the Shift Double instructions, SHLD or SHRD, because SSE isn't intended for this purpose. There is a classic method; here you have test cases for a 128-bit left shift by 16 bits under 32- and 64-bit CPU
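
A sketch of the same idea in portable C (my code, assuming a hypothetical two-word layout for the 128-bit value); on x86-64, compilers commonly compile exactly this pattern to an SHRD instruction:

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128x;   /* hypothetical layout */

    /* Logical right shift by n, for 1 <= n <= 63: the low half picks up
       the bits that fall out of the high half. */
    static inline u128x shr128(u128x x, unsigned n) {
        u128x r;
        r.lo = (x.lo >> n) | (x.hi << (64 - n));
        r.hi =  x.hi >> n;
        return r;
    }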

Fastest way to convert binary to decimal?

狂风中的少年 submitted on 2019-11-29 02:19:06
I've got four unsigned 32-bit integers representing an unsigned 128-bit integer, in little-endian order:

    typedef struct {
        unsigned int part[4];
    } bigint_t;

I'd like to convert this number into its decimal string representation and output it to a file. Right now, I'm using a bigint_divmod10 function to divide the number by 10, keeping track of the remainder. I call this function repeatedly, outputting the remainder as a digit, until the number is zero. It's pretty slow. Is this the fastest way to do it? If so, is there a clever way to implement this function that I'm not seeing? I've tried
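
The question doesn't show bigint_divmod10 itself; here is a minimal schoolbook sketch over the struct above (my code, not the asker's). Dividing by 1000000000 instead of 10 would peel off nine digits per pass and cut the number of division passes roughly ninefold:

    #include <stdint.h>

    /* Divide *n by 10 in place and return the remainder, processing the
       little-endian 32-bit limbs from most to least significant. */
    unsigned bigint_divmod10(bigint_t *n) {
        uint64_t rem = 0;
        for (int i = 3; i >= 0; --i) {
            uint64_t cur = (rem << 32) | n->part[i];
            n->part[i] = (unsigned)(cur / 10);
            rem = cur % 10;
        }
        return (unsigned)rem;
    }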

128-bit division intrinsic in Visual C++

拈花ヽ惹草 submitted on 2019-11-28 10:04:20
I'm wondering if there really is no 128-bit division intrinsic in Visual C++? There is a 64x64=128-bit multiplication intrinsic called _umul128(), which nicely matches the MUL x64 assembler instruction. Naturally, I assumed there would be a 128/64=64-bit division intrinsic as well (modelling the DIV instruction), but to my amazement neither Visual C++ nor Intel C++ seems to have it; at least it's not listed in intrin.h. Can someone confirm that? I tried grep'ing for the function names in the compiler executable files, but couldn't find _umul128 in the first place, so I guess
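
For comparison (my sketch, not from the thread): on GCC or Clang the same 128/64 division can be expressed through the built-in unsigned __int128 type, which the compiler lowers to a DIV or a runtime helper as appropriate:

    #include <stdint.h>

    /* Divide the 128-bit value hi:lo by d; return the quotient's low
       64 bits and store the remainder. Assumes the quotient fits in
       64 bits, as the hardware DIV instruction does. */
    uint64_t udiv128by64(uint64_t hi, uint64_t lo, uint64_t d, uint64_t *rem) {
        unsigned __int128 n = ((unsigned __int128)hi << 64) | lo;
        *rem = (uint64_t)(n % d);
        return (uint64_t)(n / d);
    }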

Is there any way to do 128-bit shifts on gcc <4.4?

北城余情 submitted on 2019-11-27 23:59:34
gcc 4.4 seems to be the first version where they added int128_t. I need to use bit shifting and I have run out of room for some bit fields. Edit: It might be because I'm on a 32-bit computer; there's no way to have it on a 32-bit computer (Intel Atom), is there? I wouldn't care if it generated tricky, slow machine code as long as it worked as expected with bit shifting.

janm: I'm pretty sure that __int128_t is available on earlier versions of gcc. Just checked on 4.2.1 and FreeBSD, and sizeof(__int128_t) gives 16. You could also use a library. This would have the advantage that it is portable
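
Since __int128 is generally unavailable on 32-bit targets, a library-free fallback is to emulate the shift by hand. A sketch (my code, assuming the value is stored as four little-endian 32-bit words):

    #include <stdint.h>

    /* Left-shift a 128-bit value stored as four little-endian 32-bit
       words by n bits, 0 <= n <= 127. */
    void shl128(uint32_t w[4], unsigned n) {
        unsigned word = n / 32, bit = n % 32;
        for (int i = 3; i >= 0; --i) {
            uint32_t v = (i >= (int)word) ? w[i - word] << bit : 0;
            if (bit != 0 && i > (int)word)
                v |= w[i - word - 1] >> (32 - bit);   /* carry from below */
            w[i] = v;
        }
    }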

Does gcc support 128-bit int on amd64? [duplicate]

て烟熏妆下的殇ゞ submitted on 2019-11-27 23:29:10
This question already has an answer here: Is there a 128 bit integer in gcc? (3 answers)

Does gcc support 128-bit int on amd64? How to define it? How to use scanf/printf to read/write it?

rkhayrov: GCC supports the built-in __int128 and unsigned __int128 types (on 64-bit platforms only), but formatting support for 128-bit integers is less common in libc. Note: on versions before GCC 4.6, the types are available under the names __int128_t and __uint128_t. See also Is there a 128 bit integer in gcc? for a table of gcc/clang/ICC versions, and How to know if __uint128_t is defined for detecting __int128: void f(_
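
A sketch of defining and printing the type (mine, not from the answer); __SIZEOF_INT128__ is the usual feature-test macro, and since common libcs have no 128-bit printf length modifier, the value is printed as two 64-bit halves:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #ifdef __SIZEOF_INT128__   /* GCC/Clang define this when __int128 exists */
    static void print_u128_hex(unsigned __int128 v) {
        printf("0x%016" PRIx64 "%016" PRIx64 "\n",
               (uint64_t)(v >> 64), (uint64_t)v);
    }

    int main(void) {
        unsigned __int128 v = ((unsigned __int128)1 << 100) + 42;
        print_u128_hex(v);
        return 0;
    }
    #endif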