arbitrary-precision

Which library should I use on OSX for arbitrary precision arithmetic?

Submitted by 江枫思渺然 on 2019-11-28 13:04:40
Question: I have already tried GMP and MPFR, but I cannot get even a simple division like the one below to work correctly. For what it's worth, I am using the LLVM compiler in Xcode, compiling and running against the iOS Simulator.

```c
mpf_t a;
mpf_init2(a, 256);
mpf_set_d(a, 0.7);

mpf_t b;
mpf_init2(b, 256);
mpf_set_d(b, 1.0);

mpf_t l;
mpf_init2(l, 256);

gmp_printf("%.*Ff \n", 5, a);  /* prints 0.70000 */
gmp_printf("%.*Ff \n", 5, b);  /* prints 1.00000 */

mpf_div(l, a, b);
gmp_printf("%.*Ff", 5, l);     /* prints 0.52502, expected 0.70000 */
```

Answer 1: Have you tried MPIR? OpenSSL also provides a big-number library.
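For comparison, the intended computation can be sketched with Python's standard decimal module (an illustration only, not a fix for the GMP build problem above; note that mpf_set_d hands GMP the binary double nearest to 0.7, whereas a string literal stays exactly 0.7):

```python
from decimal import Decimal, getcontext

# Work at roughly the precision of a 256-bit mpf (~77 decimal digits).
getcontext().prec = 77

a = Decimal("0.7")   # exact, unlike the double passed to mpf_set_d
b = Decimal("1.0")
l = a / b

print(l)  # 0.7
```

With GMP itself, initializing from a string via mpf_set_str avoids the intermediate double, though it does not explain the corrupted quotient, which may point to a broken or mismatched library binary for the simulator.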

arbitrary precision addition using lists of digits

Submitted by 主宰稳场 on 2019-11-28 12:46:52
Question: What I'm trying to do is take two lists of digits and add them together as if each list were a whole number.

```scheme
(define (reverse lst)
  (if (null? lst)
      '()
      (append (reverse (cdr lst)) (list (car lst)))))

(define (apa-add l1 l2)
  (define (apa-add-help l1 l2)
    (cond ((and (null? l1) (null? l2)) '())
          ((null? l1) (list (+ (apa-add-help '() (cdr l2)))))
          ((null? l2) (list (+ (apa-add-help (cdr l1) '()))))
          ((>= (+ (car l1) (car l2)) 10)
           (append (apa-add-help (cdr l1) (cdr l2))
                   (list (quotient (+ (car l1) (car l2)) 10))
;; snippet ends here in the source
```
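The carry-propagation idea is easier to see in a short Python sketch (hypothetical helper name, not from the question; digits are stored least significant first, so no reverse is needed):

```python
def add_digit_lists(a, b):
    """Add two numbers given as digit lists, least significant digit first."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0   # treat a missing digit as 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)  # carry is 0 or 1
        result.append(digit)
    if carry:
        result.append(carry)             # a final carry adds a new digit
    return result

# 479 + 96 = 575; digits are stored least significant first.
print(add_digit_lists([9, 7, 4], [6, 9]))  # [5, 7, 5]
```

The crucial point the cond clauses above are circling around: the carry out of one digit position is an input to the next position, so the recursion has to thread it through rather than patch it on afterwards.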

x86-64 Big Integer Representation?

Submitted by 北战南征 on 2019-11-28 12:45:41
How do high-performance native big-integer libraries on x86-64 represent a big integer in memory? (Or does it vary? Is there a most common way?)

Naively, I was thinking about storing them as 0-terminated strings of digits in base 2^64. For example, suppose X is laid out in memory as:

[8 bytes] Dn
...
[8 bytes] D2
[8 bytes] D1
[8 bytes] D0
[8 bytes] 0

Let B = 2^64. Then:

X = Dn * B^n + ... + D2 * B^2 + D1 * B^1 + D0

The empty string (i.e., 8 bytes of zero) means zero. Is this a reasonable way? What are the pros and cons of this approach? Is there a better way? How would you handle signedness? Does 2's …
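The limb decomposition described above can be prototyped in Python, whose ints are already arbitrary precision (illustrative helper names; an explicit length is used instead of a 0-terminator, which is what real libraries such as GMP do):

```python
def to_limbs(x, bits=64):
    """Split a non-negative int into little-endian limbs D0, D1, ... base 2**bits."""
    mask = (1 << bits) - 1
    limbs = []
    while x:
        limbs.append(x & mask)
        x >>= bits
    return limbs  # an empty list represents zero

def from_limbs(limbs, bits=64):
    """Recombine: X = Dn*B**n + ... + D1*B + D0 with B = 2**bits."""
    x = 0
    for d in reversed(limbs):
        x = (x << bits) | d
    return x

n = 2**200 + 12345
assert from_limbs(to_limbs(n)) == n
print(len(to_limbs(n)))  # 4 -- a 201-bit number needs four 64-bit limbs
```

One concrete con of the 0-terminator scheme: interior limbs can legitimately be zero (e.g. X = 2**128 has D0 = D1 = 0), so scanning for an 8-byte zero would stop early; storing an explicit limb count avoids the ambiguity.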

Translation from Complex-FFT to Finite-Field-FFT

Submitted by 泪湿孤枕 on 2019-11-28 12:43:37
Good afternoon! I am trying to develop an NTT (Number Theoretic Transform) algorithm based on the naive recursive FFT implementation I already have. Consider the following code (the length of coefficients, call it m, is an exact power of two):

```csharp
/// <summary>
/// Calculates the result of the recursive Number Theoretic Transform.
/// </summary>
/// <param name="coefficients"></param>
/// <returns></returns>
private static BigInteger[] Recursive_NTT_Skeleton(
    IList<BigInteger> coefficients,
    IList<BigInteger> rootsOfUnity,
    int step,
    int offset)
{
    // Calculate the length of vectors at the current step of recursion.
    int n =
```
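The complex-to-finite-field translation itself is small: replace the complex root of unity with a primitive n-th root of unity modulo a prime p with n dividing p - 1, and replace the final 1/n scaling with a modular inverse. A minimal recursive sketch in Python (p = 17 and root = 4 are illustrative parameters, not from the question; pow(x, -1, p) needs Python 3.8+):

```python
def ntt(a, root, p):
    """Recursive Cooley-Tukey NTT mod p; len(a) is a power of two and
    root is a primitive len(a)-th root of unity mod p."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], root * root % p, p)
    odd = ntt(a[1::2], root * root % p, p)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p            # same butterfly as the FFT
        out[k + n // 2] = (even[k] - t) % p   # subtraction wraps mod p
        w = w * root % p
    return out

def intt(a, root, p):
    """Inverse NTT: forward pass with root**-1, then scale by n**-1 mod p."""
    n = len(a)
    res = ntt(a, pow(root, -1, p), p)
    inv_n = pow(n, -1, p)
    return [x * inv_n % p for x in res]

p, root = 17, 4                 # 4**2 = 16 = -1 (mod 17), so 4 has order 4
data = [1, 2, 3, 4]
assert intt(ntt(data, root, p), root, p) == data
print(ntt(data, root, p))  # [10, 7, 15, 6]
```

The structure is identical to the complex recursion; only the arithmetic domain and the inverse scaling change.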

What's the best (for speed) arbitrary-precision library for C++? [duplicate]

Submitted by 好久不见. on 2019-11-28 11:13:52
Question: This question already has an answer here: "The best cross platform (portable) arbitrary precision math library" [closed] (5 answers). I need the fastest library available for C++. My platform will be x86 and x86-64, with floating-point support.

Answer: GMPLIB. GMP is a free library for arbitrary-precision arithmetic, operating on signed … with a C++ class-based interface to all of the above.

Source: https://stackoverflow.com/questions/4486882/whats-the-best-for-speed-arbitrary-precision-library-for-c

What class to use for money representation?

Submitted by China☆狼群 on 2019-11-27 23:29:17
What class should I use to represent money so as to avoid most rounding errors? Should I use Decimal, or a simple built-in number? Is there an existing Money class with support for currency conversion that I could use? Any pitfalls I should avoid?

Answer: I assume you are talking about Python. See http://code.google.com/p/python-money/ ("Primitives for working with money and currencies in Python"); the title is self-explanatory :)

Never use a floating-point number to represent money: floating-point numbers cannot represent most decimal fractions exactly. You would end up with a nightmare of …
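The float pitfall and the Decimal fix can be shown in a few lines (the rounding mode is an application choice, shown here only as an example):

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent 0.10 or 0.20 exactly:
print(0.10 + 0.20)                        # 0.30000000000000004

# Decimal stores exact decimal values; always construct from strings,
# never from floats (Decimal(0.1) would inherit the binary error).
price = Decimal("0.10") + Decimal("0.20")
print(price)                              # 0.30

# Round to whole cents explicitly at the boundaries of your system.
total = (Decimal("2.675") * 2).quantize(Decimal("0.01"),
                                        rounding=ROUND_HALF_UP)
print(total)                              # 5.35
```

Keeping amounts as Decimal (or as integer cents) throughout, and quantizing only at well-defined points, avoids the accumulating-error nightmare the answer warns about.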

Python and “arbitrary precision integers”

Submitted by 徘徊边缘 on 2019-11-27 22:55:20
Question: Python is supposed to have "arbitrary precision integers," according to the answer to "Python integer ranges". But this result is plainly not arbitrary precision:

```shell
$ python -c 'print("%d" % (999999999999999999999999/3))'
333333333333333327740928
```

According to PEP 237, a bignum is arbitrarily large (not just the size of C's long type), and Wikipedia says Python's bignum is arbitrary precision. So why the incorrect result from the line of code above?

Answer: In Python 3, dividing two ints with / always produces a float, so the operand is rounded to double precision before the division. There is a // operator that does exact integer division:

```python
>>> 999999999999999999999999 // 3
333333333333333333333333
```
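The distinction can be checked directly (a short illustration of the answer above):

```python
n = 999999999999999999999999

# True division (/) converts to float, which has only 53 bits of mantissa,
# so digits beyond ~16 significant decimals are lost.
approx = n / 3
print(type(approx).__name__)  # float

# Floor division (//) stays in arbitrary-precision integers.
exact = n // 3
print(exact)                  # 333333333333333333333333
assert exact * 3 == n         # exact: n is divisible by 3
```

So the integers are arbitrary precision; it is the / operator that leaves the integer domain.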

Can long integer routines benefit from SSE?

Submitted by 有些话、适合烂在心里 on 2019-11-27 14:21:09
I'm still working on routines for arbitrarily long integers in C++. So far, I have implemented addition/subtraction and multiplication for 64-bit Intel CPUs. Everything works fine, but I wondered whether I could speed it up a bit by using SSE. I browsed through the SSE docs and processor instruction lists, but I could not find anything I think I can use, and here is why: SSE has some integer instructions, but most instructions handle floating point; it doesn't look like it was designed for use with integers (e.g., is there an integer compare for less-than?). The SSE idea is SIMD (same instruction, multiple data), …
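The sticking point is that the carry out of each 64-bit limb feeds the next one, a serial chain that does not split across SIMD lanes. A small Python model of limb-wise addition makes the dependency visible (hypothetical helper, not the question's C++ code):

```python
def add_limbs(a, b, bits=64):
    """Schoolbook multi-limb addition; limbs are little-endian, base 2**bits."""
    mask = (1 << bits) - 1
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        x = ((a[i] if i < len(a) else 0)
             + (b[i] if i < len(b) else 0)
             + carry)                    # carry from limb i-1 is needed here...
        out.append(x & mask)
        carry = x >> bits                # ...and produced for limb i+1
    if carry:
        out.append(carry)
    return out

# (2**64 - 1) + 1 carries across the limb boundary:
print(add_limbs([2**64 - 1], [1]))  # [0, 1]
```

On x86-64 this chain is exactly what the scalar add/adc pair implements, which is why adc loops (and the ADX extensions on newer CPUs) usually beat SSE for big-integer addition.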

numpy arbitrary precision linear algebra

Submitted by 风格不统一 on 2019-11-27 05:15:58
I have a numpy 2D array (medium/large sized, say 500x500). I want to find the eigenvalues of its element-wise exponential. The problem is that some of the values are quite negative (-800, -1000, etc.), and their exponentials underflow (they are so close to zero that numpy treats them as zero). Is there any way to use arbitrary precision in numpy? The way I dream it:

```python
import numpy as np
np.set_precision('arbitrary')  # <--- missing part
a = np.array([[-800.21, -600.00], [-600.00, -1000.48]])
ex = np.exp(a)  # currently warns about underflow
eigvals, eigvecs = np.linalg.eig(ex)
```

I have …
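numpy has no arbitrary-precision dtype, but for this particular underflow a common workaround (an assumption, not from the thread) is to factor a scalar out of the matrix: exp(a_ij) = e**c * exp(a_ij - c) for any c, and multiplying a matrix by the scalar e**c multiplies every eigenvalue by e**c. A pure-Python 2x2 sketch, using the closed-form 2x2 eigenvalue formula so the example needs no numpy:

```python
import math

a = [[-800.21, -600.00], [-600.00, -1000.48]]

print(math.exp(-800.21))   # 0.0 -- underflows in double precision

# Shift every entry by c = max(a); exp(a_ij - c) is then representable,
# and each true eigenvalue equals (shifted eigenvalue) * e**c.
c = max(max(row) for row in a)                       # c = -600.0
m = [[math.exp(v - c) for v in row] for row in a]    # no underflow now

# Eigenvalues of the shifted 2x2 matrix from trace and determinant.
tr = m[0][0] + m[1][1]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# Report eigenvalues in log-scaled form (value, shift): lambda = value * e**c.
print((lam1, c), (lam2, c))
```

The true eigenvalues themselves would still underflow as plain doubles, so they are kept in (value, shift) form. For genuine arbitrary precision, the mpmath library offers multiprecision eigenvalue routines, though at 500x500 they will be far slower than LAPACK.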