biginteger

JavaScript summing large integers

旧时模样 submitted on 2019-12-27 11:05:14
Question: In JavaScript I would like to create a binary hash of a large boolean array (54 elements) with the following method: function bhash(arr) { for (var i = 0, L = arr.length, sum = 0; i < L; sum += Math.pow(2, i) * arr[i++]); return sum; } In short: it packs an array of booleans into the smallest integer that can hold them. Now my problem is that JavaScript apparently uses double-precision floats by default. The largest number I have to create is 2^54 - 1, but once JavaScript reaches 2^53 it starts doing weird things:
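A minimal sketch of a fix, assuming ES2020 BigInt is available: BigInt arithmetic is exact at any width, so all 54 bits survive where a double would start rounding past 2^53 (the function name bhashBig is hypothetical, not from the question):

```javascript
// Sketch: same packing as the question's bhash, but with exact BigInt
// arithmetic instead of lossy double-precision floats.
function bhashBig(arr) {
  let sum = 0n;
  for (let i = 0; i < arr.length; i++) {
    if (arr[i]) sum |= 1n << BigInt(i); // set bit i iff arr[i] is truthy
  }
  return sum;
}

// 54 true entries produce 2n ** 54n - 1n, which a double cannot hold exactly.
const packed = bhashBig(new Array(54).fill(true));
console.log(packed === 2n ** 54n - 1n); // true
```

The trade-off is that a BigInt is not a plain number, so callers that expect a float-compatible hash need a `toString()` at the boundary.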

Is it possible to get a natural log of a big-integer instance?

蓝咒 submitted on 2019-12-25 16:54:28
Question: I am using big-integer for JavaScript: var bigInt = require('big-integer'). I have a bigInt instance: var ratherLargeNumber = bigInt(2).pow(2048). Can I get the (natural) log of it? Answer 1: Say you have a big integer x = 5384932048329483948394829348923849. If you convert x to a decimal string and count the digits, you can represent x as 0.5384932048329483948394829348923849 × 10^34. You want to take the natural logarithm of x. Observe the following: log_e(x) = 34 log_e(10) + log_e(0.5384932048329483948394829348923849)
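The digit-count trick above can be written out in a few lines. This sketch uses native BigInt rather than the big-integer package (with the package you would call ratherLargeNumber.toString() the same way); lnBig and its details are assumptions for illustration, not the library's API:

```javascript
// Sketch: write x as 0.ddd… × 10^k, where k is the decimal digit count,
// then ln(x) = k·ln(10) + ln(mantissa). lnBig is a hypothetical name.
function lnBig(x) {
  const s = x.toString();            // decimal digits of x
  const k = s.length;                // x = 0.<s> × 10^k
  const mantissa = Number('0.' + s); // in (0.1, 1), fits easily in a double
  return k * Math.log(10) + Math.log(mantissa);
}

// ln(2^2048) should come out as 2048 · ln(2) ≈ 1419.565
console.log(lnBig(2n ** 2048n));
```

The mantissa is only double precision, but since it enters through a logarithm the absolute error in the result stays near machine epsilon.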

Determining if a BigInteger is Prime in Java

旧时模样 submitted on 2019-12-25 07:21:15
Question: I am trying hands-on validation of whether an entered BigInteger is a prime number or not. It runs fine for smaller numbers like 13 and 31, but it fails in the case of 15, declaring it prime. I am unable to figure out the mistake; it is probably hidden in the squareroot() method, which uses a binary-search approach. Please view the code and help me spot the mistake. Calling code: boolean p=prime(BigInteger.valueOf(15)); System.out.println("P="+p); Called code
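The asker's full code is cut off, but the usual culprit in this pattern is a square root that over- or under-shoots the trial-division bound. A sketch that sidesteps the hand-rolled binary search entirely by using BigInteger.sqrt() (Java 9+); the class and method names here are hypothetical:

```java
import java.math.BigInteger;

public class PrimeCheck {
    // Trial division up to floor(sqrt(n)); BigInteger.sqrt() (Java 9+)
    // replaces the question's hand-rolled binary-search square root.
    static boolean isPrime(BigInteger n) {
        if (n.compareTo(BigInteger.TWO) < 0) return false;
        BigInteger limit = n.sqrt();
        for (BigInteger i = BigInteger.TWO; i.compareTo(limit) <= 0; i = i.add(BigInteger.ONE)) {
            if (n.mod(i).signum() == 0) return false; // found a divisor
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(BigInteger.valueOf(15))); // false
        System.out.println(isPrime(BigInteger.valueOf(31))); // true
    }
}
```

For numbers of real BigInteger size, trial division is impractical; BigInteger.isProbablePrime(certainty) is the standard-library tool for that case.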

BigInteger.ToString() returns more than 50 decimal digits

本秂侑毒 submitted on 2019-12-24 16:22:49
Question: I'm using the .NET 4 System.Numerics.BigInteger structure and I'm getting results that differ from the documentation. The documentation of the BigInteger.ToString() method says: "The ToString() method supports 50 decimal digits of precision. That is, if the BigInteger value has more than 50 digits, only the 50 most significant digits are preserved in the output string; all other digits are replaced with zeros." I have some code that takes a 60-decimal-digit BigInteger and converts it to a string.

Conversion of a binary representation stored in a list of integers (little endian) into a Biginteger

不羁的心 submitted on 2019-12-24 12:15:04
Question: I have a list of integers, say L, which contains the binary representation of a number. Each integer in the list L can be 0 or 1. The "least significant bit" is on the left (not on the right). Example: 1000001111 for the (decimal) number 961, or 0111010001 for 558. I want to convert the list into a BigInteger. I have tried the following so far: Dim bytes(L.Count - 1) As Byte For i As Integer = 0 To L.Count - 1 bytes(i) = CByte(L(i)) Next Dim Value As New BigInteger(bytes) Return Value but the
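The likely pitfall: the BigInteger byte-array constructor treats each array element as a whole little-endian byte, while L holds individual bits, so every 1 contributes 2^(8i) instead of 2^i. A language-agnostic sketch of the intended packing, shown here in Python (the function name is hypothetical):

```python
def bits_to_int(bits):
    # bits is least-significant-first, one *bit* per element,
    # so each entry is shifted by its index, not by 8 * index
    # as the byte-array constructor would do.
    return sum(bit << i for i, bit in enumerate(bits))

# The question's two examples:
print(bits_to_int([1, 0, 0, 0, 0, 0, 1, 1, 1, 1]))  # 961
print(bits_to_int([0, 1, 1, 1, 0, 1, 0, 0, 0, 1]))  # 558
```

In the VB code, the equivalent fix is to pack eight list entries into each byte (or accumulate with Value = Value + BigInteger.Pow(2, i) * L(i)) rather than copying one bit per byte.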

How does this integer encoding work?

北慕城南 submitted on 2019-12-24 09:03:54
Question: In this code golf question, there is a Python answer that encodes the lengths of the English names of all the integers from 1 to 99 into one big number: 7886778663788677866389978897746775667552677566755267756675527886778663788677866355644553301220112001 To get the length of n, you just have to calculate 3 + (the_big_number / (10**n)) % 10. How does this work? Answer 1: (the_big_number / (10**n)) % 10 pulls out the nth least significant digit of the big number, so the lengths are just stored starting with the
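The scheme can be checked directly: digit n of the big number (counting from the least significant) stores len(name(n)) - 3, which works because every name from "one" to "ninetynine" is between 3 and 12 letters long. A quick sketch (name_length is a hypothetical helper):

```python
# The answer's lookup table: digit n (least significant first)
# stores len(english_name(n)) - 3 for n = 1..99.
BIG = 7886778663788677866389978897746775667552677566755267756675527886778663788677866355644553301220112001

def name_length(n):
    # Floor division keeps the arithmetic exact at any position,
    # which matters since BIG has 100 digits.
    return 3 + (BIG // 10 ** n) % 10

print(name_length(3))   # 5  -> "three"
print(name_length(11))  # 6  -> "eleven"
print(name_length(99))  # 10 -> "ninetynine"
```

The offset of 3 exists purely to save table space: subtracting the minimum length keeps every stored value in a single decimal digit.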

How does one subtract 1 from a BigInt in Rust?

♀尐吖头ヾ submitted on 2019-12-24 08:06:47
Question: I'd like this program to compile and print 314158 when executed: extern crate num; use num::{BigInt, FromPrimitive, One}; fn main() { let p: BigInt = FromPrimitive::from_usize(314159).unwrap(); let q: BigInt = p - One::one(); println!("q = {}", q); } // end main The compiler error is: error[E0284]: type annotations required: cannot resolve `<num::BigInt as std::ops::Sub<_>>::Output == num::BigInt` --> src/main.rs:7:23 | 7 | let q: BigInt = p - One::one(); | ^ Answer 1: Rust follows an open world
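With the num crate, the usual fix is to name the type at the call site, e.g. let q: BigInt = p - BigInt::one(); so the compiler no longer has to infer which One implementation feeds the subtraction. A dependency-free sketch of that call-site pattern, using a stand-in One trait (the trait and impl below are illustrative only, not num's definitions):

```rust
// Stand-in for num's One trait, to show the call-site annotation
// pattern without the external crate.
trait One {
    fn one() -> Self;
}

impl One for i64 {
    fn one() -> Self {
        1
    }
}

fn main() {
    let p: i64 = 314159;
    // `<i64 as One>::one()` names the implementing type explicitly;
    // `BigInt::one()` is the same shape in the real fix.
    let q = p - <i64 as One>::one();
    println!("q = {}", q); // prints "q = 314158"
}
```

Naming the type at the call site is more robust than annotating `q`, because future Sub impls on BigInt (the "open world" the answer refers to) cannot reintroduce the ambiguity.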

What base would be more appropriate for my BigInteger library?

我们两清 submitted on 2019-12-24 07:06:21
Question: I have developed my own BigInteger library in C++ for didactic purposes. Initially I used base 10, and it works fine for addition, subtraction, and multiplication, but for some algorithms, such as exponentiation, modular exponentiation, and division, base 2 appears to be more appropriate. I am thinking of restarting my project from scratch, and I would like to know which base you think is more suitable, and why. Thanks in advance! Answer 1: If you look at most BigNum-type libraries you will see that they are built on
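To make the trade-off concrete: with limbs in base 2^32 (least significant first, the layout most production bignum libraries use), carries fall out of ordinary machine arithmetic by accumulating in a 64-bit word. A sketch of addition under that representation (the function name is illustrative):

```cpp
#include <cstdint>
#include <vector>

// Limbs in base 2^32, least significant first; a 64-bit accumulator
// holds limb + limb + carry without overflow (the maximum is < 2^33).
std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                          const std::vector<uint32_t>& b) {
    std::vector<uint32_t> out;
    uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        uint64_t sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        out.push_back(static_cast<uint32_t>(sum)); // low 32 bits become the limb
        carry = sum >> 32;                         // at most 1
    }
    return out;
}
```

Base 10 makes printing trivial but wastes more than two thirds of each word; a power-of-two limb uses every bit and makes shifts, division, and modular exponentiation natural, at the cost of one base conversion when formatting output.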

Extracting bits from boost multiprecision

随声附和 submitted on 2019-12-24 01:05:52
Question: I'm using uint256_t to do arithmetic on big integers. I would like to extract the bits of a number in regular form (i.e. not floating-point form) without any loss of precision, since I'm only using integers and not floats. For example, if my code has: #include <boost/multiprecision/cpp_int.hpp> uint256_t v = 0xffffffffffffffffffffffffffffff61; then I would like to get the 32 bytes: 61 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Answer 1:
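Boost provides this directly: boost::multiprecision::export_bits(v, std::back_inserter(bytes), 8, false) fills a byte container with the false flag meaning least-significant-first. The underlying loop can be sketched without Boost, using an ordinary unsigned integer as a stand-in for uint256_t (the function name is hypothetical):

```cpp
#include <cstdint>
#include <vector>

// Extract nbytes little-endian bytes by repeated shifting; for the
// question's value the 0x61 would land in out[0], matching its dump.
template <typename UInt>
std::vector<uint8_t> to_bytes_le(UInt v, std::size_t nbytes) {
    std::vector<uint8_t> out(nbytes, 0);
    for (std::size_t i = 0; i < nbytes; ++i) {
        out[i] = static_cast<uint8_t>(v & 0xff); // lowest byte first
        v >>= 8;
    }
    return out;
}
```

With uint256_t itself the same shift-and-mask loop compiles unchanged, since cpp_int supports the bitwise operators; export_bits is simply the ready-made version.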

Trouble with OpenSSL's BN_bn2bin function

我的未来我决定 submitted on 2019-12-24 00:16:33
Question: I'm trying to use the BN_* functions in OpenSSL. Specifically, I have the following code: #import <openssl/bn.h> BIGNUM * num = BN_new(); BN_set_word(num, 42); char * buffer = malloc((BN_num_bytes(num)+1) * sizeof(char)); buffer[BN_num_bytes(num)] = '\0'; int len = BN_bn2bin(num, buffer); printf("42 in binary is %s\n", buffer); However, when I do this, I don't get a string of ones and zeros. Instead it prints "42 in binary is *". As far as I can tell, and from the very limited number of
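The '*' is expected: BN_bn2bin writes raw big-endian bytes, and 42 is the single byte 0x2a, which is ASCII '*'. To get a string of ones and zeros, walk the bits instead; with OpenSSL that is a loop over BN_num_bits() and BN_is_bit_set(). A dependency-free sketch of that loop, with an unsigned long standing in for the BIGNUM (the function name is hypothetical):

```c
#include <string.h>

/* Build "101010"-style output, most significant bit first. The two
 * loops mirror OpenSSL: the first plays the role of BN_num_bits(),
 * the second the role of BN_is_bit_set(). buf must hold at least
 * 8 * sizeof(unsigned long) + 1 chars. */
static void to_bit_string(unsigned long n, char *buf) {
    int bits = 0;
    for (unsigned long t = n; t != 0; t >>= 1)
        bits++;                                  /* count significant bits */
    if (bits == 0) { strcpy(buf, "0"); return; }
    for (int i = 0; i < bits; i++)
        buf[i] = ((n >> (bits - 1 - i)) & 1ul) ? '1' : '0';
    buf[bits] = '\0';
}
```

Calling to_bit_string(42, buf) yields "101010". If a human-readable dump is all that is needed, OpenSSL's own BN_bn2dec() (decimal) and BN_print_fp() (hex) avoid the manual loop.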