precision

Exponential of large negative numbers

随声附和 submitted on 2019-12-12 01:36:43
Question: How does one compute a number such as np.exp(-28000) in Python? The answer is around 5E-12161. I've been told that, due to the double-precision floating-point format, I would only be able to represent numbers down to about 1e-308 (roughly 5e-324 with subnormals).

Answer 1: Try the decimal module: Decimal(math.exp(1))**-28000

Answer 2: Try mpmath for floating-point arithmetic with arbitrary precision.

Edit 1:
>>> import mpmath as mp
>>> import numpy as np
>>> a = np.matrix((0, 0))
>>> print(a)
[0.0 0.0]
>>> b = mp.matrix(a.tolist())
>>> c = b.apply…
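A standard-library-only sketch of the decimal idea. Note that Answer 1's Decimal(math.exp(1)) starts from a double-precision approximation of e, which limits accuracy; Decimal's own exp() avoids that, and decimal's exponent range easily covers 1e-12161:

```python
from decimal import Decimal, getcontext

getcontext().prec = 30      # work with 30 significant digits
d = Decimal(-28000).exp()   # Decimal.exp computes e**x with a huge exponent range
print(d)                    # ≈ 5.7E-12161, far below the double underflow limit
```

The same value is available via mpmath's mp.exp(-28000) after raising mp.mp.dps.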

Inverting frexp with ldexp

女生的网名这么多〃 submitted on 2019-12-11 22:46:43
Question: If I understand the documentation correctly, we should be able to use ldexp to recover a floating-point number that frexp decomposed into a signed mantissa and an exponent. I have been unable to achieve this. Consider the following code:

#include <cmath>
#include <iostream>
#include <limits>

template <typename T>
void float_info() {
    std::cout << "max=" << std::numeric_limits<T>::max()
              << ", max_exp=" << std::numeric_limits<T>::max_exponent
              << ", max_10_exp=" << std::numeric_limits<T>::max…
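The same frexp/ldexp pair exists in Python's math module, and there the round trip is exact for every finite value, including subnormals; a quick check:

```python
import math

# frexp splits x into a mantissa m with 0.5 <= |m| < 1 and an integer
# exponent e such that x == m * 2**e; ldexp(m, e) recomputes m * 2**e.
# Both steps only repackage the bits, so the round trip is lossless.
for x in (3.14, -1e300, 5e-324, 0.1):
    m, e = math.frexp(x)
    assert math.ldexp(m, e) == x
print("round trip exact for all test values")
```

If the C++ version fails, the usual culprit is overflow in an intermediate expression, not the frexp/ldexp pair itself.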

Multiplication in R without floats [closed]

孤人 submitted on 2019-12-11 20:45:25
Question (closed: needs details or clarity; not currently accepting answers; closed 4 years ago): Is there a way to do multiplication with a float as a series of two multiplications with integers? I would need to write a function which accepts input values such as 3.4556546e-8, 1.3, 0.134435. Instead of doing 100*0.134435 directly, it would do 100/1000000 and then multiply by 134435. The function…
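The decomposition the question describes is exactly a rational representation: parse the decimal literal as an integer numerator and a power-of-ten denominator, then multiply integers only. Sketched here in Python (as_integer_ops is a hypothetical helper; R's gmp package offers similar exact rationals):

```python
from fractions import Fraction

def as_integer_ops(x_str, k):
    # Parse the decimal literal exactly (no float ever created) and
    # return k*x as an integer numerator/denominator pair.
    f = Fraction(x_str)                 # "0.134435" -> 26887/200000 exactly
    return k * f.numerator, f.denominator

num, den = as_integer_ops("0.134435", 100)
print(num, den)                         # 2688700 200000  (== 13.4435 exactly)
```

The key point is to start from the string, not from a float: Fraction(0.134435) would capture the nearest binary double, while Fraction("0.134435") captures the decimal exactly.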

integers or floating point in situations where either would do?

佐手、 submitted on 2019-12-11 19:27:44
Question: Moving a discussion on the relative merits of integers and floats into a separate question. Here it is: what is your preference between an integer type and a floating-point type in situations that are neither inherently integral nor inherently floating point? For example, when developing a geometry engine for a well-controlled range of scales, would you prefer integer coordinates in the smallest feasible units, or float/double coordinates?

Answer 1: Some reasons to prefer floating point are: When you…
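One concrete argument for integer coordinates in the smallest feasible unit: integer addition never rounds, while the same quantities stored as fractional floats can drift. A small Python sketch (the micrometre unit is an illustrative assumption):

```python
# Sum one million steps of 100 micrometres.
total_um = sum([100] * 1_000_000)     # integers: exact by construction
total_m  = sum([100e-6] * 1_000_000)  # floats: 100e-6 is not exactly representable
print(total_um)                       # 100000000, i.e. exactly 100 m in micrometres
print(total_m)                        # close to, but generally not exactly, 100.0
```

The flip side, mentioned in Answer 1, is that floats degrade gracefully at the edges of the range, while integers overflow or truncate hard.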

PHP Scientific Notation Shortening

谁都会走 submitted on 2019-12-11 17:57:19
Question: I'm looking for an elegant solution here (if one exists). I've got a bunch of numbers with an arbitrary number of decimal places. I want to force a number to use 8 decimal places if it has more than 8 leading zeroes after the decimal point, i.e. 0.000000004321 would be converted to 0.00000001. But I don't want to use number_format, because if I force 8 decimals with number_format, my numbers without 8 decimal places will look like 1.00000000. I'd rather these just look like (for amounts >= 1): 1…
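One reading of the requirement is: clamp nonzero values smaller than 1e-8 up to exactly 8 decimal places, and print everything else with its natural precision. Sketched in Python (fmt is a hypothetical helper; the same logic ports to PHP with bccomp and number_format):

```python
from decimal import Decimal, ROUND_UP

def fmt(x):
    # Clamp nonzero values below 1e-8 up to 8 decimal places (rounding
    # away from zero); otherwise keep the number's own precision.
    d = Decimal(str(x))
    if d != 0 and abs(d) < Decimal("0.00000001"):
        d = d.quantize(Decimal("0.00000001"), rounding=ROUND_UP)
    return format(d.normalize(), "f")

print(fmt(0.000000004321))  # 0.00000001
print(fmt(1.0))             # 1
print(fmt(0.5))             # 0.5
```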

platform independent way to reduce precision of floating point constant values

末鹿安然 submitted on 2019-12-11 17:55:52
Question: The use case: I have some large data arrays containing floating-point constants. The file defining the array is generated, and the template can easily be adapted. I would like to test how reduced precision influences the results, both in terms of quality and in terms of compressibility of the binary. Since I do not want to change any source code other than the generated file, I am looking for a way to reduce the precision of the constants. I would like to limit the mantissa to a fixed…
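One platform-independent way to do this in the generating script: split each constant with frexp, round the mantissa to the desired number of bits, and reassemble with ldexp. A Python sketch (the helper name, bit count, and round-to-nearest choice are assumptions for the experiment):

```python
import math

def reduce_precision(x, bits):
    # Keep only `bits` bits of mantissa. Pure arithmetic, so it behaves
    # identically on any platform with IEEE 754 doubles.
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)        # x == m * 2**e with 0.5 <= |m| < 1
    scale = 1 << bits
    return math.ldexp(round(m * scale) / scale, e)

print(reduce_precision(math.pi, 10))   # 3.140625 — pi kept to 10 mantissa bits
```

Truncated mantissas also contain long runs of zero bits, which is what makes the binary more compressible.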

C's float and double precision? [duplicate]

蓝咒 submitted on 2019-12-11 16:56:53
Question: This question already has answers here: 'float' vs. 'double' precision (6 answers); Is floating point math broken? (31 answers). Closed last year.

I recently found this C code online:

#include <stdio.h>
int main() {
    double x = 3.1;
    float y = 3.1;
    if (x == y)
        printf("yes");
    else
        printf("No");
}

The output is No. I added a few more printf calls to investigate:

#include <stdio.h>
int main() {
    double x = 3.1;
    float y = 3.1;
    if (x == y)
        printf("yes");
    else
        printf("No");
    printf("%.10f\n", y);
    printf("%…
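The comparison x == y promotes the float to double, so it compares the double nearest 3.1 against the less accurate float nearest 3.1 widened to double; they differ. The effect can be reproduced without C by simulating the float narrowing with Python's struct module:

```python
import struct

x = 3.1                                         # a Python float is a C double
y = struct.unpack("f", struct.pack("f", x))[0]  # round-trip through 32-bit float
print(x == y)       # False
print(f"{x:.10f}")  # 3.1000000000
print(f"{y:.10f}")  # 3.0999999046
```

Neither value is exactly 3.1; they are two different binary approximations, one with 53 significant bits and one with 24.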

How can I round off data to mark it as interpolated or stale by suffixing it with 0.0000001 or 0.0000002?

久未见 submitted on 2019-12-11 15:55:18
Question: I have some missing data like:

1995  1996  1997  1998  1999
4     NA    NA    5     NA

What I want to do is this:

1995  1996       1997       1998  1999
4     4.3300001  4.6700001  5     5.0000002

I am able to write code for the above interpolation (marking missing and stale data), but the data I input is not always clean. It might come as:

1995   1996  1997  1998   1999
4.032  NA    NA    5.134  5.0000002

This might interfere with the precision of the interpolated numbers (which go to 7 decimal places), so I wanted to round off the data…
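A minimal sketch of the round-then-mark idea: round the clean value to 3 decimal places first, so the 7th decimal place is free to carry the marker digit. Python sketch (tag is a hypothetical helper; the marker scheme 1 = interpolated, 2 = stale follows the question):

```python
def tag(value, flag):
    # Round the clean value to 3 decimal places, then stamp a marker
    # digit in the 7th decimal place: 1 = interpolated, 2 = stale.
    return round(value, 3) + flag * 1e-7

print(f"{tag(4.3301, 1):.7f}")  # 4.3300001 — interpolated
print(f"{tag(5.0,    2):.7f}")  # 5.0000002 — stale
```

Note that encoding metadata in low-order digits is fragile under further float arithmetic; a separate flag column is the more robust design if the data format allows it.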

Why do certain floating point calculations turn out the way they do? (e.g. 123456789f + 1 = 123456792)

本小妞迷上赌 submitted on 2019-12-11 14:28:31
Question: I'm trying to get a better understanding of floating-point arithmetic, the errors that occur and accumulate, and why exactly the results turn out the way they do. Here are three examples I'm currently working on:

1.) 0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1 - 1.0 = -1.1102230246251565E-16, i.e. adding 0.1 ten times gives me a number slightly less than 1.0. However, 0.1 is represented (as a double) as slightly larger than 0.1. Also 0.1*3 is slightly larger than 0…
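Both Example 1 and the 123456789f case from the title can be observed directly. A Python sketch (float32 is simulated by round-tripping through struct):

```python
import struct

# (a) Ten additions of 0.1: each partial sum is rounded, and here the
#     rounding errors happen to accumulate downward.
s = 0.0
for _ in range(10):
    s += 0.1
print(s - 1.0)                       # -1.1102230246251565e-16

# (b) 123456789 as a 32-bit float: adjacent float32 values near 1.2e8
#     are 8 apart, so both 123456789 and 123456792 + 1 round to 123456792.
f32 = lambda v: struct.unpack("f", struct.pack("f", v))[0]
print(f32(123456789.0))              # 123456792.0
print(f32(f32(123456789.0) + 1.0))   # 123456792.0 — the +1 is lost to rounding
```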

Floating point: Can I recursively divide/multiple numbers by 2 and never get rounding errors?

血红的双手。 submitted on 2019-12-11 14:26:49
Question: http://floating-point-gui.de/formats/binary/ says:

"binary can only represent those numbers as a finite fraction where the denominator is a power of 2"

Does this mean that the numbers produced by this process can all be added to each other or multiplied by 2 any number of times and still have an exact binary floating-point representation with no rounding errors?

const dv2 = (num, limit) => {
  limit--;
  if (limit === 0) {
    return;
  } else {
    console.log(num, limit);
    dv2(num / 2, limit);
  }
};

Is it…
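For the multiply/divide-by-2 part the answer is yes, within limits: halving only decrements the binary exponent, so it is exact until the result goes subnormal (below about 1e-308 for doubles), and doubling is exact until overflow to infinity. A Python sketch (the same holds for JavaScript, whose numbers are IEEE 754 doubles):

```python
x = 1.3                  # not exactly representable, but that doesn't matter here
for _ in range(50):
    x /= 2               # each halving just decrements the exponent: exact
assert x * 2**50 == 1.3  # scaling back up recovers the original value exactly
print("50 halvings were all exact")
```

Addition is a different story: sums can be exact only when the operands' exponents are close enough that no mantissa bits are shifted out.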