precision

How to round numeric output from yaml.dump, in Python?

╄→尐↘猪︶ㄣ submitted on 2019-12-12 19:19:47

Question: Is there a clean way to control the rounding of numeric output from yaml.dump? For example, I have a class with member variables of varying complexity, some of which are double-precision numbers that I want rounded to, say, the 4th digit. This YAML output is for display only; it will not be loaded back (i.e. yaml.load will not be used). As a naive example, consider class A below:

    import yaml

    class A:
        def __init__(self):
            self.a = 1/7
            self.b = 'some text'
            self.c = [1/11, 1/13, 1/17, 'some more text']

        def __repr__(self):
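The excerpt cuts off before any answer. One common approach (a sketch of my own, not from the thread; the `round_floats` helper name is mine) is to pre-round the floats in the object graph before handing it to yaml.dump, which is harmless here since the output is display-only:

```python
def round_floats(obj, ndigits=4):
    """Recursively round every float in a nested dict/list/tuple structure."""
    if isinstance(obj, float):
        return round(obj, ndigits)
    if isinstance(obj, dict):
        return {k: round_floats(v, ndigits) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(round_floats(v, ndigits) for v in obj)
    return obj

# Mirror of class A's attributes from the question:
data = {'a': 1/7, 'b': 'some text', 'c': [1/11, 1/13, 1/17, 'some more text']}
print(round_floats(data))
# {'a': 0.1429, 'b': 'some text', 'c': [0.0909, 0.0769, 0.0588, 'some more text']}
```

The result then dumps cleanly via something like yaml.dump(round_floats(vars(a))). Alternatively, PyYAML lets you register a custom float representer on the dumper, which formats at emit time and avoids touching the data.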

python sine and cosine precision

这一生的挚爱 submitted on 2019-12-12 17:21:53

Question: How can I improve the precision of Python's sine and cosine? For example, I want to use the following code (it just calculates y = cos(acos(x)) for a random complex vector x):

    import numpy as np

    N = 100000
    x = np.zeros(N) + 1j*np.zeros(N)
    for k in range(0, N):
        x[k] = np.random.normal(0, 500) + 1j*np.random.normal(0, 500)
    y = np.cos(np.arccos(x))
    m = np.max(np.abs(x))
    print(np.max(np.abs(x - y)/m))

y should equal x, but the difference I get is approximately 1e-9, which I think is too big. For example, MATLAB returns less than 1e-15 for the same
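For comparison, the same round trip can be reproduced with the standard library's cmath (a minimal sketch; the 300 - 400j test value is my own choice, picked so that |z| = 500 matches the question's scale):

```python
import cmath

z = complex(300.0, -400.0)      # |z| = 500, similar in magnitude to the question's samples
w = cmath.cos(cmath.acos(z))    # mathematically, cos(acos(z)) == z exactly
rel_err = abs(w - z) / abs(z)
print(rel_err)                  # small but nonzero: acos and cos each add rounding error
```

Note that the question's 1e-9 figure is the maximum over 100000 samples; the worst cases across many random draws can be several orders of magnitude larger than a typical single evaluation.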

google maps v3: Invalid value for constructor parameter 0 while drawing polylines

◇◆丶佛笑我妖孽 submitted on 2019-12-12 14:43:32

Question: I have a problem when constructing a polygon. The error message says something like:

    Invalid value for constructor parameter 0: (49.27862248020283, -122.79301448410035),(49.277964542440955, -122.79370112960816),(49.278524490028595, -122.7950207764435)

It must be something ridiculously simple, but I just can't see it. Any tips are welcome. I'm basically painting a map inside an iframe on a modal window (with Wicket). Everything is OK, but when I try to show a polygon (the points are

Matlab Importdata Precision

亡梦爱人 submitted on 2019-12-12 14:07:39

Question: I'm trying to use importdata for several data files containing values with a precision of up to 11 digits after the decimal point, but MATLAB seems to think I am only interested in the first 5 digits when using importdata. Is there an alternative method I could use to load my data, or a way to specify the precision to which I want my data loaded?

Answer 1: First try:

    format long g

Also, can you paste some of the data you are trying to load?

Source: https://stackoverflow.com/questions/9621027/matlab-importdata

Convert Hex to single precision

和自甴很熟 submitted on 2019-12-12 11:33:10

Question: I'm struggling to convert a 32-bit hex string into a single-precision number in MATLAB. The num2hex function works fine for both precisions. For example:

    >> b = 0.4
    b = 0.400000000000000
    >> class(b)
    ans = double
    >> num2hex(b)
    ans = 3fd999999999999a
    >> num2hex(single(b))
    ans = 3ecccccd

However, this does not work the other way around: the hex2num function only converts hexadecimal strings into doubles. So,

    >> b = 0.4
    b = 0.400000000000000
    >> num2hex(single(b))
    ans = 3ecccccd
    >> hex2num(ans)
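The transcript's bit patterns can be cross-checked from Python's standard library, where struct makes the single-precision round trip explicit (a sketch; single_to_hex/hex_to_single are my own helper names):

```python
import struct

def single_to_hex(x):
    """Hex digits of x rounded to IEEE-754 single precision (big-endian)."""
    return struct.pack('>f', x).hex()

def hex_to_single(h):
    """Reinterpret an 8-hex-digit string as a single-precision float."""
    return struct.unpack('>f', bytes.fromhex(h))[0]

print(single_to_hex(0.4))         # 3ecccccd, matching num2hex(single(b))
print(hex_to_single('3ecccccd'))  # 0.4000000059604645, the actual single value
```

In MATLAB itself, the usual workaround (from general experience, not the truncated thread) is typecast(uint32(hex2dec('3ecccccd')), 'single'), which reinterprets the bits rather than going through hex2num.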

What is the numerical stability of std::pow() compared to iterated multiplication?

允我心安 submitted on 2019-12-12 11:02:53

Question: What sort of stability issues arise from, or are resolved by, using std::pow()? Will it be more stable (or faster, or different at all) in general to implement a simple function that performs log(n) iterated multiplies when the exponent is known to be an integer? How does std::sqrt(x) compare, stability-wise, to something of the form std::pow(x, k/2)? Would it make sense to use the method preferred above to raise to an integer power and then multiply in a square root, or should I assume that
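The "log(n) iterated multiplies" the question alludes to is binary exponentiation; here is a sketch in Python for illustration (the C++ version is structurally identical). Each multiply contributes at most half an ulp of relative error, so the worst-case relative error grows roughly linearly in the number of multiplies, i.e. O(log n):

```python
def pow_int(x, n):
    """x**n for integer n >= 0 via binary exponentiation: O(log n) multiplies."""
    result = 1.0
    while n:
        if n & 1:        # lowest bit set: fold the current square into the result
            result *= x
        x *= x           # square for the next bit of the exponent
        n >>= 1
    return result

print(pow_int(2.0, 10))           # 1024.0 (exact: powers of two don't round)
print(pow_int(1.0000001, 10**7))  # compare against (1.0000001) ** (10**7)
```

A good library pow typically goes through exp(y * log(x)) with extra internal precision and can be very accurate, so neither approach dominates universally; measure if it matters. std::sqrt usually maps to the correctly rounded hardware square root, while std::pow(x, 0.5) carries no such guarantee.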

The precision of a large floating point sum

不羁的心 submitted on 2019-12-12 10:45:41

Question: I am trying to sum a sorted array of positive, decreasing floating-point numbers. I have read that the best way to sum them is to start adding from lowest to highest. I wrote the code below as an example of that; however, the sum that starts at the highest number is more precise. Why? (Of course, the sum of 1/k^2 should be f = 1.644934066848226.)

    #include <stdio.h>
    #include <math.h>

    int main() {
        double sum = 0;
        int n;
        int e = 0;
        double r = 0;
        double f = 1.644934066848226;
        double x, y, c, b;
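The effect is easy to reproduce; here is a sketch in Python (a different language from the question's C, chosen for brevity) comparing both orders against math.fsum, which returns the correctly rounded sum:

```python
import math

N = 100_000
terms = [1.0 / (k * k) for k in range(1, N + 1)]   # positive, decreasing

fwd = 0.0
for t in terms:              # largest terms first
    fwd += t

bwd = 0.0
for t in reversed(terms):    # smallest terms first
    bwd += t

exact = math.fsum(terms)     # correctly rounded reference sum
print(abs(fwd - exact), abs(bwd - exact))
```

Small-to-large usually wins because tiny terms can accumulate before being swamped by a large running total, but for a well-behaved series like this the errors in either order are minute and a particular loop can come out ahead either way. Compensated (Kahan) summation, which the c and b variables in the question's code appear to set up, removes most of this ordering sensitivity.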

Java: Math.random() Max Value (double just less than 1)

点点圈 submitted on 2019-12-12 10:44:49

Question: I've been a little curious about this. Math.random() gives a value in the range [0.0, 1.0). So what is the largest value it can give? In other words, what is the double value closest to 1.0 that is still less than 1.0?

Answer 1: Java uses the 64-bit IEEE-754 representation, so the closest number smaller than one is 3FEFFFFFFFFFFFFF in hexadecimal: 0 for the sign, -1 for the exponent, and 1.9999999999999998 for the 52-bit significand. This equals roughly 0
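The answer's bit pattern can be checked directly from Python (a sketch using math.nextafter, available since Python 3.9; Java's Math.nextDown(1.0) yields the same double):

```python
import math
import struct

x = math.nextafter(1.0, 0.0)       # largest double strictly below 1.0
print(x)                           # 0.9999999999999999
print(struct.pack('>d', x).hex())  # 3fefffffffffffff, matching the answer
print(1.0 - x == 2.0 ** -53)       # True: the gap just below 1.0 is 2^-53
```

So the largest value Math.random() can return is 1 - 2^-53 ≈ 0.9999999999999999, since nextDouble builds its result as a 53-bit integer divided by 2^53.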

Adding floating point precision to qnorm/pnorm?

ε祈祈猫儿з submitted on 2019-12-12 10:44:33

Question: I would be interested in increasing the floating-point limit when calculating qnorm/pnorm beyond its current level. For example:

    x <- pnorm(10)  # 1
    qnorm(x)  # Inf
    qnorm(.9999999999999999444)  # the highest value I've found that still returns a non-Inf number

Is that possible (in a reasonable amount of time)? If so, how?

Answer 1: If the argument is far out in the upper tail, you should be able to get better precision by calculating 1-p. Like this:

    > x = pnorm(10, lower.tail=F)
    > qnorm(x, lower
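The same trick can be sketched with Python's standard library (an illustration, not the R answer itself; statistics.NormalDist and math.erfc stand in for qnorm/pnorm, and since NormalDist.cdf itself loses the far tail to rounding, the upper-tail probability is computed directly via erfc):

```python
from math import erfc, sqrt
from statistics import NormalDist

nd = NormalDist()
print(nd.cdf(10))              # 1.0 -- the ~7.6e-24 of tail mass is rounded away
# nd.inv_cdf(1.0) would raise: the naive round trip is impossible.

# Work with the upper-tail probability instead, as in
# pnorm(10, lower.tail=FALSE): p = 0.5 * erfc(x / sqrt(2)).
p = 0.5 * erfc(10 / sqrt(2))   # tiny, but with full relative precision
x = -nd.inv_cdf(p)             # symmetry of the normal recovers the quantile
print(x)                       # very close to 10
```

The point carries over directly to R: keeping the tail probability itself (lower.tail=FALSE) preserves relative precision that is irretrievably lost once p rounds to 1.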

Why does 0.1 + 0.2 get 0.3 in Google Go?

烈酒焚心 submitted on 2019-12-12 10:42:45

Question: As long as floating point is used, 0.1 cannot be represented exactly in memory, so we know that its stored value comes out slightly larger than 0.1. But when I use Go to add 0.1 and 0.2, I get 0.3:

    fmt.Println(0.1 + 0.2) // Output : 0.3

Why does 0.3 come out instead of 0.30000000000000004?

Answer 1: It is because when you print it (e.g. with the fmt package), the printing function already rounds to a certain number of fraction digits. See this example:

    const ca, cb = 0.1, 0.2
    fmt.Println
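Two things are at work in Go: untyped constant expressions like 0.1 + 0.2 are evaluated at compile time in exact arithmetic (the spec requires at least 256 bits of mantissa), yielding exactly 3/10, which is rounded to float64 only once; and fmt prints the shortest decimal that round-trips. The constant-folding half can be mimicked in Python with fractions (an illustrative sketch, not Go itself):

```python
from fractions import Fraction

# Runtime float arithmetic: each literal is rounded to a float64 first,
# and the two rounding errors combine.
print(repr(0.1 + 0.2))                     # '0.30000000000000004'

# Go-style constant arithmetic: exact rationals, one rounding at the end.
exact = Fraction(1, 10) + Fraction(2, 10)  # exactly 3/10
print(repr(float(exact)))                  # '0.3' -- the double nearest 0.3
```

With float64 variables instead of constants, Go prints 0.30000000000000004 just like Python does, since the same two-step rounding occurs at run time.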