precision

Hamming numbers and double precision

≯℡__Kan透↙ submitted on 2020-05-15 02:09:07
Question: I was playing around with generating Hamming numbers in Haskell, trying to improve on the obvious (pardon the naming of the functions):

mergeUniq :: Ord a => [a] -> [a] -> [a]
mergeUniq (x:xs) (y:ys) = case x `compare` y of
  EQ -> x : mergeUniq xs ys
  LT -> x : mergeUniq xs (y:ys)
  GT -> y : mergeUniq (x:xs) ys

powers :: [Integer]
powers = 1 : expand 2 `mergeUniq` expand 3 `mergeUniq` expand 5
  where expand factor = (factor *) <$> powers

I noticed that I can avoid the (slower) arbitrary precision …
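
A minimal runnable sketch of the same "merge streams of multiples" idea, written in Python rather than the asker's Haskell (the heap-based formulation below is a stand-in for illustration, not the asker's code):

```python
# Generate 5-smooth (Hamming) numbers in increasing order by repeatedly
# popping the smallest candidate and pushing its multiples by 2, 3 and 5.
import heapq
import itertools

def hamming():
    heap, seen = [1], {1}
    while True:
        h = heapq.heappop(heap)
        yield h
        for factor in (2, 3, 5):
            m = factor * h
            if m not in seen:          # mirrors mergeUniq's de-duplication
                seen.add(m)
                heapq.heappush(heap, m)

print(list(itertools.islice(hamming(), 20)))
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
```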

Should I use numeric or float to avoid calculation problems in PostgreSQL?

删除回忆录丶 submitted on 2020-05-14 14:29:39
Question: I have encountered a topic regarding calculation errors, Accuracy problems. I end up with the following values in one of my queries in PostgreSQL:

1.0752688172043 (when using float)
1.07526881720430110000 (when using numeric)

1) So, for these values I think I should use the numeric data type in order to obtain the result more accurately. Is that right?
2) What about the following values (assume that the rest of the digits after the last one are 0)? In that case should I still use numeric rather than …
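
The same trade-off can be seen outside PostgreSQL. A small Python sketch (the division 100/93 is only a stand-in that happens to produce a similar value, not the asker's actual query):

```python
# float is binary double precision (comparable to PostgreSQL float8, ~15-17
# significant decimal digits); Decimal does decimal arithmetic at a chosen
# precision, which is closer in spirit to numeric.
from decimal import Decimal, getcontext

getcontext().prec = 25
print(f"{100.0 / 93.0:.13f}")        # 1.0752688172043
print(Decimal(100) / Decimal(93))    # 1.075268817204301075268817
```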

How to deal with inexact floating point arithmetic results in Rust? [closed]

牧云@^-^@ submitted on 2020-05-14 13:52:08
Question (closed 2 years ago as needing to be more focused; it is not currently accepting answers): How to deal with floating point arithmetic in Rust? For example:

fn main() {
    let vector = vec![1.01_f64, 1.02, 1.03, 1.01, 1.05];
    let difference: Vec<f64> = vector.windows(2).map(|slice| slice[0] - slice[1]).collect();
    println!("{:?}", difference);
}

Returns: [-0 …
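
A small sketch of the same phenomenon and the usual remedies, in Python rather than Rust to keep all the sketches on this page in one language; the vector is the asker's, the tolerance values are illustrative:

```python
# Consecutive differences of decimal-looking doubles are not the exact decimals
# you might expect, because 1.01, 1.02, ... have no exact binary representation.
vector = [1.01, 1.02, 1.03, 1.01, 1.05]
diffs = [a - b for a, b in zip(vector, vector[1:])]
print(diffs)                          # e.g. [-0.010000000000000009, ...]

# Usual remedies: round to the decimal scale you actually care about, or
# compare against a tolerance instead of testing exact equality.
print([round(d, 2) for d in diffs])   # [-0.01, -0.01, 0.02, -0.04]
print(all(abs(d - round(d, 2)) < 1e-9 for d in diffs))   # True
```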

How does sklearn select threshold steps in precision recall curve?

一曲冷凌霜 submitted on 2020-05-13 06:19:20
Question: I trained a basic FFNN on an example breast cancer dataset. For the results, the precision_recall_curve function gives data points for 416 different thresholds. My data contains 569 unique prediction values; as far as I understand the precision-recall curve, I could apply 568 different threshold values and check the resulting precision and recall. But how do I do so? Is there a way to set the number of thresholds to test with sklearn? Or at least an explanation of how sklearn selects those …
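
As a rough sketch of what is going on (the data below is synthetic, just to keep the example self-contained): in current scikit-learn versions, precision_recall_curve evaluates one threshold per distinct predicted score and trims the tail of the curve once full recall is reached, so you get at most as many thresholds as unique scores. There is no parameter to choose the thresholds, but you can always apply your own grid:

```python
# Synthetic stand-in for the asker's FFNN outputs; only meant to show where the
# threshold count comes from and how to evaluate a custom threshold grid.
import numpy as np
from sklearn.metrics import precision_recall_curve, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=569)
scores = rng.random(569)                        # 569 distinct prediction values

precision, recall, thresholds = precision_recall_curve(y_true, scores)
print(len(np.unique(scores)), len(thresholds))  # thresholds <= unique scores

# Evaluating thresholds of your own choosing:
for t in (0.25, 0.5, 0.75):
    y_pred = (scores >= t).astype(int)
    print(t, precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```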

What is the correct/standard way to check if difference is smaller than machine precision?

别等时光非礼了梦想. submitted on 2020-05-09 18:27:46
Question: I often end up in situations where it is necessary to check whether an obtained difference is above machine precision. It seems that for this purpose R has a handy variable, .Machine$double.eps. However, when I turn to the R source code for guidance on using this value, I see multiple different patterns. Examples: here are a few from the stats library:

t.test.R:     if(stderr < 10 * .Machine$double.eps * abs(mx))
chisq.test.R: if(abs(sum(p) - 1) > sqrt(.Machine$double.eps))
integrate.R:  rel.tol < max(50* …
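
For comparison, a sketch of the same two patterns outside R (Python, where .Machine$double.eps corresponds to np.finfo(float).eps ≈ 2.22e-16); the factor 10 and the sqrt(eps) cut-off are the ones quoted above, not a recommendation:

```python
# Pattern 1: tolerance scaled by the magnitude of the operands (relative error).
# Pattern 2: a coarser absolute sqrt(eps) cut-off for quantities near 1.
import math
import numpy as np

eps = np.finfo(float).eps            # 2.220446049250313e-16
x, y = 0.1 + 0.2, 0.3                # differ by one unit in the last place

print(abs(x - y) < 10 * eps * max(abs(x), abs(y)))   # True  (t.test.R style)
print(abs(x - y) < math.sqrt(eps))                   # True  (chisq.test.R style)
print(math.isclose(x, y, rel_tol=10 * eps))          # rel_tol/abs_tol variant
```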

PHP floating point precision: Is var_dump secretly rounding and how can I debug precisely then?

老子叫甜甜 submitted on 2020-04-13 03:58:47
Question: That floating point numbers in PHP are inaccurate is well known (http://php.net/manual/de/language.types.float.php), however I am a bit unsatisfied after the following experiment:

var_dump((2.30 * 100));        // float(230)
var_dump(round(2.30 * 100));   // float(230)
var_dump(ceil(2.30 * 100));    // float(230)
var_dump(intval(2.30 * 100));  // int(229)
var_dump((int)(2.30 * 100));   // int(229)
var_dump(floor(2.30 * 100));   // float(229)

The internal representation must be something like 229.999998. var …
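
The effect can be reproduced in Python (used here instead of PHP for consistency with the other sketches on this page): the stored value is a hair below 230, and a limited-precision display rounds that away, much as PHP's display precision settings do:

```python
# 2.30 * 100 is stored as the closest double, which is just below 230; whether
# you see that depends on how many digits the formatter shows.
import math
from decimal import Decimal

x = 2.30 * 100
print(f"{x:.14g}")                 # 230  (14 significant digits, rounded away)
print(repr(x))                     # 229.99999999999997
print(Decimal(x))                  # exact stored value: 229.99999999999997157829...
print(round(x), math.ceil(x))      # 230 230
print(int(x), math.floor(x))       # 229 229  (truncation exposes the shortfall)
```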

Differing floating point behaviour between uniform and constants in GLSL

五迷三道 submitted on 2020-02-24 10:23:41
Question: I am trying to implement emulated double precision in GLSL, and I observe a strange behaviour difference that leads to subtle floating point errors. Consider the following fragment shader, writing to a 4-float texture to print the output:

layout (location = 0) out vec4 Output;
uniform float s;

void main() {
    float a = 0.1f;
    float b = s;
    const float split = 8193.0; // = 2^13 + 1
    float ca = split * a;
    float cb = split * b;
    float v1a = ca - (ca - a);
    float v1b = cb - (cb - b);
    Output = vec4(a …
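
The arithmetic of the split can be mimicked outside GLSL. The NumPy float32 sketch below only reproduces the numerics, not the GLSL-specific behaviour the question is about (a driver constant-folding the literal at higher precision), but it shows what the split is meant to produce and how it degenerates when the intermediate steps are carried out in double precision:

```python
# Veltkamp split of a float32 into a high part holding the top mantissa bits
# and a low part holding the rest; hi + lo reconstructs the value exactly.
import numpy as np

def split32(x):
    x = np.float32(x)
    c = np.float32(8193.0) * x      # 8193 = 2**13 + 1
    hi = c - (c - x)                # every operation rounded to float32
    lo = x - hi
    return hi, lo

a = np.float32(0.1)
hi, lo = split32(a)
print(hi, lo, hi + lo == a)         # hi has zeroed low bits; hi + lo equals a

# If the same expression is evaluated in float64 and only rounded at the end
# (roughly what constant folding at higher precision amounts to), the split
# degenerates: "hi" comes back as the full float32 value and "lo" as ~0.
c64 = 8193.0 * 0.1
print(np.float32(c64 - (c64 - 0.1)))
```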
