precision

Machine epsilon computation issue

Submitted by 半城伤御伤魂 on 2020-01-16 18:50:46

Question: I stumbled on a difference in the results of a machine-epsilon calculation. When compared against 0, PHP yields 4.9406564584125E-324, while for 1 it yields 1.1102230246252E-16. Quite a difference. I guess it has something to do with the data types PHP uses by default. The code is:

    <?php
    // Machine epsilon calculation
    $e = 1;
    $eTmp = null;
    for ($i = 0; 0 != 0 + $e; $i++) { // Changing 0 to 1 produces an absolutely different result
        $e = $e / 2;
        if ($e != 0) { $eTmp = $e; }
    }
    echo $eTmp; //var
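Both results are meaningful; they are just different quantities. Halving against the anchor 1 stops once 1 + e rounds back to 1, so the last value the snippet stores is 2^-53 (about 1.11e-16, half the usual machine epsilon, because the value is saved after the halving). Halving against 0 only stops when e underflows to exactly zero, so the last nonzero value is the smallest subnormal double, 2^-1074 (about 4.94e-324). A Python sketch that mirrors the PHP loop above:

```python
def last_distinguishable(anchor):
    """Mirror the PHP loop: halve e while anchor + e still differs
    from anchor, recording e *after* each halving, as the snippet does."""
    e = 1.0
    last = None
    while anchor + e != anchor:
        e /= 2.0
        if e != 0.0:
            last = e
    return last

print(last_distinguishable(1.0))  # 2**-53  ~ 1.1102230246251565e-16
print(last_distinguishable(0.0))  # 2**-1074 ~ 5e-324 (smallest subnormal)
```

So the anchor-0 variant measures the underflow threshold, not machine epsilon; only the anchor-1 variant measures the spacing of doubles near 1.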

Matlab array arithmetic inaccuracy

Submitted by 人走茶凉 on 2020-01-15 12:12:21

Question: While trying to simulate my sine approximation in MATLAB, I found a strange problem: applying my function to an array returns one result, whereas applying it to the individual values gives a slightly different result. I was able to reproduce the behaviour with this example:

    z = single(0:0.001:1);
    F = @(x) (x.^2 - single(1.2342320e-001)).*x.^2; % some test function
    z(999)     % Returns 9.9800003e-001
    F(z(999))  % Returns 8.6909407e-001
    temp = F(z);
    temp(999)  % Voila! It
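A plausible explanation (not something MATLAB documents in detail) is that the scalar and vectorized code paths round intermediates differently, for example SIMD versus scalar instructions, or extra intermediate precision, so single-precision results can differ in the last bit. A numpy sketch of the quantity being measured, with z(999) hard-coded as the single-precision value nearest 0.998 (an assumption standing in for the MATLAB array element):

```python
import numpy as np

c = np.float32(1.2342320e-01)
x = np.float32(0.998)  # stand-in for MATLAB's z(999), rounded to single

# Every intermediate rounded to single precision:
r_single = (x * x - c) * (x * x)

# Same formula with double intermediates, rounded to single once at the end:
r_double = np.float32((float(x) ** 2 - float(c)) * float(x) ** 2)

# Any disagreement between the two is on the order of a single-precision
# ulp (~6e-8 at this magnitude), exactly the size of the MATLAB discrepancy.
print(r_single, r_double, abs(float(r_single) - float(r_double)))
```

The takeaway: two algebraically identical single-precision evaluations are only guaranteed to agree to within a few ulps, not bit-for-bit, unless they round every intermediate identically.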


Swift's Decimal precision issue

Submitted by 随声附和 on 2020-01-15 10:39:28

Question: According to the docs, Swift 3/4's Decimal type is a base-10 representation bridged to NSDecimalNumber. However, I'm hitting precision issues that do not reproduce when using NSDecimalNumber:

    let dec24 = Decimal(integerLiteral: 24)
    let dec1 = Decimal(integerLiteral: 1)
    let decResult = dec1/dec24*dec24
    // prints 0.99999999999999999999999999999999999984

    let dn24 = NSDecimalNumber(value: 24)
    let dn1 = NSDecimalNumber(value: 1)
    let dnResult = dn1.dividing(by: dn24).multiplying(by: dn24) //
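The artifact comes from Decimal's finite 38-digit significand: 1/24 is non-terminating in base 10, so the quotient must be rounded before the multiplication, and multiplying back by 24 cannot restore exactly 1. NSDecimalNumber's operator chain evidently rounds its intermediate differently, which is why it hides the effect. Python's decimal module reproduces the same class of artifact; here prec = 38 matches Decimal's significand width, and ROUND_DOWN is an assumed stand-in for whatever rounding Swift's / applies (the exact trailing digits depend on the rounding mode):

```python
from decimal import Decimal, getcontext, ROUND_DOWN

ctx = getcontext()
ctx.prec = 38              # Decimal's significand is 38 decimal digits
ctx.rounding = ROUND_DOWN  # assumption: stand-in for Swift's division rounding

r = Decimal(1) / Decimal(24) * Decimal(24)
print(r)                # a value just below 1, off by a few units at digit 38
print(r == Decimal(1))  # False
```

The practical consequence is the same in both languages: never compare fixed-precision decimal results with ==; compare against a tolerance sized to the working precision.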

How do I use parameter epsabs in scipy.integrate.quad in Python?

Submitted by 青春壹個敷衍的年華 on 2020-01-15 09:46:47

Question: I am trying to make my integral more precise by specifying the epsabs parameter of scipy.integrate.quad. Say we are integrating sin(x) / x^2 from 1e-16 to 1.0:

    from scipy.integrate import quad
    import numpy

    integrand = lambda x: numpy.sin(x) / x ** 2
    integral = quad(integrand, 1e-16, 1.0)

This gives (36.760078801255595, 0.01091187908038005). However, if you specify the absolute error tolerance with epsabs with the following

    from scipy.integrate import quad
    import
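For this integrand the tolerance is not the real obstacle: sin(x)/x^2 behaves like 1/x near zero, so almost all of the mass (roughly ln(1/1e-16), about 36.84) accumulates next to the lower limit, and the adaptive rule's error estimate stays large no matter how small epsabs is made. One standard fix (a sketch, not the only option) is the substitution u = ln(x), which flattens the integrand to sin(e^u) e^(-u), a smooth function close to 1 over the whole transformed interval:

```python
import numpy as np
from scipy.integrate import quad

a, b = 1e-16, 1.0

# After u = ln(x): dx = e^u du, so sin(x)/x^2 dx becomes sin(e^u) e^(-u) du.
g = lambda u: np.sin(np.exp(u)) * np.exp(-u)
val, err = quad(g, np.log(a), np.log(b), epsabs=1e-10)

print(val, err)  # val ~ 36.7601, with a far smaller error estimate
```

With the singularity transformed away, epsabs behaves as expected; alternatively, `points=` or splitting the interval by hand can help when a substitution is awkward.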

Lost precision on GMP mpf_add. Where have my digits gone?

Submitted by 雨燕双飞 on 2020-01-15 01:52:24

Question: I'm summing two negative floats:

    char * lhs = "-2234.6016114467412141";
    char * rhs = "-4939600281397002.2812";

According to Perl, using bignum and Math::BigFloat, the answer is -4939600281399236.8828114467412141. However, according to GMP, using the code below, the answer is -4939600281399236.88281. Where have I gone wrong? What happened to the remaining "14467412141"?

    #include "stdafx.h"
    #include "gmp-static\gmp.h"
    #include <stdlib.h> /* For _MAX_PATH definition */
    #include <stdio.h>
    #include
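Nothing was lost in the addition itself: mpf_t variables get a small default working precision, and this sum needs 32 significant decimal digits, roughly 107 bits. Calling mpf_set_default_prec(128) before the mpf_init calls (or using mpf_init2 per variable) should recover the Perl answer; GMP guarantees at least the requested number of bits. As a cross-check of the expected digits, a Python decimal sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # comfortably more than the 32 digits the sum needs
lhs = Decimal("-2234.6016114467412141")
rhs = Decimal("-4939600281397002.2812")

print(lhs + rhs)  # -4939600281399236.8828114467412141
```

This matches Math::BigFloat exactly, confirming the GMP result was merely truncated by insufficient working precision, not wrong.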

Arbitrary precision for decimals square roots in golang

Submitted by 浪子不回头ぞ on 2020-01-15 01:22:10

Question: I am looking for a way to calculate a square root with arbitrary precision (something like 50 digits after the decimal point). In Python this is easily done with Decimal:

    from decimal import *
    getcontext().prec = 50
    Decimal(2).sqrt()  # and here you go, my 50 digits

After seeing the power of math/big I skimmed through the documentation but have not found anything similar. So is my only option to write some sort of numerical method that will iteratively compute the answer?

Answer 1
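Since Go 1.10, math/big's Float has a Sqrt method, so new(big.Float).SetPrec(200).Sqrt(big.NewFloat(2)) covers this directly (about 200 bits, roughly 60 decimal digits). A simpler arbitrary-precision approach, usable with any big-integer type, scales by a power of ten and takes an integer square root; Python's math.isqrt stands in for big.Int here:

```python
from math import isqrt

def sqrt_digits(n, digits):
    """sqrt(n) truncated to `digits` decimal places, computed as the
    integer square root of n scaled by 10**(2*digits)."""
    s = isqrt(n * 10 ** (2 * digits))
    return f"{s // 10**digits}.{s % 10**digits:0{digits}d}"

print(sqrt_digits(2, 50))
# 1.41421356237309504880168872420969807856967187537694
```

Note the result is truncated rather than rounded; for correctly rounded output, inspect one extra digit before formatting.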

Keras custom Callback. When generating precision/recall I get an error in _flow_index

Submitted by 大憨熊 on 2020-01-14 19:58:54

Question: I'm training a binary classifier using Keras. I want to generate the precision_score and recall_score after each epoch in order to analyze the training in more depth. On the internet I found tutorials such as:

https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2
https://github.com/keras-team/keras/issues/2607

I found "Accessing validation data within a custom callback", which worked best for me since I'm using Keras's fit_generator. It managed to
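The hard part of that approach is reaching the validation data from inside a custom Callback; the per-epoch computation itself is simple. A framework-free sketch of what on_epoch_end would log (y_prob is a stand-in for whatever self.model.predict returns on the validation set; names are hypothetical):

```python
import numpy as np

def epoch_end_metrics(y_true, y_prob, threshold=0.5):
    """Precision/recall a callback's on_epoch_end would log:
    binarize the predicted probabilities, then count hits."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    precision = tp / max(int(np.sum(y_pred == 1)), 1)  # tp / (tp + fp)
    recall = tp / max(int(np.sum(y_true == 1)), 1)     # tp / (tp + fn)
    return precision, recall

p, r = epoch_end_metrics([1, 0, 1, 1, 0], [0.9, 0.4, 0.35, 0.8, 0.6])
print(p, r)  # 0.666..., 0.666...
```

Inside a real Callback subclass, on_epoch_end would call this with predictions on a held-out batch and append the scores to lists for later plotting.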