precision

Newton Raphson iteration - unable to iterate

Submitted by 谁说胖子不能爱 on 2019-12-24 10:35:19
Question: I am not sure whether this question is on topic here or elsewhere (or anywhere at all). I have inherited Fortran 90 code that does Newton-Raphson interpolation, where the logarithm of temperature is interpolated against the logarithm of pressure. The interpolation is of the form t = a ln(p) + b, where a and b are defined as a = ln(tup/tdwn)/(alogpu - alogpd) and b = ln T - a * ln P. Here is the test program. It is shown for a single iteration only, but the actual program runs over three FOR loops…
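
The question text is truncated, but the quoted formulas are enough for a small sketch. Assuming bracketing grid points (p_dwn, t_dwn) and (p_up, t_up) — hypothetical names; the original uses tup/tdwn and alogpu/alogpd — a log-log linear interpolation might look like this in Python:

```python
import math

def loglog_interp(p, p_dwn, p_up, t_dwn, t_up):
    """Interpolate temperature at pressure p, assuming ln(t) is linear
    in ln(p) between the two bracketing grid points."""
    a = math.log(t_up / t_dwn) / (math.log(p_up) - math.log(p_dwn))
    b = math.log(t_dwn) - a * math.log(p_dwn)   # b = ln T - a * ln P
    return math.exp(a * math.log(p) + b)

# At the geometric midpoint of the pressure interval, ln(p) sits halfway
# between the endpoints, so the result is the geometric mean of the
# two temperatures -- a convenient sanity check.
t_mid = loglog_interp(math.sqrt(100.0 * 1000.0), 100.0, 1000.0, 250.0, 300.0)
```

Because the model is linear in log space, endpoints are reproduced exactly (up to rounding), which is a quick way to validate an implementation like the inherited one.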

The precision of the long double output is not correct. What might be wrong?

Submitted by 爷，独闯天下 on 2019-12-24 10:29:57
Question: I have a long double constant that I am setting either as const or non-const. It is longer (40 digits) than the precision of a long double on my test workstation (19 digits). When I print it, it is no longer displayed at 19 digits of precision, but at 16. Here is the code I am testing: #include <iostream> #include <iomanip> #include <limits> #include <cstdio> int main () { const long double constLog2 = 0.6931471805599453094172321214581765680755; long double log2 = 0…
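
The question is cut off, but one plausible explanation (an assumption, since the full code is not shown) is that the 40-digit literal carries no L suffix, so C++ rounds it to a plain double (about 16–17 significant digits) before widening it to long double. Python floats are IEEE-754 doubles, which makes the same rounding easy to observe:

```python
# A 40-digit literal assigned to a Python float (an IEEE-754 double) is
# rounded to the nearest double, just like an unsuffixed C++ literal.
const_log2 = 0.6931471805599453094172321214581765680755
shortest = repr(const_log2)   # shortest string that round-trips exactly
```

Only about 17 of the 40 digits survive; everything after that was discarded at parse time, before any printing happens.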

C++ exp function gives different results under x64 on i7-3770 and i7-4790

Submitted by 我怕爱的太早我们不能终老 on 2019-12-24 08:16:04
Question: When I execute a simple x64 application with the following code, I get different results on Windows PCs with i7-3770 and i7-4790 CPUs. #include <cmath> #include <iostream> #include <limits> void main() { double val = exp(-10.240990982718174); std::cout.precision(std::numeric_limits<double>::max_digits10); std::cout << val; } Result on i7-3770: 3.5677476354876406e-05 Result on i7-4790: 3.5677476354876413e-05 When I modify the code to call unsigned int control_word; _controlfp_s(&control_word,…
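
Such last-digit differences between CPU generations are commonly attributed to the math library dispatching different exp code paths depending on available instruction sets (e.g. FMA/AVX2 on the i7-4790); that attribution is an inference, not something stated in the truncated question. What can be checked is how far apart the two printed values actually are:

```python
import math

a = 3.5677476354876406e-05   # i7-3770 result quoted in the question
b = 3.5677476354876413e-05   # i7-4790 result quoted in the question

# math.ulp(a) is the spacing between a and the next representable double,
# so this counts how many representable values separate the two results.
ulps_apart = round((b - a) / math.ulp(a))
```

The two results are adjacent doubles: each implementation returned one of the two representable values closest to the true exp, which is within the accuracy most libm implementations promise.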

Storing numbers with higher precision in C

Submitted by 我们两清 on 2019-12-24 05:41:12
Question: I am writing a program in which I need to store numbers with very high precision (around 10^-10) and then use them as a parameter (create_bloomfilter([yet to decide the type] falsePositivity, long expected_num_of_elem)). The highest precision I am able to get is with double (around 10^-6), which is not sufficient. How can we store numbers with higher precision in C? Answer 1: You have been misinformed about double. The smallest positive number you can store in a double is…
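
The answer's point can be made concrete: a double holds 1e-10 with room to spare. Its relative precision is about 2.2e-16 and the smallest positive subnormal is about 5e-324, so the observed 10^-6 limit very likely comes from output formatting (printf's %f defaults to six fractional digits), not from the type. A quick check in Python, whose float is a C double:

```python
import sys

tiny = 1e-10
eps = sys.float_info.epsilon                 # ~2.22e-16, relative precision
min_sub = sys.float_info.min * eps           # smallest subnormal, 2**-1074

# 1e-10 is far above both limits, so it survives arithmetic unharmed:
# adding it to 1.0 still changes the result.
visible = (1.0 + tiny != 1.0)
```

The fix for the original symptom is usually a format change (e.g. %.12g instead of %f), not a wider type.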

(Racket/Scheme) Subtraction yields result off by incredibly small margin

Submitted by 喜你入骨 on 2019-12-24 02:07:48
Question: I'm currently messing around with "How To Design Programs", using Scheme/Racket; I've come across a really peculiar feature in the R5RS version of Scheme. When conducting a simple subtraction, albeit using values with decimal-point accuracy, answers are minutely off what would be expected. For example, given the following subtraction operation: (- 5.00 4.90) => 0.09999999999999965 when one should surely expect a clean 0.10? Whole-number subtraction works as expected: > (- 5 4) => 1 > (…
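
This is not a Racket quirk: 4.9 has no finite binary expansion, so the stored inexact number differs slightly from 4.9, and the subtraction exposes that error. The same IEEE-754 double arithmetic can be demonstrated in Python, with exact rational arithmetic alongside for contrast (Scheme's own exact numbers, e.g. (- 5 49/10), behave like the Fraction version):

```python
from fractions import Fraction

diff = 5.00 - 4.90                       # binary doubles: not exactly 0.1
exact = Fraction(5) - Fraction(49, 10)   # exact rational arithmetic

# The error is on the order of one part in 10^16 -- invisible unless
# the full repr is printed, which is exactly what the REPL does.
error = abs(diff - 0.1)
```

Rounding for display (or using exact numbers throughout) is the usual remedy; the underlying arithmetic is behaving as specified.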

Possible loss of precision between two different compiler configurations

Submitted by ℡╲_俬逩灬. on 2019-12-24 00:53:48
Question: I am currently stuck on a problem at work that involves a possible loss of precision when the compiler configuration is changed from Debug to Release, which have different levels of optimization. For some reason, elsewhere in our code, extremely large values have been used for covariance matrices (and things of that sort), somewhere along the lines of 1e90. The problem I'm encountering is that whenever there is any sort of loss of precision in a calculation and one of these extremely…
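
The entry is truncated, but the symptom it describes is characteristic of absorption: near 1e90 the gap between adjacent doubles is roughly 1e74, so any term smaller than about half that vanishes entirely when added. A minimal illustration:

```python
big = 1e90

# Near 1e90, adjacent doubles are ~1.13e74 apart, so a 1e70 addend is
# smaller than half the gap and is absorbed without a trace.
lost = (big + 1e70) - big   # exactly 0.0

# A 1e75 addend exceeds the gap and survives, though rounded to a
# multiple of the local spacing rather than kept exactly.
kept = (big + 1e75) - big
```

With 1e90 entries in a covariance matrix, every quantity below ~1e74 is noise to the arithmetic, which is why optimization-level differences (e.g. FMA contraction in Release builds) can flip results that were only ever marginally determined.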

jOOQ: casting String to BigDecimal

Submitted by 人走茶凉 on 2019-12-23 22:29:17
Question: Is there a way to cast a String to a BigDecimal in a jOOQ query without losing precision? When I do endResER.VALUE.cast(BigDecimal.class), where VALUE is a field with a String value in the database, it returns a BigDecimal without any fraction digits. I need to compare two amounts that are saved as Strings in the DB. Answer 1: You can cast your value to a SQLDataType like this: endResER.VALUE.cast(SQLDataType.DECIMAL.precision(10, 5)). Beware, though, that there is a known issue for jOOQ 3.1: #2708.

Numpy too weak to calculate a precise mean value

Submitted by 百般思念 on 2019-12-23 20:46:16
Question: This question is very similar to this post, but not exactly. I have some data in a .csv file. The data has precision to the 4th digit (#.####). Calculating the mean in Excel or SAS gives a result with precision to the 5th digit (#.#####), but using numpy gives: import numpy as np data = np.recfromcsv(path2file, delimiter=';', names=['measurements'], dtype=np.float64) rawD = data['measurements'] print np.average(rawD) gives a number like this: #.#####999999999994 Clearly something is wrong… using…
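
The trailing ...999999999994 digits are ordinary double round-off, not a numpy defect: Excel and SAS round for display, while numpy's print shows more digits of the same binary result. Comparing naive left-to-right summation with math.fsum, which rounds the exact sum only once, makes the effect visible:

```python
import math

# 0.1 is not exactly representable in binary, so repeated addition
# accumulates a visible error before the final division.
data = [0.1] * 10
naive_mean = sum(data) / len(data)

# math.fsum tracks the exact running sum and rounds once at the end.
accurate_mean = math.fsum(data) / len(data)
```

Formatting the output (e.g. round(mean, 5) or '%.5f') reproduces the Excel/SAS figure; for genuinely decimal-exact statistics, the decimal module is the heavier alternative.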

How to prevent BigDecimal from truncating results?

Submitted by Deadly on 2019-12-23 19:19:53
Question: Follow-up to this question: I want to calculate 1/1048576 and get the correct result, i.e. 0.00000095367431640625. Using BigDecimal's / truncates the result: require 'bigdecimal' a = BigDecimal.new(1) #=> #<BigDecimal:7fd8f18aaf80,'0.1E1',9(27)> b = BigDecimal.new(2**20) #=> #<BigDecimal:7fd8f189ed20,'0.1048576E7',9(27)> n = a / b #=> #<BigDecimal:7fd8f0898750,'0.953674316406E-6',18(36)> n.to_s('F') #=> "0.000000953674316406" <- should be ...625. This really surprised me, because I was under…
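
In Ruby, BigDecimal#/ computes to a default number of digits; passing an explicit digit count to BigDecimal#div (e.g. a.div(b, 30)) yields the full result. For contrast, Python's decimal module makes the working precision explicit, and this particular quotient comes out exact because 1/2^20 has a terminating 20-digit decimal expansion, well within the default 28-digit context:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28            # default context precision; shown for clarity
n = Decimal(1) / Decimal(2**20)   # exact: fits comfortably within 28 digits

# Fixed-point formatting, analogous to Ruby's to_s('F').
as_fixed = format(n, 'f')
```

The lesson transfers directly: with arbitrary-precision decimal types, division precision is a property of the call or context, not of the operands, so it must be requested explicitly when the default is too short.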

Sleep function in C on Windows: does a function with better precision exist?

Submitted by a 夏天 on 2019-12-23 18:16:00
Question: I was wondering if anyone knew of a better sleep function that could be used on Windows in C, other than Sleep(), which takes a millisecond input and only guarantees that the input is the minimum amount of time that elapses. I am passing in 1 millisecond but actually getting a 15-16 millisecond delay. Is there any way to accurately set a specified sleep time? Answer 1: No, not really. When you tell your program to sleep, you're giving up the processor and letting the operating system decide what…
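
The 15-16 ms figure matches the default Windows timer interrupt period (~15.6 ms), which Sleep() is quantized to; raising the system timer resolution with timeBeginPeriod(1) is the traditional workaround, at a cost in power consumption. The answer's caveat, that sleep only promises a minimum, can be measured from any language; a portable sketch in Python:

```python
import time

requested = 0.001  # ask for 1 ms
start = time.perf_counter()
time.sleep(requested)
elapsed = time.perf_counter() - start

# sleep() guarantees only a lower bound; the overshoot above `requested`
# is the scheduler's timer granularity plus dispatch latency.
overshoot = elapsed - requested
```

Running this on a default-configured Windows machine typically shows a large overshoot, while most Linux systems come much closer to the requested 1 ms, which is the cross-platform point the original answer is making.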