precision

.NET Framework Library for arbitrary digit precision

Submitted by 吃可爱长大的小学妹 on 2019-12-18 09:20:49
Question: I'm reviving this question and making it more specific: is there a .NET Framework library that supports numbers with an arbitrary number of digits of precision? Answer 1: There are a few options here. A good option is W3b.Sine, which is native C#/.NET and supports arbitrary-precision floating-point values. If you are only dealing with integer values, IntX provides support for arbitrary-precision integers. A potentially more mature option would be C# BigInt, but again, this will not support floating …
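
The .NET libraries above are the answer's suggestions and can't be run here; as a language-neutral illustration of what "arbitrary digits of precision" buys you, Python's standard decimal module behaves the same way, with precision set per context rather than fixed by the hardware format:

```python
from decimal import Decimal, getcontext

# Request 40 significant digits -- far beyond the ~16 of a binary64 double.
getcontext().prec = 40

third = Decimal(1) / Decimal(3)
print(third)  # 0.3333... with 40 significant digits
```

The point is the same one the answer makes about W3b.Sine: the precision is a property of the library's context, not of the machine word.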

Why is the number of significant digits in a floating-point number 7 or 6?

Submitted by 这一生的挚爱 on 2019-12-18 07:20:51
Question: I see this in Wikipedia: log10(2^24) = 7.22. I have no idea why we should calculate 2^24, or why we should take log10… I really need your help. Answer 1: Why is the number of significant digits in a floating-point number 7 or 6? Consider some thoughts employing the pigeonhole principle: a binary32 float can encode about 2^32 different numbers exactly. The numbers one can write in text, like 42.0, 1.0, 3.1415623…, are infinite, even if we restrict ourselves to a range like -10^38 … +10^38. Any time …
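
The arithmetic in the question can be checked directly: a binary32 significand holds 24 bits (23 stored plus 1 implicit), so it distinguishes 2^24 values, and log10 converts that count into equivalent decimal digits:

```python
import math

# 24 significand bits give 2**24 distinct significand patterns.
patterns = 2 ** 24
digits = math.log10(patterns)
print(patterns)  # 16777216
print(digits)    # ~7.22: more than 6 full decimal digits, not quite 8
```

Because 7.22 is not a whole number, some 7-digit decimal values round-trip through binary32 and some do not, which is exactly why the answer is "7 or 6" rather than a single figure.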

Floating point precision in Python array

Submitted by 折月煮酒 on 2019-12-18 06:19:13
Question: I apologize for the really simple and dumb question; however, why is there a difference in the precision displayed for these two cases?

1) >>> test = numpy.array([0.22])
   >>> test2 = test[0] * 2
   >>> test2
   0.44

2) >>> test = numpy.array([0.24])
   >>> test2 = test[0] * 2
   >>> test2
   0.47999999999999998

I'm using Python 2.6.6 on 64-bit Linux. Thank you in advance for your help. This also seems to hold for a plain Python list:

>>> t = [0.22]
>>> t
[0.22]
>>> t = [0.24]
>>> t
[0.23999999999999999]

Answer 1: Because …
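
The difference is only in the printed representation: neither 0.22 nor 0.24 is exactly representable as a binary double, and Python 2.6's repr showed 17 digits, exposing the error for one value but not the other. Printing the exact stored value (here via the decimal module) makes the underlying situation visible:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary64 value, not a rounded display form.
print(Decimal(0.22))  # close to, but not exactly, 0.22
print(Decimal(0.24))  # close to, but not exactly, 0.24
```

Since Python 3.1, repr instead prints the shortest string that round-trips to the same double, so both values now display as written; the stored values are unchanged.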

Strange output when comparing same float values?

Submitted by 喜欢而已 on 2019-12-18 05:23:16
Question: (Related: Comparing Same Float Values In C; strange output in comparison of float with float literal; Float addition promoted to double?) I read the above links on floating point, but I am still getting strange output.

#include <stdio.h>
int main() {
    float x = 0.5;
    if (x == 0.5)
        printf("IF");
    else if (x == 0.5f)
        printf("ELSE IF");
    else
        printf("ELSE");
}

Now, according to the promotion rules, shouldn't "ELSE IF" be printed? But here it is printing "IF". EDIT: Is it because 0.5 = 0.1 in binary and …
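
The program prints "IF" because 0.5 is a power of two and so has an exact binary representation: the float holds exactly 0.5, promotion to double loses nothing, and the first comparison succeeds. A sketch of the same effect, using Python's struct module to simulate storing a value in a 4-byte C float:

```python
import struct

def roundtrip_through_float32(x):
    """Store x as a 4-byte C float, then read it back as a double."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(roundtrip_through_float32(0.5) == 0.5)  # True: 0.5 is exact in binary
print(roundtrip_through_float32(0.1) == 0.1)  # False: 0.1 is not
```

Had the program used 0.1 instead of 0.5, the promoted float would differ from the double literal and the comparison would fail, which is the scenario the linked questions describe.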

Is a hard-coded float exact if it can be represented in IEEE 754 binary format?

Submitted by 混江龙づ霸主 on 2019-12-18 04:44:12
Question: For example, 0, 0.5, 0.15625, 1, 2, 3, … are values exactly representable in IEEE 754. Are their hard-coded versions exact? For example, does

float a = 0;
if (a == 0) { return true; }

always return true? Another example:

float a = 0.5;
float b = 0.25;
float c = 0.125;

Is a * b always equal to 0.125, and is a * b == c always true? And one more example:

int a = 123;
float b = 0.5;

Is a * b always 61.5? Or, in general, is an integer multiplied by an IEEE 754 binary float exact? Or a more general question: if the value is hard-coded …
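
For these particular examples the answer is yes: IEEE 754 requires correctly rounded results, so when the mathematical result is exactly representable, no rounding error can occur. A quick check (Python floats are IEEE 754 binary64, so the same reasoning applies as for C doubles):

```python
# 0.5, 0.25, 0.125 are powers of two; their product is exactly representable.
a, b, c = 0.5, 0.25, 0.125
print(a * b == c)         # True

# Small integers convert exactly, and 123 * 0.5 = 61.5 is representable too.
print(123 * 0.5 == 61.5)  # True

# But exactness is per-operation: 0.1 is not representable, so neither
# operand nor product is exact here.
print(0.1 * 0.1 == 0.01)  # False
```

The general rule: a hard-coded literal is exact only if its value fits the format, and an operation on exact operands is exact only if the true result also fits.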

Why can't I get a p-value smaller than 2.2e-16?

Submitted by 痴心易碎 on 2019-12-17 23:07:11
Question: I've found this issue with t-tests and chi-squared tests in R, but I assume it applies generally to other tests. If I do:

a <- 1:10
b <- 100:110
t.test(a, b)

I get: t = -64.6472, df = 18.998, p-value < 2.2e-16. I know from the comments that 2.2e-16 is the value of .Machine$double.eps, the smallest floating-point number x such that 1 + x != 1, but of course R can represent numbers much smaller than that. I know also from the R FAQ that R has to round floats to 53 binary digits of accuracy: R FAQ …
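
The threshold being quoted is machine epsilon, a property of the binary64 format rather than of R. The same constant is exposed in Python, where its defining behaviour around 1.0 can be checked directly:

```python
import sys

eps = sys.float_info.epsilon   # same value as R's .Machine$double.eps
print(eps)                     # 2.220446049250313e-16, i.e. 2**-52

print(1.0 + eps != 1.0)        # True: eps is the gap to the next double above 1.0
print(1.0 + eps / 2 == 1.0)    # True: anything smaller is absorbed
```

Note that epsilon bounds *relative* spacing near 1.0; doubles can represent numbers far smaller than eps (down to about 5e-324), which is why reporting "p-value < 2.2e-16" is a display convention, not a representational limit.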

Half-precision floating-point in Java

Submitted by 帅比萌擦擦* on 2019-12-17 22:30:05
Question: Is there a Java library anywhere that can perform computations on IEEE 754 half-precision numbers or convert them to and from double precision? Either of these approaches would be suitable: keep the numbers in half-precision format and compute using integer arithmetic and bit-twiddling (as MicroFloat does for single and double precision), or perform all computations in single or double precision, converting to/from half precision for transmission (in which case what I need is well-tested …
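
As a reference point for testing the second (convert-for-transmission) approach in any language, Python's standard struct module has supported the IEEE 754 binary16 format code 'e' since Python 3.6, which makes round-trip conversions easy to sketch:

```python
import struct

def double_to_half_bits(x):
    """Round a double to binary16 and return the 16-bit pattern."""
    return struct.unpack('<H', struct.pack('<e', x))[0]

def half_bits_to_double(bits):
    """Widen a binary16 bit pattern back to a double (lossless direction)."""
    return struct.unpack('<e', struct.pack('<H', bits))[0]

print(hex(double_to_half_bits(1.0)))  # 0x3c00: sign 0, exponent 15, mantissa 0
print(half_bits_to_double(0x3c00))    # 1.0
```

The narrowing direction rounds (binary16 has only 11 significand bits), while widening is exact, so half → double → half always round-trips.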

Remove More Than 2 Trailing Zeros

Submitted by 扶醉桌前 on 2019-12-17 21:29:05
Question: I have read many questions on Stack Overflow; what I want is to remove two or more trailing zeros after the decimal point, i.e.:

12.00 ==> 12
12.30 ==> 12.30
12.35 ==> 12.35
12.345678 ==> 12.34

Answer 1:

NSNumberFormatter *twoDecimalPlacesFormatter = [[[NSNumberFormatter alloc] init] autorelease];
[twoDecimalPlacesFormatter setMaximumFractionDigits:2];
[twoDecimalPlacesFormatter setMinimumFractionDigits:0];
return [twoDecimalPlacesFormatter stringFromNumber:number];

Answer 2: I like @dorada's answer; here …
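
Note that the formatter in Answer 1 strips single trailing zeros too (12.30 becomes "12.3", with a minimum of 0 fraction digits), which the examples forbid. A sketch of the rule the examples actually describe (truncate to two decimals, then drop them only when both are zero), written in Python for illustration:

```python
from decimal import Decimal, ROUND_DOWN

def trim(x):
    """12.00 -> '12', 12.30 -> '12.30', 12.345678 -> '12.34' (truncated)."""
    d = Decimal(str(x)).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    s = str(d)
    return s[:-3] if s.endswith(".00") else s

for value in (12.00, 12.30, 12.35, 12.345678):
    print(trim(value))
```

ROUND_DOWN matches the 12.345678 ==> 12.34 example (truncation rather than rounding); going through Decimal avoids the classic floor(x * 100) pitfalls with values like 12.30.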

Higher-precision floating point using the Boost library (more than 16 digits)

Submitted by 泄露秘密 on 2019-12-17 20:12:04
Question: I am running a simulation of physical experiments, so I need really high floating-point precision (more than 16 digits). I use Boost.Multiprecision; however, I can't get precision higher than 16 digits, no matter what I try. I run the simulation with C++ and the Eclipse compiler, for example:

#include <boost/math/constants/constants.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <limits>

using boost::multiprecision::cpp_dec_float_50;

void main() { cpp_dec …
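
The usual resolution to this question is that the arithmetic is already done at 50 digits, but std::cout prints with its default 6-digit precision; the fix is std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10) before printing. The same "compute wide, print wide" point can be demonstrated with Python's decimal module, whose context precision plays the role of cpp_dec_float_50:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50        # 50 significant digits, like cpp_dec_float_50
root2 = Decimal(2).sqrt()
print(root2)                  # all 50 digits appear because str() shows them
```

In Python the full precision prints by default; the C++ surprise comes entirely from the output stream's formatting, not from the multiprecision type.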

Determine precision and scale of particular number in Python

Submitted by 流过昼夜 on 2019-12-17 19:24:37
Question: I have a variable in Python containing a floating-point number (e.g. num = 24654.123), and I'd like to determine the number's precision and scale values (in the Oracle sense), so 123.45678 should give me (8, 5), 12.76 should give me (4, 2), etc. I was first thinking about using the string representation (via str or repr), but that fails for large numbers (although I understand now that it's the limitations of floating-point representation that are the issue here):

>>> num = 1234567890.0987654321
>>> …
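
One way to implement this is a sketch using the standard decimal module; going through str(num) inherits the float's rounding (which is also why the very large literal above loses digits before the function ever sees them):

```python
from decimal import Decimal

def precision_and_scale(num):
    """Oracle-style (precision, scale), e.g. 123.45678 -> (8, 5)."""
    sign, digits, exponent = Decimal(str(num)).as_tuple()
    if exponent >= 0:                    # integer: trailing zeros still count
        return len(digits) + exponent, 0
    scale = -exponent                    # digits after the decimal point
    return max(len(digits), scale), scale

print(precision_and_scale(123.45678))  # (8, 5)
print(precision_and_scale(12.76))      # (4, 2)
```

The max() handles leading-zero fractions like 0.005, where the scale exceeds the count of significant digits.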