precision

Java - How to avoid loss of precision during divide and cast to int?

江枫思渺然 submitted on 2019-12-11 12:50:00
Question: I have a situation where I need to find out how many times an int goes into a decimal, but in certain cases I'm losing precision. Here is the method: public int test(double decimalAmount, int divisor) { return (int) (decimalAmount / (1d / divisor)); } The problem with this is that if I pass in 1.2 as the decimal amount and 5 as the divisor, I get 5 instead of 6. How can I restructure this so I know how many times 5 goes into the decimal amount as an int? Answer 1: public int test(double decimalAmount,
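The root cause is that 1d / 5 is stored as a value just above 0.2, so the quotient lands just below 6 and the (int) cast truncates it. A minimal sketch of one way around this (multiply and round instead of dividing by a reciprocal; my own illustration, not the accepted answer, and it assumes the true quotient is expected to be essentially an integer):

```java
public class DivideIntoDecimal {
    // Returns how many whole times `divisor` (as 1/divisor) fits into `decimalAmount`.
    public static int test(double decimalAmount, int divisor) {
        // Multiplying avoids the inexact reciprocal 1d / divisor, and
        // Math.round absorbs the remaining last-bit error before truncation.
        return (int) Math.round(decimalAmount * divisor);
    }

    public static void main(String[] args) {
        System.out.println(test(1.2, 5)); // 6
    }
}
```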

SQLite: make sure input data is the correct length

最后都变了- submitted on 2019-12-11 12:29:22
Question: I asked a question to which I got a great answer, but which brings up many other questions. Say I've created a table: CREATE TABLE test(my_id INTEGER(2)); How can I make sure that when INSERTING data in there (or importing from CSV, actually) the field is exactly an INTEGER(2), not INTEGER(1) or anything else it would dynamically stretch to...? If I cannot, are there memory/performance issues with this? Thanks! Answer 1: All values imported from CSV files are strings (but type affinity might change that)
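SQLite ignores the (2) in the column declaration, so the length cannot be enforced by the type alone; a CHECK constraint is the usual workaround. A hedged sketch in Java via JDBC (assumes the Xerial sqlite-jdbc driver is on the classpath, and reads INTEGER(2) as "at most two decimal digits"):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SqliteRangeCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement st = conn.createStatement()) {
            // SQLite ignores the (2) in INTEGER(2); a CHECK constraint enforces
            // the intended range instead (two decimal digits, as assumed here).
            st.execute("CREATE TABLE test(my_id INTEGER CHECK(my_id BETWEEN -99 AND 99))");
            st.execute("INSERT INTO test VALUES (42)");        // accepted
            try {
                st.execute("INSERT INTO test VALUES (12345)"); // rejected by the constraint
            } catch (SQLException e) {
                System.out.println("Rejected: " + e.getMessage());
            }
        }
    }
}
```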

Test Number For Maximum Precision In Java

雨燕双飞 submitted on 2019-12-11 11:10:08
Question: I would like to test a double for a maximum precision of 3 decimal places or less. What is the best way to do this in Java? 20.44567567 <- Fail 20.444 <- Pass 20.1 <- Pass 20 <- Pass Answer 1: 1) Do not use double. Floating-point logic is approximate at best. Use BigDecimal instead. 2) I think BigDecimal already has a way of setting a precision. If not, just multiply by 1000 and truncate. Do the operation, get a new number, and compare it to the original one. If it is different, fail. Answer 2: This passes your tests:
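A small sketch of the BigDecimal approach (my own illustration, not the quoted answer; it piggybacks on Double.toString, which gives the shortest decimal string that round-trips the double):

```java
import java.math.BigDecimal;

public class PrecisionCheck {
    // True when the value, written in decimal, needs at most 3 digits after
    // the decimal point. The string round-trip is a pragmatic shortcut:
    // the double itself has no decimal scale.
    static boolean hasAtMostThreeDecimals(double value) {
        BigDecimal d = new BigDecimal(Double.toString(value)).stripTrailingZeros();
        return d.scale() <= 3;
    }

    public static void main(String[] args) {
        System.out.println(hasAtMostThreeDecimals(20.44567567)); // false
        System.out.println(hasAtMostThreeDecimals(20.444));      // true
        System.out.println(hasAtMostThreeDecimals(20.1));        // true
        System.out.println(hasAtMostThreeDecimals(20));          // true
    }
}
```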

VBScript: Expanding precision to 16 decimals to circumvent scientific notation?

拥有回忆 submitted on 2019-12-11 10:10:14
Question: I've searched for this answer but cannot find anything for VBS. For instance: dim num num = 1234567890123456 'a 16 digit number msgbox num Working with num in any way will result in the number being displayed in scientific notation. How can I avoid this? Answer 1: The 16-digit number is changed to a Double by VBScript because neither Int nor Long can store that number. You can use the FormatNumber function to display it as an integer: FormatNumber(Expression, NumDigitsAfterDecimal,
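The same storage-vs-display effect can be reproduced in Java (shown here as an analogue rather than VBScript): a 16-digit value held in a double prints in scientific notation by default, while explicit formatting, or a 64-bit integer type, shows it in full.

```java
public class ScientificNotationDemo {
    public static void main(String[] args) {
        double num = 1234567890123456d;     // 16-digit value stored in a double
        System.out.println(num);            // 1.234567890123456E15
        System.out.printf("%.0f%n", num);   // 1234567890123456
        long exact = 1234567890123456L;     // an integer type sidesteps the issue
        System.out.println(exact);          // 1234567890123456
    }
}
```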

Why not use a two's complement based floating-point?

我怕爱的太早我们不能终老 submitted on 2019-12-11 09:45:00
Question: The IEEE 754 standard for float64, float32 and float16 uses a signed significand and a biased exponent. As a student designing hardware architectures, it makes more sense to me to use two's complement for the significand and exponent parts. For example, a 32-bit (single precision) float is defined such that the first bit represents the sign, the next 8 bits the exponent (biased by 127), and the last 23 bits the mantissa. To implement addition/multiplication (of negative numbers), we need to convert the mantissa to two's
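For reference, a small Java sketch (added here as an illustration) that pulls apart the sign-magnitude layout the question describes for single precision: 1 sign bit, 8 biased exponent bits, 23 mantissa bits.

```java
public class FloatLayout {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(-6.5f);  // -6.5 = -1.101b * 2^2
        int sign = (bits >>> 31) & 0x1;          // 1 bit, sign-magnitude
        int exponent = (bits >>> 23) & 0xFF;     // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;          // 23 bits, implicit leading 1
        System.out.printf("sign=%d exponent=%d (unbiased %d) mantissa=0x%06X%n",
                sign, exponent, exponent - 127, mantissa);
        // prints: sign=1 exponent=129 (unbiased 2) mantissa=0x500000
    }
}
```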

Losing precision when multiplying Doubles in C

末鹿安然 submitted on 2019-12-11 09:33:29
Question: I am trying to repeatedly multiply the decimal part of a double number, about 500 times. This number starts to lose precision as time goes on. Is there any trick to make the continued multiplication accurate? double x = 0.3; double binary = 2.0; for (int i = 0; i < 500; i++) { x = x * binary; printf("x equals to : %f\n", x); if (x >= 1.0) x = x - 1; } OK, after I read some of the things you posted, I am thinking: how could I remove this unwanted stuff from my number to keep the multiplication stable? For instance, in
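One hedged workaround (my sketch, not a quoted answer) is to run the loop in exact decimal arithmetic; in Java that means BigDecimal, constructed from the string "0.3" so the starting value is not already the approximated binary double:

```java
import java.math.BigDecimal;

public class ExactDoubling {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("0.3");   // exact decimal, not the double 0.3
        BigDecimal two = new BigDecimal(2);
        BigDecimal one = BigDecimal.ONE;
        for (int i = 0; i < 500; i++) {
            x = x.multiply(two);                // doubling stays exact
            if (x.compareTo(one) >= 0) {
                x = x.subtract(one);            // keep only the fractional part
            }
            System.out.println("x equals to : " + x);
        }
    }
}
```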

BigDecimal variable padded with random numbers in JDBC Insert

泪湿孤枕 submitted on 2019-12-11 07:06:24
Question: Consider the SQL statement INSERT INTO table (field) VALUES (-0.11111111), where field has the Oracle type NUMBER. When the value to be inserted is of type float or double, you get the exact value in field, i.e. -0.11111111. But when the value to be inserted is of type BigDecimal, you get the value padded with random numbers, i.e. 0.1111111099999999990428634077943570446223. Why? Java states that "BigDecimal is an immutable, arbitrary-precision signed decimal number." The code is: String sql = "INSERT
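Those "random" trailing digits are characteristic of building the BigDecimal from a double, which captures the exact binary value rather than the decimal the caller wrote. A sketch of the likely cause (an assumption on my part; the original code above is truncated):

```java
import java.math.BigDecimal;

public class BigDecimalConstruction {
    public static void main(String[] args) {
        // Constructing from a double captures the exact binary value, which is
        // not exactly -0.11111111 (the padded digits match those in the question):
        System.out.println(new BigDecimal(-0.11111111));

        // Constructing from the decimal string, or using valueOf (which goes
        // through Double.toString), keeps the value the caller actually wrote:
        System.out.println(new BigDecimal("-0.11111111"));   // -0.11111111
        System.out.println(BigDecimal.valueOf(-0.11111111)); // -0.11111111
    }
}
```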

Fix precision issues when *displaying* floats in Python

穿精又带淫゛_ submitted on 2019-12-11 07:03:17
Question: I'm reading a text file with some float numbers using np.loadtxt. This is what my numpy array looks like: x = np.loadtxt(t2) print(x) array([[ 1.00000000e+00, 6.61560000e-13], [ 2.00000000e+00, 3.05350000e-13], [ 3.00000000e+00, 6.22240000e-13], [ 4.00000000e+00, 3.08850000e-13], [ 5.00000000e+00, 1.11170000e-10], [ 6.00000000e+00, 3.82440000e-11], [ 7.00000000e+00, 5.39160000e-11], [ 8.00000000e+00, 1.75910000e-11], [ 9.00000000e+00, 2.27330000e-10]]) I separate out the first column
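The general point, that display precision is separate from the stored value, holds outside numpy too; a Java analogue, added purely for illustration:

```java
public class DisplayVsStorage {
    public static void main(String[] args) {
        double v = 6.6156e-13;
        // Formatting changes only the printed text; the stored double is untouched.
        System.out.println(v);                        // 6.6156E-13 (default shortest form)
        System.out.println(String.format("%.4e", v)); // 6.6156e-13
        System.out.println(String.format("%.2e", v)); // 6.62e-13
    }
}
```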

Print 30 digits of a double value in C++

雨燕双飞 submitted on 2019-12-11 06:42:45
Question: My understanding is that numeric_limits::max_digits10 gives the max number of digits after the decimal point that are available. But if I pass setprecision() a value that is greater than this, it still gives me nonzero digits beyond this max value: assert(std::numeric_limits<double>::max_digits10 == 17); std::cout << std::setprecision(30) << double(.1) << '\n'; This prints out: 0.100000000000000005551115123126 Are the digits beyond 17 not to be trusted to be accurate? Answer 1: Converting the 53
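A Java illustration of the same point (added here, not part of the quoted answer): the digits past the 17th are not garbage, they are the start of the exact decimal expansion of the binary double nearest to 0.1, but they carry no extra information about the decimal value 0.1, and 17 significant digits (max_digits10) already suffice to round-trip the double.

```java
import java.math.BigDecimal;

public class ExactDoubleExpansion {
    public static void main(String[] args) {
        // The exact value of the double nearest to 0.1:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
        // Printing double(.1) with precision 30 in C++ shows a prefix of
        // this same expansion.
    }
}
```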

Precision discrepancy between Fortran and Python (sin function)

旧巷老猫 submitted on 2019-12-11 06:15:15
Question: I see a discrepancy between Python and Fortran when using the sine function. Could anyone shed light on this, please? In Python: import math print(math.sin(6.28318530717959)) >> 3.3077843189710302e-15 In Fortran 90: print*, sin(6.28318530717959d0) >> 3.3077720792452914E-15 EDIT: As it seems to be a Fortran compiler issue, I used g95 with g95 -O3 test.f90 -o test.exe Answer 1: According to the IEEE 754 float representation: In [7]: bin(3.3077720792452914e-15.view(np.uint64)) Out[7]:
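The same sensitivity can be seen within a single language; a hedged Java sketch (the printed values are not asserted here): the argument sits right next to a zero of sin, so the result is dominated by how each library reduces the argument modulo 2π, and implementations that each stay within their allowed error can still disagree in the last bits.

```java
public class SinNearTwoPi {
    public static void main(String[] args) {
        double x = 6.28318530717959;            // close to, but not exactly, 2*pi
        // Math.sin may use a platform/intrinsic routine (specified to be within
        // 1 ulp); StrictMath.sin is the fdlibm reference. They may or may not
        // agree to the last bit for this argument, and both are conforming.
        System.out.println(Math.sin(x));
        System.out.println(StrictMath.sin(x));
    }
}
```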