precision

Wrong value returned from mysql float

此生再无相见时 submitted on 2019-12-10 11:49:02
Question: I have a table with a high-precision value stored as Float. When I query the table for that value, it returns a rounded-off value, rounded to the first digit. But when I run the query below, I get the value that I stored: SELECT MY_FLOAT_COL*1 FROM MY_TABLE; What's going on inside MySQL?

Answer 1: If you want to store exact values, you'd use the DECIMAL data types. From the manual on FLOAT: The FLOAT and DOUBLE types represent approximate numeric data values. MySQL uses four bytes for single-precision…
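To see why a 4-byte FLOAT column cannot hold a high-precision value, here is a minimal C++ sketch; the value 123.45678901 is only an illustrative stand-in for whatever was stored in the column:

```cpp
#include <cstdio>

int main() {
    // Value the application intended to store (hypothetical example).
    double intended = 123.45678901;
    // MySQL FLOAT is a 4-byte single-precision value, like C++ float.
    float stored = static_cast<float>(intended);

    // Single precision keeps only ~6-7 significant decimal digits,
    // so the value read back differs from the one written.
    std::printf("intended: %.10f\n", intended);
    std::printf("stored  : %.10f\n", static_cast<double>(stored));
    return 0;
}
```

As the answer suggests, a DECIMAL column is the type to use when exact values matter.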

Rails precision error

北城以北 submitted on 2019-12-10 10:24:37
Question: When I run this in my Rails application: my_envelope.transactions.sum(:amount) this SQL is shown in the log files: SQL (0.3ms) SELECT SUM("transactions"."amount") AS sum_id FROM "transactions" WHERE (envelope_id = 834498537) and this value is returned: <BigDecimal:1011be570,'0.25159999999999997E2',27(27)> As you can see, the value is 25.159999…; it should be 25.16. When I run the same SQL on the database myself, the correct value is returned. I'm a little confused because I know that there…
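The 25.159999… total is characteristic of summing amounts as binary floating point before the result reaches BigDecimal. A minimal C++ sketch of the same effect, with made-up amounts whose exact decimal sum is 25.16:

```cpp
#include <cstdio>

int main() {
    // Hypothetical transaction amounts; the exact decimal sum is 25.16.
    double amounts[] = {10.05, 5.05, 10.06};

    double sum = 0.0;
    for (double a : amounts) sum += a;      // each literal is already an approximation

    std::printf("%.17f\n", sum);            // very close to, but not exactly, 25.16

    // Summing integer cents sidesteps binary rounding entirely.
    long cents = 1005 + 505 + 1006;
    std::printf("%ld.%02ld\n", cents / 100, cents % 100);   // 25.16
    return 0;
}
```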

Hexfloat manipulator and precision

六眼飞鱼酱① submitted on 2019-12-10 10:24:30
Question: How come output using the hexfloat manipulator ignores any precision set on the ostream? #include <iostream> #include <cmath> #include <iomanip> using namespace std; int main(){ cout << setw(17) << left << "default format: " << setw(20) << right << 100 * sqrt(2.0) << " " << cout.precision() << '\n' << setw(17) << left << "scientific: " << setw(20) << right << scientific << 100 * sqrt(2.0) << " " << cout.precision() << '\n' << setw(17) << left << "fixed decimal: " << fixed << setw(20) << right << 100…
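A small C++ example of the behaviour being asked about: with std::hexfloat the stream prints the exact binary representation (as printf's %a does with no precision given), so the precision setting that fixed and scientific honour has no effect:

```cpp
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double v = 100 * std::sqrt(2.0);

    std::cout << std::setprecision(3);
    std::cout << std::fixed      << v << '\n';  // 141.421   (precision honoured)
    std::cout << std::scientific << v << '\n';  // 1.414e+02 (precision honoured)
    std::cout << std::hexfloat   << v << '\n';  // exact bits, precision ignored
    return 0;
}
```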

Ordering operation to maximize double precision

巧了我就是萌 submitted on 2019-12-10 09:56:25
Question: I'm working on a tool that computes numbers that can get close to 1e-25 in the worst cases and compares them to each other, in Java. I'm obviously using double precision. I have read in another answer that I shouldn't expect more than 1e-15 to 1e-17 precision, and this other question deals with getting better precision by ordering operations in a "better" order. Which double-precision operations are more prone to lose precision along the way? Should I try to work with numbers as big as…
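One concrete way ordering and algorithm choice matter: adding many tiny terms to a much larger one loses them unless the rounding error is compensated. A C++ sketch of naive summation versus Kahan (compensated) summation, with made-up magnitudes:

```cpp
#include <cstdio>

// Kahan (compensated) summation: carries the rounding error of each addition
// in a separate term, which helps when adding values of very different magnitudes.
double kahan_sum(const double* x, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; ++i) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }
    return sum;
}

int main() {
    // Hypothetical data: one large term plus a million tiny ones.
    const int n = 1000000;
    static double data[n + 1];
    data[0] = 1.0;
    for (int i = 1; i <= n; ++i) data[i] = 1e-18;

    double naive = 0.0;
    for (int i = 0; i <= n; ++i) naive += data[i];   // tiny terms absorbed by 1.0

    std::printf("naive : %.17g\n", naive);                   // stays at 1.0
    std::printf("kahan : %.17g\n", kahan_sum(data, n + 1));  // close to 1.000000000001
    return 0;
}
```

Sorting the terms by increasing magnitude before a naive sum also helps, though compensated summation is usually the more robust fix.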

atof and stringstream produce different results

天涯浪子 submitted on 2019-12-10 05:45:16
Question: I have been looking into a problem whereby I am converting a float to a human-readable format and back; namely, a string. I have run into issues using stringstream and found that atof produces "better" results. Note that I do not print out the data in this case; I used the debugger to retrieve the values: const char *val = "73.31"; std::stringstream ss; ss << val << '\0'; float floatVal = 0.0f; ss >> floatVal; //VALUE IS 73.3100052 floatVal = atof(val); //VALUE IS 73.3099976 There is probably a…
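Neither parse can return exactly 73.31, because 73.31 has no finite binary representation; the nearest float is roughly 73.3099976. A small C++ check (without the extra '\0' the question pushes into the stream) shows both routes agreeing on that nearest float:

```cpp
#include <cstdio>
#include <cstdlib>
#include <sstream>

int main() {
    const char* val = "73.31";

    // atof parses to double; the assignment then rounds that double to float.
    float viaAtof = static_cast<float>(std::atof(val));

    // operator>> parses directly into the float.
    std::stringstream ss(val);
    float viaStream = 0.0f;
    ss >> viaStream;

    // Both are the float nearest to 73.31, printed with more digits than a float holds.
    std::printf("atof        : %.9f\n", viaAtof);
    std::printf("stringstream: %.9f\n", viaStream);
    return 0;
}
```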

logistic / sigmoid function implementation numerical precision

落爺英雄遲暮 submitted on 2019-12-10 02:52:05
Question: In scipy.special.expit, the logistic function is implemented like the following: if x < 0 a = exp(x) a / (1 + a) else 1 / (1 + exp(-x)) However, I have seen implementations in other languages/frameworks that simply do 1 / (1 + exp(-x)). I am wondering how much benefit the scipy version actually brings. For very small x, the result approaches 0. It works even if exp(-x) overflows to Inf.

Answer 1: It's really just for stability - putting in values that are very large in magnitude might return…
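A C++ rendering of the two variants, to make the stability point concrete; the branch for negative x uses exp(x), which underflows harmlessly toward 0, instead of exp(-x), which overflows for very negative x:

```cpp
#include <cmath>
#include <cstdio>

// Branching form, mirroring the scipy implementation described above.
double stable_sigmoid(double x) {
    if (x < 0.0) {
        double a = std::exp(x);   // underflows toward 0 for very negative x
        return a / (1.0 + a);
    }
    return 1.0 / (1.0 + std::exp(-x));
}

double naive_sigmoid(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}

int main() {
    // For x = -1000, exp(-x) overflows to Inf. 1/(1+Inf) still collapses to 0,
    // but the overflow is raised, and a form like exp(-x)/(1+exp(-x)) would
    // give Inf/Inf = NaN; the branching version avoids the overflow entirely.
    std::printf("naive : %g\n", naive_sigmoid(-1000.0));
    std::printf("stable: %g\n", stable_sigmoid(-1000.0));
    return 0;
}
```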

Determine the decimal precision of an input number

♀尐吖头ヾ submitted on 2019-12-10 02:48:57
Question: We have an interesting problem where we need to determine the decimal precision of a user's input (textbox). Essentially we need to know the number of decimal places entered and then return a precision number; this is best illustrated with examples:
4500 entered will yield a result of 1
4500.1 entered will yield a result of 0.1
4500.00 entered will yield a result of 0.01
4500.450 entered will yield a result of 0.001
We are thinking of working with the string, finding the decimal separator and then calculating…
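The string-based approach the question sketches is straightforward; here is a minimal C++ version (assuming '.' is the decimal separator, and counting trailing zeros the user actually typed):

```cpp
#include <cstdio>
#include <string>

// Returns the "precision number" described above: 1 for no decimals,
// 0.1 for one decimal place, 0.01 for two, and so on.
double input_precision(const std::string& text) {
    std::size_t dot = text.find('.');
    if (dot == std::string::npos) return 1.0;
    std::size_t places = text.size() - dot - 1;
    double p = 1.0;
    for (std::size_t i = 0; i < places; ++i) p /= 10.0;
    return p;
}

int main() {
    const char* samples[] = {"4500", "4500.1", "4500.00", "4500.450"};
    for (const char* s : samples)
        std::printf("%-10s -> %g\n", s, input_precision(s));  // 1, 0.1, 0.01, 0.001
    return 0;
}
```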

PI and accuracy of a floating-point number

倖福魔咒の submitted on 2019-12-10 02:19:23
Question: A single/double/extended-precision floating-point representation of Pi is accurate up to how many decimal places?

Answer 1: #include <stdio.h> #define E_PI 3.1415926535897932384626433832795028841971693993751058209749445923078164062 int main(int argc, char** argv) { long double pild = E_PI; double pid = pild; float pif = pid; printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n", "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899", pif, pid, pild); return 0; } Results: [quassnoi #…
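The short version of what the answer's program shows: float carries roughly 6-7 accurate significant decimal digits and double roughly 15-16. A compact C++ check against the reference digits quoted in the answer:

```cpp
#include <cstdio>

int main() {
    // Reference digits of pi (from the answer's printf string, truncated).
    const char* ref = "3.14159265358979323846264338327950288";

    float  pif = 3.14159265358979323846f;  // rounded to the nearest float
    double pid = 3.14159265358979323846;   // rounded to the nearest double

    std::printf("ref   : %s\n", ref);
    std::printf("float : %.20f\n", pif);   // diverges after ~7 significant digits
    std::printf("double: %.20f\n", pid);   // diverges after ~16 significant digits
    return 0;
}
```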

Determine MAX Decimal Scale Used on a Column

让人想犯罪 __ submitted on 2019-12-10 01:02:54
Question: In MS SQL, I need an approach to determine the largest scale being used by the rows of a certain decimal column. For example, Col1 Decimal(19,8) has a scale of 8, but I need to know whether all 8 digits are actually being used, or only 5, 6, or 7 of them. Sample data:
123.12345000
321.43210000
5255.12340000
5244.12345000
For the data above, I'd need the query to return either 5, or 123.12345000 or 5244.12345000. I'm not concerned about performance; I'm sure a full table scan will be in order. I…
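The question is about T-SQL, but the underlying logic is language-independent: render each value at the declared scale, strip trailing zeros, and take the longest remaining fraction. A C++ sketch of that logic (double is used here only to keep the example short; an exact decimal type, or the database itself, would normally hold the values):

```cpp
#include <algorithm>
#include <cstdio>
#include <string>

// Format a value at the column's declared scale, strip trailing zeros,
// and count the fractional digits that remain.
int used_scale(double value, int declaredScale) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.*f", declaredScale, value);
    std::string s(buf);
    std::size_t dot = s.find('.');
    std::size_t last = s.find_last_not_of('0');
    if (last == dot) return 0;            // nothing but zeros after the point
    return static_cast<int>(last - dot);
}

int main() {
    double rows[] = {123.12345000, 321.43210000, 5255.12340000, 5244.12345000};
    int maxScale = 0;
    for (double v : rows)
        maxScale = std::max(maxScale, used_scale(v, 8));
    std::printf("max scale used: %d\n", maxScale);   // 5 for the sample data
    return 0;
}
```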

How can I fix error code C4146 "unary minus operator applied to unsigned type, result still unsigned"?

耗尽温柔 submitted on 2019-12-09 15:56:00
Question: Data type int's minimum value is -2,147,483,648, so I typed int val = -2147483648; But it gives an error: unary minus operator applied to unsigned type, result still unsigned. How can I fix it?

Answer 1: 2147483648 is out of int range on your platform. Either use a type with more precision to represent the constant: int val = -2147483648L; // or int val = -2147483648LL; (depending on which type has more precision than int on your platform). Or resort to the good old "- 1" trick: int val = -2147483647 -…
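A minimal C++ illustration of the two fixes from the answer: build INT_MIN arithmetically, or just use the named constant. The literal 2147483648 does not fit in a 32-bit int, and on the asker's compiler it ends up with an unsigned type, which is what triggers C4146 when unary minus is applied to it:

```cpp
#include <climits>
#include <cstdio>

int main() {
    // "-2147483648" is not a single negative literal; it is unary minus
    // applied to a literal that is too large for int.
    int a = -2147483647 - 1;   // the "- 1" trick from the answer
    int b = INT_MIN;           // or simply use the named constant

    std::printf("%d %d\n", a, b);
    return 0;
}
```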