Precision

PTP sync on Windows Server (compared to e.g. Linux) - what precision can be guaranteed

Submitted by 纵饮孤独 on 2019-12-20 15:21:49
Question: I would like to know whether anyone knows how precisely PTP synchronization can be guaranteed on Windows Server 2008. I know about this thread: What is the minimum guaranteed time for a process in windows? which discusses Windows' native timekeeping, and no, that gives no guarantees at all. But what about hardware solutions (PTP)? Are there any limitations preventing a guarantee of < 1 ms? I know the processes depending on the time will be competing for CPU time, but if the process DOES

Python floating-point precision format specifier

Submitted by 旧城冷巷雨未停 on 2019-12-20 12:37:01
Question: Let's say I have some 32-bit numbers and some 64-bit numbers: >>> import numpy as np >>> w = np.float32(2.4) >>> x = np.float32(4.555555555555555) >>> y = np.float64(2.4) >>> z = np.float64(4.555555555555555) I can print them out with %f, but it shows extra, unneeded decimals: >>> '%f %f %f %f' % (w, x, y, z) '2.400000 4.555555 2.400000 4.555556' I can use %g, but it seems to have a small default precision: >>> '%g %g %g %g' % (w, x, y, z) '2.4 4.55556 2.4 4.55556' I was thinking I should use
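One direction the truncated question seems headed in is giving %g an explicit precision. A minimal sketch (the digit counts 7 and 17 are my choices, the conventional round-trip precisions for single and double floats, not anything the question specifies):

```python
import numpy as np

w = np.float32(2.4)
x = np.float32(4.555555555555555)
y = np.float64(2.4)
z = np.float64(4.555555555555555)

# %g takes an explicit precision = number of significant digits;
# ~7 digits are meaningful for float32, 17 digits round-trip any float64.
print('%.7g %.7g' % (w, x))
print('%.17g %.17g' % (y, z))

# repr() of a Python float prints the shortest string that round-trips.
print(repr(float(y)), repr(float(z)))
```

Note that %g strips trailing zeros, so '%.7g' % w prints just '2.4' even though seven significant digits were requested.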

How to use GCC 4.6.0 libquadmath and __float128 on x86 and x86_64

Submitted by 风格不统一 on 2019-12-20 09:20:00
Question: I have a medium-sized C99 program which uses the long double type (80-bit) for floating-point computation. I want to improve precision with the new GCC 4.6 extension __float128. As I understand it, this is software-emulated 128-bit precision math. How should I convert my program from the classic 80-bit long double to 128-bit quad floats with software emulation of full precision? What do I need to change - compiler flags, sources? My program reads full-precision values with strtod and does a lot of different

Can I stably invert a Vandermonde matrix with many small values in R?

Submitted by 爱⌒轻易说出口 on 2019-12-20 07:55:11
Question: Update on this question: I have closed this question and will post a new question focused on the R package Rmpfr. To conclude this question and to help others, I will post my code for the inverse of a Vandermonde matrix based on its explicit inverse formula. The generating terms are the x's in [here]1. I am not a skilled programmer, so I don't expect my code to be the most efficient. I post the code here because it is better than nothing. library(gtools) #input is the generation vector
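To illustrate why a naive inverse struggles here, a small numpy sketch (the generating points are hypothetical stand-ins, not the question's actual x's; small, closely spaced values are what make the matrix ill-conditioned):

```python
import numpy as np

# Hypothetical generating points - small and closely spaced.
x = np.array([0.01, 0.02, 0.03, 0.04, 0.05])
V = np.vander(x, increasing=True)

# A large condition number means inversion loses many digits
# in double precision.
print('condition number: %.3g' % np.linalg.cond(V))

# The residual ||V @ inv(V) - I|| shows how much accuracy the
# plain inverse actually delivers.
Vinv = np.linalg.inv(V)
residual = np.max(np.abs(V @ Vinv - np.eye(len(x))))
print('max residual of V @ inv(V) - I: %.3g' % residual)
```

This is exactly the situation where an explicit-formula inverse or extended precision (Rmpfr, as the update says) pays off over np.linalg.inv.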

SQL Server datatype precision - Neo, what is real?

Submitted by 你。 on 2019-12-20 04:54:16
Question: SQL Server 2000 documentation: Is floating-point number data with the following valid values: -3.40E+38 through -1.18E-38, 0, and 1.18E-38 through 3.40E+38. Storage size is 4 bytes. In SQL Server, the synonym for real is float(24). SQL Server 2005 documentation: The ISO synonym for real is float(24). EDIT: Given what I am reading, the precision is 7, but in my database (SQL Server 2005) I can enter a maximum of 9 digits; see below, as similarly stated there in question no. 7. Example: 0.180000082
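Since real is float(24), i.e. a 4-byte IEEE single, the apparent contradiction is that you can type 9 digits but only about 7 of them are significant. A quick numpy illustration, with np.float32 standing in for SQL Server's real (an assumption of this sketch, not something the question states):

```python
import numpy as np

# real in SQL Server = float(24) = IEEE single: 24-bit significand,
# which is roughly 7 significant decimal digits.
v = np.float32('0.180000082')

# The stored value is the nearest single to the 9-digit input;
# it differs from it by up to half an ulp (~7.5e-9 here).
print(float(v))

# numpy reports 6 as the number of decimal digits guaranteed
# to survive a round trip through float32.
print(np.finfo(np.float32).precision)
```

So the database accepts the 9-digit literal, but what it stores is a single-precision approximation of it.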

Objective-C - How to increase the precision of a float number

Submitted by 青春壹個敷衍的年華 on 2019-12-20 04:07:31
Question: Can someone please show me how to set the precision of a float number to a desired length? Say I have the number 2504.6. As you can see, the precision here is only 1; I want to set it to six. I need this because I compare this value with the value obtained from [txtInput.text floatValue]. Even if I enter 2504.6 into the text box, it adds 5 more digits and becomes 2504.600098, and when I compare the two values they appear to be unequal. Answer 1: You can compare the numbers using
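The truncated answer is pointing at tolerance-based comparison. A sketch in Python, using np.float32 to stand in for Objective-C's float (the epsilon value is an assumption to be tuned per application):

```python
import numpy as np

# What [txtInput.text floatValue] would hand back: the nearest
# single-precision value to the typed 2504.6.
entered = float(np.float32(2504.6))
print(entered)  # → 2504.60009765625, the "2504.600098" from the question

# Exact == fails, because a float widened to double keeps the
# single-precision rounding error.
print(entered == 2504.6)        # → False

# Compare within a tolerance instead; 1e-3 is an assumed epsilon,
# comfortably above float32 resolution (~2.4e-4) at this magnitude.
eps = 1e-3
print(abs(entered - 2504.6) < eps)  # → True
```

The question's real issue isn't "increasing the precision of a float" - a float simply cannot hold 2504.600000 exactly - it's that equality must be replaced by a tolerance check (or the value kept as a double/NSDecimalNumber end to end).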

exp() precision between Mac OS and Windows

Submitted by 廉价感情. on 2019-12-20 04:07:17
Question: I have some code here, and when I run it on Windows and Mac OS, the precision of the results is different; can anyone help? const double c = 1 - exp(-2.0); double x = (139 + 0.5) / 2282.0; x = ( 1 - exp(-2 * (1 - x))) / c; The printed results are both 0.979645005277687, but the hex representations differ: Win: 3FEF59407B6B6FF1 Mac: 3FEF59407B6B6FF2 How can I get the same result? Answer 1: How can I get the same result. Unless the math library on OS X uses the very same implementation/algorithm for calculating e ^ x ,
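The two hex values differ only in the last significand bit - exactly one ulp, which is what you expect when two libm implementations round exp() differently. A Python check (standard library only; math.ulp needs Python 3.9+):

```python
import math
import struct

def double_hex(d):
    # Big-endian hex of the IEEE-754 bits, matching the Win/Mac dumps above.
    return struct.pack('>d', d).hex().upper()

# Reproduce the computation; which of the two bit patterns you get
# depends on your platform's exp().
c = 1 - math.exp(-2.0)
x = (139 + 0.5) / 2282.0
x = (1 - math.exp(-2 * (1 - x))) / c
print(x, double_hex(x))

# Decode the two platform results and show they are one ulp apart.
win = struct.unpack('>d', bytes.fromhex('3FEF59407B6B6FF1'))[0]
mac = struct.unpack('>d', bytes.fromhex('3FEF59407B6B6FF2'))[0]
print(mac - win == math.ulp(win))  # → True
```

A one-ulp discrepancy is within the accuracy most math libraries promise for exp(), so getting bit-identical results across platforms generally requires using the same (e.g. correctly rounded) math library on both.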

Does scientific notation affect Perl's precision?

Submitted by [亡魂溺海] on 2019-12-20 03:20:30
Question: I encountered weird behaviour in Perl. The following subtraction should yield zero as the result (as it does in Python): print 7.6178E-01 - 0.76178 -1.11022302462516e-16 Why does this occur, and how can I avoid it? P.S. The effect appears on "v5.10.0 built for x86_64-linux-gnu-thread-multi" (Ubuntu 9.04) and "v5.8.9 built for darwin-2level" (Mac OS 10.6). Answer 1: It's not that scientific notation affects the precision so much as the limitations of floating-point numbers represented in binary. See the
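For contrast, in Python both spellings parse to the identical IEEE-754 double, so the subtraction is exactly zero - and the stray -1.11022302462516e-16 that Perl prints is exactly one ulp at this magnitude, i.e. the two strings were converted to doubles differing in the last bit (my reading of the symptom, not something the excerpt confirms):

```python
import math

# Python's string->double conversion is correctly rounded, so the
# scientific and decimal spellings hit the same bit pattern:
print(7.6178E-01 - 0.76178)   # → 0.0

# One ulp at 0.76178 (a value in [0.5, 1)) is 2**-53, which matches
# the magnitude of Perl's nonzero result:
print(math.ulp(0.76178))      # → 1.1102230246251565e-16
```

The fix on the Perl side is the same as everywhere with binary floats: compare with a tolerance (or round to a fixed number of digits) rather than expecting exact equality.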

Why is 24.0000 not equal to 24.0000 in MATLAB?

Submitted by て烟熏妆下的殇ゞ on 2019-12-20 03:10:37
Question: I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to checking whether those points are in the matrix, MATLAB can't recognize them even though they exist. In the following code, the intersections function gets the intersection points: [points(:,1), points(:,2)] = intersections(... obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ... [vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]); The result: >> points points =
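The usual fix is to deduplicate within a tolerance rather than by exact equality. A Python sketch with made-up points (the value 24.000000000000004, one ulp above 24.0, stands in for the invisible rounding difference MATLAB's display of "24.0000" hides):

```python
import numpy as np

# Hypothetical intersection points; rows 0 and 1 differ only by
# floating-point rounding, so they print identically at 4 decimals.
points = np.array([[24.0,               1.0],
                   [24.000000000000004, 1.0],
                   [25.0,               2.0]])

# Exact comparison sees three distinct rows:
print(len(np.unique(points, axis=0)))    # → 3

# Deduplicate within a tolerance instead (tol is an assumption,
# tune it to the noise level of your intersection computation):
tol = 1e-10
keep = []
for p in points:
    if not any(np.all(np.abs(p - q) < tol) for q in keep):
        keep.append(p)
print(len(keep))                         # → 2
```

The same idea in MATLAB would be abs(a - b) < tol (or uniquetol), instead of ismember/== on raw doubles.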