precision

Why 0.1 + 0.1 == 0.2?

。_饼干妹妹 submitted on 2019-12-10 22:09:25

Question: This is concerning Java. From what I've understood, 0.1 cannot be perfectly represented in Java because of its binary representation, and that makes 0.1 + 0.1 + 0.1 == 0.3 false. However, why does 0.1 + 0.1 == 0.2 give true?

Answer 1: "0.1 cannot be perfectly represented by Java because of binary representations. That makes 0.1 + 0.1 + 0.1 == 0.3 false." That is not the entire reason why the equality is false, although it is part of it. 0.3 is not exactly 3/10 either. It so happens that 0.2 is exactly…
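Python floats are IEEE 754 binary64, the same format as Java's double, so the comparisons from the question can be reproduced outside Java; a minimal sketch:

    # 0.1 + 0.1 happens to round to exactly the double nearest 0.2, while
    # 0.1 + 0.1 + 0.1 rounds to a value just above the double nearest 0.3.
    print(0.1 + 0.1 == 0.2)        # True
    print(0.1 + 0.1 + 0.1 == 0.3)  # False

    # Comparing the exact stored values makes the accident visible:
    from decimal import Decimal
    print(Decimal(0.1 + 0.1) == Decimal(0.2))        # True  -- identical doubles
    print(Decimal(0.1 + 0.1 + 0.1) == Decimal(0.3))  # False -- different doubles

The equality for 0.2 is a lucky cancellation of rounding errors, not evidence that the arithmetic is exact.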

How to improve the precision of the result due to the lack of precision in C++ division

流过昼夜 submitted on 2019-12-10 21:43:52

Question: I am working on the Leibniz question from https://www.hackerrank.com/challenges/leibniz, which computes 1 - 1/3 + 1/5 - 1/7 + 1/9 - … Each element in the sequence can be defined as a(i) = (-1)^i / (2*i + 1), with i starting from 0. The task requires adding the terms from the first up to the nth and outputting the result. My program passes the basic test cases but fails on the others. I suspect the fault is a loss of precision when the numbers get large enough. Can anybody provide a way to…
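The excerpt is cut off, but a standard remedy for accumulated rounding error in long sums is compensated (Kahan) summation. A minimal sketch in Python (the function name is mine; Python floats are the same binary64 doubles a C++ double would use):

    def leibniz_kahan(n):
        # Sum the first n terms of 1 - 1/3 + 1/5 - ... with Kahan compensation.
        total = 0.0
        comp = 0.0  # running compensation for lost low-order bits
        for i in range(n):
            term = (-1.0 if i % 2 else 1.0) / (2 * i + 1)
            y = term - comp
            t = total + y
            comp = (t - total) - y  # recover what the addition just dropped
            total = t
        return total

    print(4 * leibniz_kahan(10**6))  # 3.14159165...; slowly approaches pi

Summing the terms from smallest to largest (i.e. in reverse order) is another common fix, since it keeps small terms from being swallowed by a large running total.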

Does ARM support SIMD operations for 64 bit floating point numbers?

天大地大妈咪最大 submitted on 2019-12-10 19:31:58

Question: NEON can do SIMD operations on 32-bit floating-point numbers, but it does not do SIMD operations on 64-bit floating-point numbers. VFP is not SIMD; it can perform 32-bit or 64-bit floating-point operations on only one element at a time. Does ARM support SIMD operations for 64-bit floating-point numbers?

Answer 1: This is only possible on processors supporting ARMv8, and only when running the AArch64 instruction set; it is not possible in the AArch32 instruction set. However, most processors support 32-bit and 64-bit scalar floating-point…

Decimal to binary Half-Precision IEEE 754 in Python

烈酒焚心 submitted on 2019-12-10 19:17:21

Question: I was only able to convert a decimal into a binary single-precision IEEE 754 number using struct.pack, or do the opposite (float16 or float32) using numpy.frombuffer. Is it possible to convert a decimal to a binary half-precision floating-point number using NumPy? I need to print the result of the conversion, so if I type "117.0", it should print "0101011101010000".

Answer 1: if I type "117.0", it should print "0101011101010000"

    >>> import numpy as np
    >>> bin(np.float16(117.0).view('H'))[2:].zfill(16)
    '0101011101010000'
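For reference, the same bits can be produced without NumPy: the struct module gained the half-precision 'e' format code in Python 3.6, so an alternative to the answer above is:

    import struct

    # Pack as a little-endian float16, then reinterpret the two bytes
    # as an unsigned 16-bit integer and format it as 16 binary digits.
    bits = struct.unpack('<H', struct.pack('<e', 117.0))[0]
    print(format(bits, '016b'))  # 0101011101010000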

Convert double to Pascal 6-byte (48 bits) real format

旧城冷巷雨未停 submitted on 2019-12-10 19:03:07

Question: I need to do some work on data contained in legacy files. For this purpose, I need to read and write Turbo Pascal's 6-byte (48-bit) floating-point numbers from PHP. The Turbo Pascal data type is commonly known as real48 (specs). I have the following PHP code to read the format:

    /**
     * Convert a Turbo Pascal 48-bit (6-byte) real to a PHP float
     * @param binary 48-bit real (in binary) to convert
     * @return float number
     */
    function real48ToDouble($real48) {
        $byteArray = array_values(unpack('C*', …
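The PHP excerpt is cut off, but the real48 layout itself is documented: byte 0 is an exponent biased by 129 (an exponent byte of 0 means the value is 0.0), bytes 1–5 hold a 39-bit mantissa with an implicit leading 1, and the top bit of byte 5 is the sign. A minimal decoding sketch in Python (the function name is mine, not from the question):

    def real48_to_float(b):
        # b is a 6-byte sequence in Turbo Pascal storage order.
        if b[0] == 0:
            return 0.0                       # real48 encodes zero as exponent 0
        sign = -1.0 if b[5] & 0x80 else 1.0  # sign is the top bit of the last byte
        # Assemble the 39 mantissa bits: 7 bits from b[5], then b[4] down to b[1].
        mantissa = b[5] & 0x7F
        for byte in (b[4], b[3], b[2], b[1]):
            mantissa = (mantissa << 8) | byte
        fraction = 1.0 + mantissa / float(1 << 39)  # implicit leading 1
        return sign * fraction * 2.0 ** (b[0] - 129)

    print(real48_to_float(bytes([0x81, 0, 0, 0, 0, 0])))  # 1.0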

Am I going crazy or is Math.Pow broken?

空扰寡人 submitted on 2019-12-10 17:48:44

Question: I used the base converter from here and changed it to work with ulong values, but when converting large numbers, specifically numbers higher than 16677181699666568, it was returning incorrect values. I started looking into this and discovered that Math.Pow(3, 34) returns the value 16677181699666568, when actually 3^34 is 16677181699666569. This therefore throws a spanner in the works for me. I assume this is just an issue with double precision within the Pow method? Is my easiest fix just to…
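That assumption is right: Math.Pow works on doubles, whose 53-bit significand cannot hold 3^34 = 16677181699666569 exactly (it lies between 2^53 and 2^54, where consecutive doubles are 2 apart), so the nearest representable value 16677181699666568 comes back. Python makes the contrast easy to see, since its ** operator uses exact integer arithmetic:

    import math

    print(int(math.pow(3, 34)))  # 16677181699666568 -- rounded to the nearest double
    print(3 ** 34)               # 16677181699666569 -- exact integer exponentiation

The corresponding fix in a language like C# is an integer power routine (e.g. BigInteger.Pow) instead of the floating-point Math.Pow.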

Why does Python 3.4 give the wrong answer for division of large numbers, and how can I test for divisibility? [duplicate]

廉价感情. submitted on 2019-12-10 17:22:59

Question: This question already has answers here: python 3.1.2 gives wrong output when dividing two large numbers? (3 answers). Closed 6 months ago. In my program I'm using division to test whether the result is an integer, i.e. I'm testing divisibility. However, I'm getting wrong answers. Here is an example: print(int(724815896270884803/61)) gives 11882227807719424, while print(724815896270884803//61) gives the correct result of 11882227807719423. Why is the floating-point result wrong, and how can I test whether…
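The / operator returns a float, and the exact quotient 11882227807719423 is larger than 2^53, so it cannot be represented in a double; the nearest double ends in ...424. Integer operations, by contrast, are exact, and the modulo operator gives a direct divisibility test:

    n, d = 724815896270884803, 61

    print(n / d)       # 1.1882227807719424e+16 -- float division, rounded
    print(n // d)      # 11882227807719423      -- exact floor division
    print(n % d == 0)  # True -- the reliable way to test divisibility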

iOS 7.1 CommonCrypto library complains: Implicit conversion loses integer precision: 'NSUInteger' (unsigned long) to CC_LONG (unsigned int)

蓝咒 submitted on 2019-12-10 16:16:35

Question: I get the above error (in the title) while doing an MD5 of a file. I can usually cope with these kinds of 32-to-64-bit conversion issues, but in this case I do not know what I should do, as CC_MD5 is part of CommonCrypto → CommonDigest, a library that ships with iOS 7.1. I am assuming [inputData length] returns an NSUInteger, and therein lies the issue; however, can I simply cast down from unsigned long to unsigned int? I will possibly lose precision if the file is large. Why would a library that Apple ships with require…
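The usual way around a 32-bit length parameter is not to cast one huge buffer but to stream the data through the digest in bounded chunks, so no single call's length comes anywhere near the unsigned int limit; CommonCrypto's CC_MD5_Init/CC_MD5_Update/CC_MD5_Final supports this chunked style. A Python analog of the same pattern (helper name is mine):

    import hashlib

    def md5_of_file(path, chunk_size=1 << 20):
        # Feed the digest 1 MiB at a time; works for files of any size.
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()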

Get file modification time to nanosecond precision

爷,独闯天下 submitted on 2019-12-10 16:13:37

Question: I need to get the full nanosecond-precision modification timestamp for each file in a Python 2 program that walks the filesystem tree. I want to do this in Python itself, because spawning a new subprocess for every file would be slow. From the C library on Linux, you can get nanosecond-precision timestamps by looking at the st_mtime_nsec field of a stat result. For example:

    #include <sys/stat.h>
    #include <stdio.h>

    int main() {
        struct stat stat_result;
        if (!lstat("/", &stat_result)) {
            /* The excerpt is cut off here; on Linux/glibc the nanoseconds
               live in st_mtim.tv_nsec, so a plausible reconstruction is: */
            printf("mtime: %ld.%09ld\n",
                   (long)stat_result.st_mtim.tv_sec,
                   (long)stat_result.st_mtim.tv_nsec);
        }
        return 0;
    }
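As an aside for readers on Python 3: since Python 3.3, os.stat exposes the nanosecond timestamp directly, with no C extension needed (the Python 2 constraint in the question is what makes this hard there):

    import os

    st = os.stat('/')
    print(st.st_mtime_ns)  # integer nanoseconds since the epoch (Python 3.3+)
    print(st.st_mtime_ns // 10**9, st.st_mtime_ns % 10**9)  # (seconds, nanoseconds)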

Reading lossy file format (PRC), resulting in precision problems

五迷三道 submitted on 2019-12-10 15:42:56

Question: I got into making a viewer for various 3D file formats. The formats I had handled before didn't pose a problem, until I came to the PRC file format (one of the supported 3D formats that can be embedded in PDFs). I can extract all the data from the PDF and display the models that were encoded in non-lossy ways; however, when I try to decode what they call "Highly Compressed Tessellations" I run into what I think is a precision problem, but I don't quite know how to fix it or…