decimal

Decimal TryParse in VBA

Submitted by 亡梦爱人 on 2019-12-02 08:57:30
I am attempting to TryParse a decimal; however, I keep getting an "Object Required" run-time error, and I'm not certain what I'm doing wrong. I'm used to doing a TryParse in C#; this is VBA, so the language translation is not clicking just yet. Any help appreciated.

    Sub try()
        Dim val As Variant
        Dim res As Boolean
        res = Decimal.TryParse("2.5", val)  ' raises "Object Required"
        MsgBox (res & ":" & val)
    End Sub

Answer: VBA has no Decimal.TryParse — "Decimal" is not an object there, hence the error. You can try CInt and check for a specific error using On Error GoTo. Alternatively,

    res = CBool(Val("2.5"))

should do the trick here, since any value <> 0 will evaluate as True.

Source: https://stackoverflow.com/questions/23655251
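For readers coming from C#, the try-parse pattern the asker wants — return a success flag plus the parsed value instead of raising — can be sketched outside VBA. This is an illustrative Python analogue, not working VBA code:

```python
from decimal import Decimal, InvalidOperation

def try_parse_decimal(text):
    """Return (True, value) on success, (False, None) on failure."""
    try:
        return True, Decimal(text)
    except InvalidOperation:
        return False, None

print(try_parse_decimal("2.5"))  # (True, Decimal('2.5'))
print(try_parse_decimal("abc"))  # (False, None)
```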

How to convert Decimal to String with two digits after separator?

Submitted by ﹥>﹥吖頭↗ on 2019-12-02 08:44:38
Question: This is what I do now:

    extension Decimal {
        var formattedAmount: String {
            let formatter = NumberFormatter()
            formatter.generatesDecimalNumbers = true
            formatter.minimumFractionDigits = 2
            formatter.maximumFractionDigits = 2
            return formatter.string(from: self) // mismatch type
        }
    }

but I cannot create an NSNumber from Decimal.

Answer 1: This should work:

    extension Decimal {
        var formattedAmount: String? {
            let formatter = NumberFormatter()
            formatter.generatesDecimalNumbers = true
            formatter
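For comparison, the same requirement in Python: with decimal.Decimal, exactly two fraction digits can be fixed by quantizing before rendering. This is a sketch of the general technique, not the Swift answer:

```python
from decimal import Decimal, ROUND_HALF_UP

def formatted_amount(value: Decimal) -> str:
    # Quantize to exactly two fraction digits, then render as text.
    return str(value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

print(formatted_amount(Decimal("3.5")))    # 3.50
print(formatted_amount(Decimal("2.999")))  # 3.00
```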

Conversion from double to integer [duplicate]

Submitted by 北慕城南 on 2019-12-02 07:45:26
This question already has answers here:
C++: How to round a double to an int? [duplicate] (5 answers)
round() for float in C++ (21 answers)

I am stuck on a problem where a double is not properly converted to an integer. In this case:

    int x = 1000;
    double cuberoot = pow(x, (1 / (double)3));
    int a = cuberoot;
    cout << "cuberoot=" << cuberoot << endl;
    cout << "a=" << a << endl;

Output:

    cuberoot=10
    a=9

Why is a 9 here and not 10? Any solution to this problem? Also, I don't want to round the value: if a = 3.67, it should be converted to 3, not 4.

Because the cube root is very close to 10 but not quite
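The effect is not C++-specific: pow computes the cube root in binary floating point, so the result lands fractionally below 10, and truncating conversion drops it to 9. A Python sketch of the failure and a tolerance-based truncation that absorbs tiny representation error while still never rounding 3.67 up:

```python
import math

x = 1000
cuberoot = x ** (1 / 3)   # slightly below 10, not exactly 10
print(int(cuberoot))      # 9 — plain truncation loses the 10

def trunc_with_tolerance(v, eps=1e-9):
    # Nudge past representation error, then truncate toward zero.
    return math.floor(v + eps)

print(trunc_with_tolerance(cuberoot))  # 10
print(trunc_with_tolerance(3.67))      # 3 — still truncates, not rounds
```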

Output in MATLAB has a capped amount of decimal points [duplicate]

Submitted by 半世苍凉 on 2019-12-02 07:35:44
This question already has an answer here:
Is it possible in matlab to explicitly format the output numbers? (7 answers)

I have modified some code in MATLAB so that it will give me the root of the function cos(x) - 3*x. When I run the code and ask for the value of xnew (as xnew should equal the root of the function), it returns xnew to only 4 decimal points. I would like it to be more than this. Does anyone know why it is capping this value?

    x = 0;
    N = 100000;
    Tol = 0.00001;
    count = 1;
    while count <= N
        f = cos(x) - 3*x;
        Df = -sin(x) - 3;
        d = (f/Df);
        xnew = x - (d);
        if (abs(xnew - x))
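The 4-decimal cap is almost certainly MATLAB's default display setting (format short), not lost precision — format long shows more digits. To illustrate that the stored value keeps full precision and only the display is capped, here is the same Newton iteration sketched in Python, printed at two display widths:

```python
import math

x = 0.0
tol = 1e-5
for _ in range(100000):
    f = math.cos(x) - 3 * x
    df = -math.sin(x) - 3
    xnew = x - f / df          # Newton step toward the root of cos(x) - 3x
    if abs(xnew - x) < tol:
        break
    x = xnew

print(f"{xnew:.4f}")   # short display, like MATLAB's default format
print(f"{xnew:.15f}")  # the full precision was there all along
```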

PHP - Find the number of zeros in a decimal number

Submitted by 99封情书 on 2019-12-02 07:17:29
Let's say we have 0.00045. I want to find a way to count the number of "significant" zeros after the decimal point (3 in this case). I've been trying to implement strpos or substr, but I'm getting stuck. Other examples:

    3.006405:     should return 2
    0.0000062:    should return 5
    9.0100000008: should return 1

Any ideas?

Answer:

    strspn($num, "0", strpos($num, ".") + 1)

strspn finds the length of a sequence of zeros; strpos finds the position of the decimal point, and we start from 1 position past that. However, this doesn't work for 0.0000062, because it gets converted to scientific notation when converted
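The scientific-notation pitfall also exists in Python's str(float). A sketch of the same count that sidesteps it by rendering through decimal.Decimal in fixed-point form before counting:

```python
from decimal import Decimal

def leading_fraction_zeros(num) -> int:
    # Fixed-point rendering avoids "6.2e-06"-style strings.
    text = format(Decimal(str(num)), "f")
    frac = text.split(".", 1)[1] if "." in text else ""
    # Zeros between the decimal point and the first nonzero digit.
    return len(frac) - len(frac.lstrip("0"))

print(leading_fraction_zeros(0.00045))       # 3
print(leading_fraction_zeros(3.006405))      # 2
print(leading_fraction_zeros(0.0000062))     # 5
print(leading_fraction_zeros(9.0100000008))  # 1
```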

How to convert hexadecimal string to decimal?

Submitted by 老子叫甜甜 on 2019-12-02 07:16:04
I would appreciate it if you could tell me how I can convert hexadecimal letters within an NSString, e.g. @"50A6C2", to decimal using Objective-C. Thanks in advance.

The easiest way is to use an NSScanner, specifically the methods scanHexInt: or scanHexLongLong:. Another possibility is to get the C string from the NSString and use C-style functions such as strtol (with base 16).

Source: https://stackoverflow.com/questions/6002196/how-to-convert-hexadecimal-string-to-decimal
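The conversion itself is language-agnostic — parse base-16 digits into an integer. Using the question's own value, in Python:

```python
value = int("50A6C2", 16)  # parse base 16, like strtol(..., NULL, 16)
print(value)               # 5285570
print(hex(value))          # 0x50a6c2 — round-trips back
```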

Decimal point in calculations as . or ,

Submitted by 大城市里の小女人 on 2019-12-02 07:11:21
Question: If I use the decimal pad for number input, the decimal separator changes with the country and region format: it may be a point "." or a comma ",". And I do not have control over which device the app is used on. If the region format uses a comma, the calculation goes wrong: putting in 5,6 is treated sometimes the same as putting in only 5, and sometimes the same as 56 — even if I programmatically allow both . and , as input in a TextField. How do I come around this without using the numbers an
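One common workaround, sketched here in Python rather than iOS code, is to normalize the separator before parsing. Note that this simple version assumes no thousands separators ("1,234.5" would break it); on iOS itself, the locale-aware route is a NumberFormatter configured with the user's Locale:

```python
def parse_user_decimal(text: str) -> float:
    # Accept either "." or "," as the decimal separator.
    return float(text.replace(",", "."))

print(parse_user_decimal("5,6"))  # 5.6
print(parse_user_decimal("5.6"))  # 5.6
```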

Python Decimal vs C# decimal precision [duplicate]

Submitted by 梦想的初衷 on 2019-12-02 06:51:44
This question already has an answer here:
Why python decimal.Decimal precision differs with equable args? (2 answers)

I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a two-decimal number by 100 so I get rid of its decimals:

    >>> 4321.90 * 100
    432189.99999999994
    >>> Decimal(4321.90) * Decimal(100)
    Decimal('432189.9999999999636202119291')

I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result is close
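The stray digits come from constructing Decimal from a binary float literal, which bakes the float's representation error into the Decimal. Constructing from strings keeps the values and the arithmetic exact:

```python
from decimal import Decimal

# The float literal 4321.90 is already inexact, and Decimal preserves that error:
print(Decimal(4321.90) * Decimal(100))

# From strings, the product is exact — no rounding needed:
print(Decimal("4321.90") * Decimal("100"))  # 432190.00
```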

9-bit floating point representations using IEEE floating point format A and B

Submitted by 社会主义新天地 on 2019-12-02 06:41:07
I'm having some trouble with a problem I've run into dealing with floating point. I'm having a hard time moving from a floating-point representation to decimal values, and also from format A of the representation to format B.

The problem: Consider the following two 9-bit floating-point representations based on the IEEE floating-point format.

Format A:
There is one sign bit.
There are k = 5 exponent bits. The exponent bias is 15.
There are n = 3 fraction bits.

Format B:
There is one sign bit.
There are k = 4 exponent bits. The exponent bias is 7.
There are n = 4 fraction bits
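A decoder for such toy formats follows directly from the IEEE rules (normalized: 1.f x 2^(e-bias); denormalized: 0.f x 2^(1-bias); all-ones exponent: infinity or NaN). This Python sketch decodes a 9-bit pattern under either format, which makes A-to-B conversion a matter of decoding and re-encoding:

```python
def decode(bits: int, k: int, n: int, bias: int) -> float:
    """Decode a (1 + k + n)-bit IEEE-style pattern to its value."""
    sign = -1.0 if (bits >> (k + n)) & 1 else 1.0
    exp = (bits >> n) & ((1 << k) - 1)
    frac = bits & ((1 << n) - 1)
    if exp == (1 << k) - 1:                           # all ones: inf/NaN
        return sign * float("inf") if frac == 0 else float("nan")
    if exp == 0:                                      # denormalized
        return sign * (frac / (1 << n)) * 2.0 ** (1 - bias)
    return sign * (1 + frac / (1 << n)) * 2.0 ** (exp - bias)

# Format A: k=5, bias=15, n=3.  0 01111 000 encodes +1.0:
print(decode(0b0_01111_000, 5, 3, 15))  # 1.0
# Format B: k=4, bias=7, n=4.  0 0111 0000 encodes +1.0:
print(decode(0b0_0111_0000, 4, 4, 7))   # 1.0
```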
