int

int to string, char* itoa

Submitted by 巧了我就是萌 on 2020-01-06 07:55:22

Question: I am trying to get `sval` to contain the strings "$1" through "$500" for array indexes 0-499. In the following code, however, `itoa` is giving me strange strings:

```cpp
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

typedef struct data_t {
    int ival;
    char *sval;
} data_t;

void f1(data_t **d);

int main() {
    data_t *d;
    d = static_cast<data_t*>(malloc(500)); // is this even needed?
    d = new data_t[500];
    f1(&d);
}

/* code for function f1 to fill in array begins */
void f1
```

random element from array in c

Submitted by 扶醉桌前 on 2020-01-06 02:24:08

Question: How can I select a random element from an array of strings in C? For instance:

```c
char *array[19];
array[0] = "Hi";
array[1] = "Hello";
/* etc. */
```

I am looking for something like `array[rand]`, where `rand` is a random integer between 0 and one less than the array's length (0-18 for this 19-element array).

Answer 1: To start things off, since you have an array of strings, not of characters, you have to declare it as `char* array[19];`. Then, you can declare the following (always useful) macro: `#define ARR_SIZE(arr)`

Converting floating point numbers to integers, rounding to 2 decimals in JavaScript

Submitted by 梦想与她 on 2020-01-05 11:30:26

Question: What's the best way to perform the following conversions in JavaScript? I have currencies stored as floats that I want rounded and converted to integers:

```
1501.0099999999999909 -> 150101
12.00000000000001     -> 1200
```

Answer 1: One way to do this is to use the `toFixed` method of a `Number` combined with `parseFloat`. E.g.,

```javascript
var number = 1501.0099999999999909;
var truncated = parseFloat(number.toFixed(5));
console.log(truncated);
```

`toFixed` takes in the number of decimal points it should be truncated to. To get
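For currency specifically, a common alternative to `toFixed` is to scale to integer cents and round once, which directly produces the integers the question asks for. `toCents` is a hypothetical helper name, not from the original answer:

```javascript
// Round a currency amount to the nearest cent and express it as an
// integer number of cents, avoiding string round-trips entirely.
function toCents(amount) {
  return Math.round(amount * 100);
}

console.log(toCents(1501.0099999999999909)); // 150101
console.log(toCents(12.00000000000001));     // 1200
```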

Convert between nullable int and int

Submitted by 若如初见. on 2020-01-05 10:26:10

Question: I would like to do something like this:

```csharp
int? l = lc.HasValue ? (int)lc.Value : null;
```

where `lc` is a nullable enumeration type, say `EMyEnumeration?`. I want to test whether `lc` has a value; if so, give its int value to `l`, otherwise set `l` to null. But when I do this, C# complains: "Error: type of conditional expression cannot be determined as there is no implicit conversion between 'int' and '<null>'". How can I make it correct? Thanks in advance!

Answer 1: You have to cast null as well: `int? l = lc`
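The fix the answer begins to describe is to cast one of the two branches so the conditional expression has a single type. A complete sketch (the enum members here are invented for illustration):

```csharp
enum EMyEnumeration { A, B, C }

class Program
{
    static void Main()
    {
        EMyEnumeration? lc = EMyEnumeration.B;

        // Cast the null branch to int? so both arms of ?: share a type:
        int? l = lc.HasValue ? (int)lc.Value : (int?)null;

        System.Console.WriteLine(l);
    }
}
```

Casting the value branch instead, `(int?)lc.Value : null`, works equally well; the compiler only needs one branch to fix the common type.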

How to set to int value null? Java Android [duplicate]

Submitted by 假如想象 on 2020-01-05 09:00:29

Question: This question already has answers here: "What is the difference between an int and an Integer in Java and C#?" (26 answers). Closed 5 years ago.

Which is the best way to set an already defined `int` to null?

```java
private int xy() {
    int x = 5;
    x = null; // this is an ERROR
    return x;
}
```

So I chose this:

```java
private int xy() {
    Integer x = 5;
    x = null; // this is OK
    return (int) x;
}
```

Then I need something like:

```java
if (xy() == null) {
    // do something
}
```

And my second question: can I safely cast `Integer` to `int`? Thanks for
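A primitive `int` can never be null, so the usual pattern is to declare the *return type* as the wrapper `Integer`; the question's second snippet still fails because `(int) x` unboxes a null reference and throws `NullPointerException`. A sketch (the method name `xy` is kept from the question, with a parameter added for demonstration):

```java
public class NullableIntDemo {
    // Returning Integer (not int) makes "no value" representable as null.
    static Integer xy(boolean hasValue) {
        return hasValue ? 5 : null;
    }

    public static void main(String[] args) {
        Integer a = xy(true);
        Integer b = xy(false);

        if (b == null) {
            System.out.println("b has no value");
        }

        int unboxed = a;             // safe only because a is non-null
        System.out.println(unboxed); // prints 5

        // int bad = b;              // would throw NullPointerException
    }
}
```

So the answer to the second question: casting (unboxing) `Integer` to `int` is safe exactly when the reference has been checked to be non-null first.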

char to int conversion

Submitted by 十年热恋 on 2020-01-05 08:52:38

Question: So I have something like this:

```java
char cr = "9783815820865".charAt(0);
System.out.println(cr); // prints 9
```

If I do this:

```java
int cr = "9783815820865".charAt(0);
System.out.println(cr); // prints 57
```

I understand that the conversion between `char` and `int` is not simply from '9' to 9. My problem is that I need to keep the 9 as the int value, not 57. How do I get the value 9 instead of 57 as an `int`?

Answer 1: You can try:

```java
int cr = "9783815820865".charAt(0) - '0';
```

`charAt(0)` will return
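The subtraction works because the digit characters '0' through '9' occupy consecutive code points, so subtracting '0' (code point 48) from '9' (code point 57) yields 9. The standard library also offers `Character.getNumericValue` for the same job:

```java
public class DigitDemo {
    public static void main(String[] args) {
        String isbn = "9783815820865";

        // Digits '0'..'9' are consecutive code points, so subtracting
        // '0' maps a digit char to its numeric value:
        int viaSubtraction = isbn.charAt(0) - '0';

        // The library alternative, which also handles non-Latin digits:
        int viaLibrary = Character.getNumericValue(isbn.charAt(0));

        System.out.println(viaSubtraction); // prints 9
        System.out.println(viaLibrary);     // prints 9
    }
}
```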

Data type for an ID field (int or varchar)

Submitted by 左心房为你撑大大i on 2020-01-05 01:33:35

Question: What is an advisable data type for an ID field (such as state ID, web_form_id, or employeeID)? Is it better to use an int data type or varchar, and why?

Answer 1: For an ID, which is going to be unique to each record, it is recommended to go with `int` and, since this sounds like a SQL question, in auto_increment mode.

Source: https://stackoverflow.com/questions/6554630/data-type-for-an-id-field-int-or-varchar
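A minimal MySQL sketch of the recommendation (the table and column names are illustrative, not from the original question; `AUTO_INCREMENT` syntax is MySQL-specific, while other databases use sequences or identity columns):

```sql
CREATE TABLE employee (
    employee_id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- integer surrogate key
    name        VARCHAR(100) NOT NULL,
    PRIMARY KEY (employee_id)
);
```

Integer keys compare and join faster than varchar keys and take less index space, which is the practical reason behind the recommendation.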

How does sizeof work for int types?

Submitted by 北城以北 on 2020-01-04 05:43:29

Question: I have a small program which compares (1) `sizeof`, (2) `numeric_limits::digits`, and (3) the results of a loop, in an effort to make sure they all report the same thing regarding the size of the "int types" on any C++ implementation. However, because I don't know about the internals of `sizeof`, I have to wonder if it is just reporting `numeric_limits::digits`. Thanks.

Answer 1: Most likely `sizeof()` on most compilers causes the compiler to look the given type (or object's type) up in its internal type table

Different results when casting int and const int to float

Submitted by 只愿长相守 on 2020-01-04 02:38:04

Question: Would anyone be able to explain why `int` and `const int` give different results when cast to `float` and used in floating-point math? See for example this piece of code:

```cpp
int _tmain(int argc, _TCHAR* argv[])
{
    int x = 1000;
    const int y = 1000;
    float fx = (float) x;
    float fy = (float) y;
    printf("(int = 1000) * 0.3f = %4.10f \n", 0.3f*x);
    printf("(const int = 1000) * 0.3f = %4.10f \n", 0.3f*y);
    printf("(float)(int = 1000) * 0.3f = %4.10f \n", 0.3f*fx);
    printf("(float)(const int = 1000) * 0.3f = %4
```

dynamically determine the type of integer based on the system (c++)

Submitted by 廉价感情. on 2020-01-03 18:39:17

Question: I am writing a program that stores data to a file in units of 32 bits (i.e., 4 bytes at a time). I wrote the code on a 64-bit Windows system, but the compiler I used is 32-bit (mingw32). On that system, the sizes of `int` and `long` are the same: 32 bits (4 bytes). I am currently porting the code to other systems by recompiling with g++ (without changing the code). However, I found that the sizes of `int` and `long` differ depending on the system. Is there any way (like using a macro in