int

Why must a short be converted to an int before arithmetic operations in C and C++?

Question: From the answers I got from this question, it appears that C++ inherited from C the requirement that a short be converted to an int before arithmetic operations. May I pick your brains as to why this was introduced in C in the first place? Why not just do these operations as short? For example (taken from dyp's suggestion in the comments):

    short s = 1, t = 2;
    auto x = s + t;

x will have the type int.

Answer 1: If we look at the Rationale for International Standard—Programming Languages—C, it describes two competing schemes for promoting narrow operands, "unsigned preserving" and "value preserving", and records that the committee chose value preserving: operands narrower than int are widened to int (or unsigned int) before arithmetic, in part because most machines perform arithmetic most naturally at word width.
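
Not part of the original answer, but a minimal C++ sketch of the promotion in action (identifiers are illustrative):

    #include <type_traits>

    int main() {
        short s = 1, t = 2;
        auto x = s + t;  // both operands are promoted to int before the addition

        // x is deduced as int, not short
        static_assert(std::is_same<decltype(x), int>::value, "s + t has type int");
        return 0;
    }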

Convert char to int in Teradata SQL

Question: I'm trying to convert a column from CHAR(8) to integer in order to set up referential integrity with an integer column. It didn't work, so I tested a SELECT to check the cast. UTENTE_CD is a CHAR(8) column:

    SEL CAST(UTENTE_CD AS INTEGER) FROM TABLEA

Teradata produces this error: SELECT Failed. 2621: Bad character in format or data of TABLEA. Sometimes the char column also contains an alphanumeric code that I should discard.

Answer 1: In TD15.10 you can do TRYCAST(UTENTE_CD AS INTEGER), which will return NULL instead of failing when a value cannot be converted.
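
A hedged sketch of using that to discard the alphanumeric codes, assuming TD15.10 or later (table and column names as in the question):

    -- TRYCAST yields NULL for values that do not convert cleanly,
    -- so filtering on it keeps only the numeric codes.
    SELECT UTENTE_CD, TRYCAST(UTENTE_CD AS INTEGER) AS UTENTE_ID
    FROM TABLEA
    WHERE TRYCAST(UTENTE_CD AS INTEGER) IS NOT NULL;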

Math with an integer obtained from an EditText

Question: I don't know what I am doing wrong, but I have an EditText called input where the user writes a number, and then presses a button:

    input = (EditText) findViewById(R.id.input);
    XX = (Button) findViewById(R.id.XX);
    XX.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View arg0) {
            if (XX.equals("XX")) {
                String aa = String.valueOf(input);
                float operation = ((float) (0.029 * 2.2 * aa));
                String cadena = String.valueOf(operation);
                // set text to cadena (result of the operation)
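
No answer is included above; as a hedged sketch (not from the original post), the usual fix is to read the text out of the EditText and parse it to a number before doing arithmetic. The TextView named result is hypothetical:

    XX.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // Read what the user typed; String.valueOf(input) would
            // stringify the EditText object itself, not its contents.
            String aa = input.getText().toString();
            try {
                float value = Float.parseFloat(aa);            // String -> float
                float operation = (float) (0.029 * 2.2 * value);
                result.setText(String.valueOf(operation));     // 'result' is a hypothetical TextView
            } catch (NumberFormatException e) {
                result.setText("Please enter a valid number");
            }
        }
    });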

Creating a byte from an integer

Question: I want to create a byte from an integer. For example, I have an integer of 4 and want a byte with a value of 4. When I do something like byte test = (byte) 134; I get a byte value of -122, whereas I want a byte value of simply 134 so that I can convert this to ASCII later on. I am unsure of how to achieve this but would welcome any help that can be provided. Thanks.

Answer 1: In Java bytes are interpreted as signed (along with ints, shorts, and longs). The unsigned byte value 134 is bit-for-bit identical to the signed value -122; masking with & 0xFF recovers the unsigned value.
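
A short Java sketch of the signed/unsigned round trip (illustrative only):

    public class ByteDemo {
        public static void main(String[] args) {
            byte test = (byte) 134;        // stored bit pattern: 10000110
            System.out.println(test);      // prints -122: Java bytes are signed

            int unsigned = test & 0xFF;    // mask back to the unsigned value
            System.out.println(unsigned);  // prints 134
        }
    }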

Converting Int to CGFloat results in 0.000000

Question: I am attempting to take Int values from an array and convert them into CGFloats so that I can create a UIColor from those red, green, and blue values. Simple enough, right? I can successfully get the Int values out of the array, but when I attempt to convert to CGFloat, the result is 0.000000. Why is this?

    let colorArray = NSUserDefaults.standardUserDefaults().arrayForKey("colors")
    let redValue = colorArray[0] as Int
    let greenValue = colorArray[1] as Int
    let blueValue = colorArray[2] as Int
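
No answer text survives above; as a hedged sketch in the same pre-Swift-3 style as the question, the conversion itself is done with CGFloat's initializer, and the optional array needs unwrapping first:

    import UIKit

    if let colorArray = NSUserDefaults.standardUserDefaults().arrayForKey("colors") as? [Int] {
        // CGFloat(intValue) performs the numeric conversion; dividing by 255
        // assumes the stored values are 0-255 color components.
        let red   = CGFloat(colorArray[0]) / 255.0
        let green = CGFloat(colorArray[1]) / 255.0
        let blue  = CGFloat(colorArray[2]) / 255.0
        let color = UIColor(red: red, green: green, blue: blue, alpha: 1.0)
        print(color)
    }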

Cast to a 32-bit integer may result in truncation in PHP Propel?

Question: Looking at the source code of Propel (the PHP ORM library), I found this method inside the propel/propel1/runtime/lib/query/Criteria.php file:

    /**
     * Set offset.
     *
     * @param int $offset An int with the value for offset. (Note this values is
     *                    cast to a 32bit integer and may result in truncation)
     *
     * @return Criteria Modified Criteria object (for fluent API)
     */
    public function setOffset($offset)
    {
        $this->offset = (int) $offset;
        return $this;
    }

Why do the doc comments say that the value may be truncated?
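
No answer is included above; the hedge in the comment exists because PHP's native integer is platform-sized. A small PHP sketch (illustrative values) of where the truncation would come from:

    <?php
    // PHP_INT_SIZE is 4 on 32-bit builds and 8 on 64-bit builds,
    // so (int) can only hold up to 2147483647 on a 32-bit platform.
    var_dump(PHP_INT_SIZE);
    var_dump(PHP_INT_MAX);

    $offset = "3000000000";   // fits in a 64-bit int, not in a 32-bit int
    var_dump((int) $offset);  // 3000000000 on 64-bit; clamped/truncated on 32-bit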

Computer doesn't return -1 if I input a number equal to INT_MAX+1

Question: The type int is 4 bytes long, and I wrote a little program in C under Ubuntu to print the number I've just input. When I input 2147483648, i.e. 2^31, it prints 2147483647 rather than -1. The same thing happens when I input any number larger than 2147483647. Why doesn't it overflow to -1 as I learnt from the book, but instead seems to be truncated to INT_MAX? What happens at the bit level?

    #include <stdio.h>

    int main() {
        int x;
        scanf("%d", &x);
        printf("%d\n", x);
    }

I made a mistake. INT_MAX + 1 should equal INT_MIN (-2147483648), not -1.
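
No answer survives above; as a hedged aside, scanf("%d", ...) has undefined behavior when the input does not fit in an int (glibc happens to clamp to INT_MAX, matching the observation). A C sketch of detecting the overflow explicitly with strtol:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char buf[64];
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 1;

        errno = 0;
        long v = strtol(buf, NULL, 10);   /* parses into a long, flags overflow */
        if (errno == ERANGE)
            printf("out of range for long\n");
        else if (v > INT_MAX || v < INT_MIN)
            printf("fits in long but not in int: %ld\n", v);
        else
            printf("%d\n", (int)v);
        return 0;
    }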

Java int vs. Double

Question:

    public double calc(int v1) {
        return v1 / 2 + 1.5;
    }

    public double cald(double v) {
        return v / 2 + 1.5;
    }

Do the functions return the same result? I would argue that they don't, as the second function would keep the decimal part of the division, whereas the first would round the number down. Is that correct?

Answer 1: When you divide a by b, i.e. a / b: if both a and b are int, the result will be an int; if either or both of a and b are double, the result will be a double. Edit: Also see my answer.
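
A small Java demonstration of the difference (class and values are illustrative):

    public class DivisionDemo {
        public static double calc(int v1) {
            return v1 / 2 + 1.5;   // int / int: integer division, fraction discarded
        }

        public static double cald(double v) {
            return v / 2 + 1.5;    // double / int: promoted to double division
        }

        public static void main(String[] args) {
            System.out.println(calc(3));   // 2.5  (3 / 2 == 1, then + 1.5)
            System.out.println(cald(3));   // 3.0  (3.0 / 2 == 1.5, then + 1.5)
        }
    }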

Is it possible to define an integer-like object in Python that can also store instance variables?

Question: Is it possible to define a data object in Python that behaves like a normal integer when used in mathematical operations or comparisons, but is also able to store instance variables? In other words, it should be possible to do the following things:

    pseudo_integer = PseudoInteger(5, hidden_object="Hello World!")

    print(5 + pseudo_integer)            # Prints "10"
    print(pseudo_integer == 5)           # Prints "True"
    print(pseudo_integer.hidden_object)  # Prints "Hello World!"

Answer 1: Yes, it is. You can create your own class that subclasses int, so it inherits all of int's arithmetic and comparison behavior while still accepting extra attributes.
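
A minimal sketch of that idea (the class body is illustrative, not the original answer's code):

    class PseudoInteger(int):
        """Behaves like an int but can carry extra attributes."""

        def __new__(cls, value, hidden_object=None):
            # int is immutable, so the value must be set in __new__
            self = super().__new__(cls, value)
            self.hidden_object = hidden_object
            return self


    pseudo_integer = PseudoInteger(5, hidden_object="Hello World!")
    print(5 + pseudo_integer)            # 10
    print(pseudo_integer == 5)           # True
    print(pseudo_integer.hidden_object)  # Hello World!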

C++ self-enforcing a standard: size_t

Question: Simple question: would it be good for me to force myself to start using size_t (or unsigned long?) in places where I would normally use int when dealing with arrays or other large data structures? Say you have a pointer to a vector:

    auto myVectorPtr = &myVector;

Unknown to you, the size of this vector is larger than

    std::numeric_limits<int>::max();

and you have a loop:

    for (int i = 0; i < myVectorPtr->size(); ++i)

Wouldn't it be preferable to use

    for (size_t i = 0; i < myVectorPtr->size(); ++i)

to avoid the index overflowing?
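
No answer is included above; a hedged C++ sketch of the comparison, using the container's own unsigned size type so the index can never be narrower than size():

    #include <cstddef>
    #include <vector>

    int main() {
        std::vector<int> myVector(100);
        auto myVectorPtr = &myVector;

        // An int index may overflow on huge containers and triggers
        // -Wsign-compare, because size() returns an unsigned type:
        // for (int i = 0; i < myVectorPtr->size(); ++i) { ... }

        // std::size_t (or std::vector<int>::size_type) matches size() exactly.
        for (std::size_t i = 0; i < myVectorPtr->size(); ++i) {
            (*myVectorPtr)[i] = static_cast<int>(i);
        }
        return 0;
    }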