I just started testing with the iPhone 5S and the 64-bit architecture on an OpenGL ES app. The problem I'm seeing is that CGFloat values are way wrong by the time they get to the shader.
CGFloat is a variable typedef: on a 32-bit build it is single-precision, and on a 64-bit build it is double-precision. Normally this would not be a huge issue, but you are using glUniform4fv, which takes a GLfloat *.
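To make the mismatch concrete, here is a minimal sketch of the failure mode; colorLocation is a hypothetical uniform location, and the cast is what typically silences the incompatible-pointer warning the compiler would otherwise raise:

```c
#include <CoreGraphics/CGBase.h>  // CGFloat
#include <OpenGLES/ES2/gl.h>      // GLfloat, glUniform4fv

static void setColorUniform(GLint colorLocation)
{
    // On a 64-bit build, CGFloat is double, so this array holds
    // 4 doubles (32 bytes).
    CGFloat color[4] = {1.0, 0.0, 0.0, 1.0};

    // glUniform4fv reads 4 GLfloats (16 bytes). The cast makes the
    // compiler accept the call, but the doubles' bytes are reinterpreted
    // as floats, so the shader receives garbage on 64-bit devices.
    glUniform4fv(colorLocation, 1, (const GLfloat *)color);
}
```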
OpenGL stipulates that GLfloat is always a single-precision floating-point value, and the compiler can handle demotion from double- to single-precision when you use the non-pointer version of this function (glUniform4f). When you use the pointer version, no such conversion occurs: OpenGL expects to be passed an array of single-precision floats, but you pass it an array of double-precision floats with no type conversion.
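For contrast, the non-pointer form is safe even with CGFloat arguments, because the compiler converts each argument at the call site (colorLocation again being a hypothetical uniform location):

```c
CGFloat r = 1.0, g = 0.5, b = 0.25, a = 1.0;

// Each CGFloat argument is implicitly demoted to GLfloat at the call
// site, so this is correct on both 32-bit and 64-bit builds.
glUniform4f(colorLocation, r, g, b, a);
```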
What you need to do is stop using CGFloat and use GLfloat instead; the OpenGL typedefs are provided precisely so this sort of thing never happens.
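Applied to the sketch above, the fix is just to declare the data with the OpenGL typedef:

```c
// GLfloat is single-precision on every architecture, so the array layout
// always matches what glUniform4fv expects.
GLfloat color[4] = {1.0f, 0.0f, 0.0f, 1.0f};
glUniform4fv(colorLocation, 1, color);
```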