I tend to use CGFloat all over the place, but I wonder if I get a senseless "performance hit" with this. CGFloat seems to be something "heavier" than float, right? At wh
As others have said, CGFloat is a float on 32-bit systems and a double on 64-bit systems. However, that decision was inherited from OS X, where it was made based on the performance characteristics of early PowerPC CPUs. In other words, you should not think of float as being for 32-bit CPUs and double as being for 64-bit CPUs. (I believe Apple's ARM processors could handle doubles long before they went 64-bit.) The main performance cost of using doubles is that they take twice the memory, so code that churns through a lot of floating-point values touches twice as much data and may run slower due to cache and memory-bandwidth pressure.
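If you want to verify this on your own target, here is a minimal sketch in Swift (assuming an Apple platform where CoreGraphics is available) that prints the actual sizes and illustrates the memory-footprint point:

```swift
import CoreGraphics

// CGFloat is a type alias: Double on 64-bit platforms, Float on 32-bit ones.
// Sizes are in bytes.
print("CGFloat:", MemoryLayout<CGFloat>.size) // 8 on 64-bit, 4 on 32-bit
print("Double: ", MemoryLayout<Double>.size)  // always 8
print("Float:  ", MemoryLayout<Float>.size)   // always 4

// The practical cost is memory traffic: a large buffer of CGFloats is
// twice the size on a 64-bit platform that it would be with plain Floats.
let count = 1_000_000
let values = [CGFloat](repeating: 0, count: count)
print("buffer bytes:", values.count * MemoryLayout<CGFloat>.stride)
```

On a 64-bit device the buffer above occupies 8 MB instead of the 4 MB an equivalent `[Float]` would use, which is where the "doing a lot of floating point operations" caveat comes from.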