I was going through this example, which has a function that outputs the hex bit pattern of an arbitrary float:
#include <stdio.h>

void ExamineFloat(float fValue)
{
    /* print the raw bits of the float as hex */
    printf("0x%08lx\n", *(unsigned long *)&fValue);
}
*(unsigned long *)&fValue is not equivalent to a direct cast of the value, i.e. (unsigned long)fValue.
The cast (unsigned long)fValue converts the value of fValue to an unsigned long, using the normal rules for converting a float value to an unsigned long value (the fractional part is discarded). The representation of that value in an unsigned long (for example, in terms of the bits) can be quite different from how the same value is represented in a float.
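A quick sketch of that value conversion (the variable names are mine, not from the book's example):

#include <stdio.h>

int main(void)
{
    float fValue = 1.5f;

    /* Value conversion: the normal float-to-integer rules apply,
       so 1.5 is truncated towards zero and becomes 1. */
    unsigned long asValue = (unsigned long)fValue;

    printf("(unsigned long)fValue = %lu\n", asValue);   /* prints 1 */
    return 0;
}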
The expression *(unsigned long *)&fValue, on the other hand, formally has undefined behaviour: it accesses a float object through an lvalue of an incompatible type (breaking the strict aliasing rule), and unsigned long may not even have the same size or alignment as float. It interprets the memory occupied by fValue as if it were an unsigned long. In practice (i.e. this is what often happens, even though the behaviour is undefined) it yields the float's raw bit pattern, which as a number is usually quite different from the value of fValue.
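For comparison, here is a sketch of a well-defined way to look at the same bit pattern, using memcpy instead of the pointer cast (this is my alternative, not the code from the example; the expected output in the comment assumes a 32-bit IEEE-754 float):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float fValue = 1.5f;
    uint32_t bits;

    _Static_assert(sizeof(float) == sizeof(uint32_t), "float must be 32 bits");

    /* Copy the float's bytes into a same-sized integer.
       Unlike *(unsigned long *)&fValue, this does not violate
       the aliasing rules, so the behaviour is well defined. */
    memcpy(&bits, &fValue, sizeof bits);

    printf("value conversion:     %lu\n", (unsigned long)fValue);    /* 1 */
    printf("bit reinterpretation: 0x%08lx\n", (unsigned long)bits);  /* 0x3fc00000 on IEEE-754 */
    return 0;
}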