In traditional embedded programming, you would write a delay function like so:
for (i = 0; i < 255; i++)
    for (j = 0; j < 255; j++)
        ;
If you're using for-loops, you'd better know what they compile to, how long those instructions take at your given clock speed, and that the CPU runs your instructions and nothing else (this can be done in embedded systems, but it's tricky since it means disallowing interrupts).
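As a sketch of one compile-time pitfall (the function name delay_naive is mine, not from the original): an empty loop has no side effects, so an optimizing compiler is free to delete it entirely unless the counters are declared volatile.

```c
/* Sketch of the naive delay loop above. Without volatile, an optimizing
   compiler sees a loop with no observable side effects and may remove it,
   turning the "delay" into nothing at all. */
void delay_naive(void) {
    volatile unsigned int i, j;       /* volatile keeps the loop in the binary */
    for (i = 0; i < 255; i++)
        for (j = 0; j < 255; j++)
            ;                         /* burn cycles; duration still depends on clock speed */
}
```

Even with volatile, the actual delay still varies with the CPU's clock and instruction timing, which is exactly the problem described below.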
Otherwise, you won't be able to tell how long it's really going to take.
Early PC games had this problem: they were built for the 4.77 MHz IBM PC and, when faster computers came along, they were unplayable.
The best way a 'sleep' can work is for the CPU to know what time it is at any given point. Not necessarily the actual time (7:15 am) but at least the relative time (8612 seconds since some point in time).
That way it can apply a delta to the current time and wait in a loop until the current+delta is reached.
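A minimal host-side sketch of that current+delta idea, using the C standard library's time() as the "current time" source (sleep_seconds is a hypothetical name; a real embedded build would read a hardware counter instead):

```c
#include <time.h>

/* Busy-wait until the clock advances past now + delay seconds.
   On an embedded system you'd poll a timer register instead, and
   ideally halt the CPU between timer interrupts rather than spin. */
void sleep_seconds(unsigned int delay) {
    time_t target = time(NULL) + delay;   /* current + delta */
    while (time(NULL) < target)
        ;                                 /* wait until the target time is reached */
}
```

Because the wait is expressed against the clock rather than cycle counts, it takes the same wall-clock time regardless of how fast the CPU is or what else it runs in between.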
Anything that relies on the number of CPU cycles is inherently unreliable, since the CPU may go off to another task and leave your loop hanging.
Let's say you have a memory-mapped 16-bit I/O port which the CPU increments once a second. Let's also assume it's at memory location 0x33 in your embedded system, where ints are also 16 bits. A function called sleep then becomes something like:
void sleep(unsigned int delay) {
    unsigned int target = peek(0x33) + delay;   /* unsigned arithmetic wraps safely at rollover */
    while (peek(0x33) != target)
        ;                                       /* busy-wait until the counter hits the target */
}
You'll have to ensure that peek() actually reads memory on every call (so optimizing compilers don't muck up the logic), and that your while loop iterates more than once per second so you don't miss the target, but these are operational issues that don't affect the concept I'm presenting.