I have read quite a bit of material on the Internet where different authors suggest using output buffering. The funny thing is that most authors argue for its use without ever giving the real reason.
Ok, here is the real reason: the output is not started until everything else is done. Imagine an app that opens an SQL connection and doesn't close it before starting the output. What happens is your script grabs a connection, starts outputting, waits for the client to receive everything it needs and only then, at the end, closes the connection. Woot, a 2s connection where a 0.3s one would have been enough.
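Here is a minimal sketch of that anti-pattern (the PDO DSN, credentials and `posts` table are placeholders, not anything from the original post):

```php
<?php
// No buffering: output trickles out while the connection stays open.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

foreach ($db->query('SELECT title FROM posts') as $row) {
    // Each echo may block on a slow client, and $db is still alive.
    echo '<p>' . htmlspecialchars($row['title']) . "</p>\n";
}

$db = null; // connection only released once the client has everything
```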
Now, if you buffer, your script connects, puts everything into a buffer, disconnects as soon as the work is done, and only then starts sending the generated content to the client.
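The buffered version of the same sketch, using `ob_start()` / `ob_end_flush()` (again, the connection details and table are illustrative):

```php
<?php
// With buffering: build the whole page first, release the connection,
// then send the finished buffer to the client.
ob_start();

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

foreach ($db->query('SELECT title FROM posts') as $row) {
    echo '<p>' . htmlspecialchars($row['title']) . "</p>\n"; // goes into the buffer
}

$db = null;      // connection closed after the ~0.3s of real work

ob_end_flush();  // only now does the (possibly slow) transfer to the client start
```

The key point is that the connection's lifetime is tied to the script's actual work, not to the client's download speed.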