Is time() guaranteed to be leap-second aware?

旧巷少年郎 · 2020-12-19 05:42

The PHP manual states that time() returns "the current UNIX timestamp" and microtime() returns the "current Unix timestamp with microseconds". Is either of them guaranteed to be leap-second aware?

3 Answers
  清酒与你 · 2020-12-19 06:00

    The UNIX timestamp, as you get it from time() in PHP, is a peculiar beast.

    It took me a long time to understand this, but what happens is that the timestamp increases during the leap second, and when the leap second is over, the timestamp (but not UTC!) jumps back one second.

    This has some important implications:

    • You can always accurately convert from UTC to a Unix timestamp
    • You cannot always (for all points in time) convert from a timestamp back to UTC; see the sketch after this list
    • The Unix timestamp is not monotonic: it steps back. (Or, viewed differently, it holds the same value for two seconds instead of incrementing.)
    • When a timestamp-to-UTC conversion is wrong, it is off by at most two seconds. (That can still put you on the wrong day, though.)
    • In retrospect, the Unix timestamp appears linear. So if you are storing time-series data and can tolerate a second or two per year being misrepresented, you can treat stored timestamped data as contiguous.
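
    As a minimal sketch of the lossy direction (gmmktime() and gmdate() are standard PHP; the leap second inserted at the end of 2016-12-31 is just a convenient example):

        <?php
        // POSIX timestamps, as used by gmmktime()/gmdate(), have no
        // representation for the leap second 23:59:60 described above.

        $t1 = gmmktime(23, 59, 59, 12, 31, 2016); // 2016-12-31 23:59:59 UTC
        $t2 = gmmktime(0, 0, 0, 1, 1, 2017);      // 2017-01-01 00:00:00 UTC

        echo $t2 - $t1, "\n"; // prints 1, although two real (SI) seconds
                              // elapsed: 23:59:60 has no timestamp of its own

        // The reverse conversion is therefore ambiguous around a leap second:
        // the instant 23:59:60 shares its timestamp with 00:00:00.
        echo gmdate('Y-m-d H:i:s', $t1), "\n"; // 2016-12-31 23:59:59
        echo gmdate('Y-m-d H:i:s', $t2), "\n"; // 2017-01-01 00:00:00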

    Workarounds

    What if the Unix timestamp neither jumped back abruptly nor stood still for two seconds? This is exactly what Google arranged for its servers with the "leap smear": the clock is skewed by mere milliseconds at a time over a long window, making the leap-second shift virtually invisible to applications. (As far as the applications and operating systems on Google's servers are concerned, the leap seconds inserted by the IERS never happen.)
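
    A rough sketch of such a smear (the 24-hour window and linear ramp follow Google's published description; the function name and the example values are mine):

        <?php
        // Instead of repeating a second, each second in the smear window is
        // stretched slightly, so the clock absorbs the leap second gradually
        // and never steps back.

        function smeared_offset(int $unixTime, int $leapUnixTime): float
        {
            $window = 86400;                       // smear over 24 hours,
            $start  = $leapUnixTime - $window / 2; // centred on the leap second
            if ($unixTime <= $start) {
                return 0.0;                        // before the smear: no offset
            }
            if ($unixTime >= $start + $window) {
                return 1.0;                        // after: the full second is absorbed
            }
            return ($unixTime - $start) / $window; // linear ramp from 0.0 to 1.0
        }

        // Leap second inserted at the end of 2016-12-31:
        $leap = gmmktime(0, 0, 0, 1, 1, 2017);
        printf("%.4f\n", smeared_offset($leap - 43200, $leap)); // 0.0000 (smear starts)
        printf("%.4f\n", smeared_offset($leap, $leap));         // 0.5000 (midpoint)
        printf("%.4f\n", smeared_offset($leap + 43200, $leap)); // 1.0000 (smear ends)

    Note that this happens below PHP entirely: on a host whose clock is smeared, time() simply reports the smeared system time.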
