Why are dates calculated from January 1st, 1970? [duplicate]
using date(January 1st, 1970) as default standard
The Question makes two false assumptions:
- All time-tracking in computing is done as a count-since-1970.
- Such tracking is standard.
Two Dozen Epochs
Time in computing is not always tracked from the beginning of 1970 UTC. While that epoch reference is popular, various computing environments over the decades have used nearly two dozen epochs. Some are from other centuries. They range from year 0 (zero) to 2001.
Here are a few:

- January 0, 1 BC
- January 1, AD 1
- October 15, 1582
- January 1, 1601
- December 31, 1840
- November 17, 1858
- December 30, 1899
- December 31, 1899
- January 1, 1900
- January 1, 1904
- December 31, 1967
- January 1, 1980
- January 6, 1980
- January 1, 2000
- January 1, 2001
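Any two of these epochs differ only by a fixed offset. As a minimal sketch (using Python's standard datetime module, not any API from the thread), here is the constant separating the Unix epoch from the Cocoa/NSDate epoch listed above:

```python
from datetime import datetime, timezone

# Both epochs at midnight UTC.
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
cocoa_epoch = datetime(2001, 1, 1, tzinfo=timezone.utc)

# Fixed offset in seconds between the two epochs:
# 31 years of 365 days plus 8 leap days, times 86,400 seconds.
offset = (cocoa_epoch - unix_epoch).total_seconds()

print(offset)  # 978307200.0
```

Converting a count between epochs is just adding or subtracting that constant.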
Unix Epoch Common, But Not Dominant
The beginning of 1970 is popular, probably because of its use by Unix. But by no means is that dominant. For example:
- Countless millions (billions?) of Microsoft Excel & Lotus 1-2-3 documents use January 0, 1900 (December 31, 1899).
- The world now has over a billion iOS/OS X devices using the Cocoa (NSDate) epoch of January 1, 2001, GMT.
- The GPS satellite navigation system uses January 6, 1980, while the European alternative Galileo uses August 22, 1999.
ISO 8601
Assuming that any given count-since-epoch uses the Unix epoch opens a big vulnerability for bugs. Such a count is impossible for a human to instantly decipher, so errors won't be easily flagged when debugging and logging. Another problem is the ambiguity of granularity, explained below.
I strongly suggest instead serializing date-time values as unambiguous ISO 8601 strings for data interchange rather than an integer count-since-epoch: YYYY-MM-DDTHH:MM:SS.SSSZ, such as 2014-10-14T16:32:41.018Z.
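As a minimal sketch of that advice (Python's standard datetime module, assumed here for illustration), serializing to that ISO 8601 pattern and back:

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# ISO 8601 with millisecond precision and a trailing 'Z' for UTC,
# matching the YYYY-MM-DDTHH:MM:SS.SSSZ pattern above.
iso = now.isoformat(timespec='milliseconds').replace('+00:00', 'Z')
print(iso)  # e.g. 2014-10-14T16:32:41.018Z

# Round-trip: the string parses back unambiguously, zone included.
# (The 'Z'-to-offset replace keeps this working on Python < 3.11.)
parsed = datetime.fromisoformat(iso.replace('Z', '+00:00'))
```

Unlike a bare integer, a maintainer reading a log line can verify such a value at a glance.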
Count Of What Since Epoch
Another issue with count-since-epoch time tracking is the time unit, with at least four levels of resolution commonly used.
- Seconds: The original Unix facilities used whole seconds, leading to the Year 2038 Problem when we reach the limit of seconds since 1970 if stored as a 32-bit integer.
- Milliseconds: Used by older Java libraries, including the bundled java.util.Date class and the Joda-Time library.
- Microseconds: Used by databases such as Postgres.
- Nanoseconds: Used by the new java.time package in Java 8.
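To make the ambiguity concrete, here is one instant expressed at all four resolutions, as a sketch in Python (the sample instant is arbitrary):

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
instant = datetime(2014, 10, 14, 16, 32, 41, 18000, tzinfo=timezone.utc)

# Exact integer count of microseconds since the Unix epoch.
micros = (instant - epoch) // timedelta(microseconds=1)

print(micros // 1_000_000)  # seconds      (original Unix)
print(micros // 1_000)      # milliseconds (java.util.Date, Joda-Time)
print(micros)               # microseconds (Postgres)
print(micros * 1_000)       # nanoseconds  (java.time)
```

All four printed numbers name the same moment; nothing in a bare integer tells you which unit was intended.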
It is the standard of Unix time.
Unix time, or POSIX time, is a system for describing points in time, defined as the number of seconds elapsed since midnight UTC on January 1, 1970, not counting leap seconds.
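That definition is directly observable in Python's standard library; both of these read the same leap-second-free count:

```python
import time
from datetime import datetime, timezone

# time.time() counts seconds since 1970-01-01T00:00:00Z (leap
# seconds excluded), the same count datetime yields via timestamp().
from_time = time.time()
from_datetime = datetime.now(timezone.utc).timestamp()

print(from_time, from_datetime)  # two readings of the same clock
```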
Is there any reason behind using date(January 1st, 1970) as standard for time manipulation?
No reason that matters.
Python's time module wraps the underlying C library. Ask Ken Thompson why he chose that date as the epoch. Maybe it was someone's birthday.
Excel uses two different epochs. Any reason why different versions of Excel use different dates?
Except for the actual programmer, no one else will ever know why those kinds of decisions were made.
And...
It does not matter why the date was chosen. It just was.
Astronomers use their own epochal date: http://en.wikipedia.org/wiki/Epoch_(astronomy)
Why? A date has to be chosen to make the math work out. Any random date will work.
A date far in the past avoids negative numbers for the general case.
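You can see the negative-number problem directly: any instant before the chosen epoch comes out negative. A small sketch using a pre-1970 date (the example date is arbitrary):

```python
from datetime import datetime, timezone

# An instant a few months before the Unix epoch.
moon_landing = datetime(1969, 7, 20, 20, 17, tzinfo=timezone.utc)

# Pre-epoch instants yield negative counts.
print(moon_landing.timestamp())
```

Pushing the epoch far into the past keeps everyday timestamps positive, which simplifies storage and comparison.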
Some of the smarter packages use the proleptic Gregorian year 1. Any reason why year 1?
There's a reason given in books like Calendrical Calculations: it's mathematically slightly simpler.
But if you think about it, the difference between 1/1/1 and 1/1/1970 is just 1969, a trivial mathematical offset.
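Python itself illustrates this: its date.toordinal() counts days from proleptic Gregorian January 1 of year 1, and the gap to the Unix epoch is just a constant.

```python
from datetime import date

# date.toordinal() counts days from 0001-01-01 (ordinal 1),
# the "year 1" epoch mentioned above.
offset_days = date(1970, 1, 1).toordinal() - date(1, 1, 1).toordinal()

print(offset_days)  # 719162 — a fixed constant
```

Translating between the two epochs is a single subtraction, which is why the choice is mathematically trivial.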