Inertia is a powerful force. The computing world retains a lot of practices that are odd little dysfunctional relics of past stages of its technology. The one I’m here to talk about today looks like this:
Mar 6 15:11:07 snark postfix/qmgr[3927]: 0422513A6C53: removed
That’s a log message hot’n’fresh from my /var/log/mail.log file. It’s entirely typical of traditional log formats on Unix systems, and these things offend the bejeezus out of me every time I see them. Now let me show you how this would look in a sane universe:
2018-03-06T15:11:07Z snark.thyrsus.com postfix/qmgr[3927]: 0422513A6C53: removed
Logging events in local time (and with only local hostnames, but that’s not the subject of today’s rant) is a dysfunctional remnant of the time before wide-area networks. It means that log timestamps aren’t directly comparable across hosts in different time zones. This is death on diagnostics for a large class of network-transaction bugs.
Actually it can mean a lot worse than that, even locally. An A&D regular who wishes to remain nameless recently told me of once having to help troubleshoot a medical-records system at a major hospital. It was unusable – they had to plan around this and retreat to a paper backup system – for two hours a year. Those two hours were just adjacent to daylight-saving-time changes. Yes, that’s right, stamp collisions due to logging in local time crashed their multi-megabuck investment.
Another place logging and displaying in local time is a bad mistake is in distributed version-control systems. I’ve never seen a case in which it was not more important to know the relative time of a sequence of commits than to know the, er, “absolute” local time of any of them. And, of course, committers may be scattered across multiple timezones. Thus, the way to reduce cognitive friction on people browsing the history is to refer (display, not just store) all commit timestamps to a common timebase.
Yes, git does get this wrong. Git timestamps are stored in UTC but displayed in the committer’s local time in git log and elsewhere. To accomplish this git has to keep a local time zone offset with each date, a pointless “feature” that often causes me chronic problems too tedious to explain when I do repository conversions.
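To make that concrete, here’s a minimal Python sketch of the pair git actually records per commit — an epoch-seconds count plus the committer’s offset, visible with `git log --date=raw`. The stamp below is invented for illustration:

# What git stores per commit: epoch seconds plus the committer's
# recorded offset (see `git log --date=raw`). The stamp is made up.
from datetime import datetime, timedelta, timezone

raw = "1520350267 -0500"          # hypothetical raw committer date
secs, off = raw.split()

# The epoch count alone pins the instant; the offset only affects display.
utc = datetime.fromtimestamp(int(secs), tz=timezone.utc)
print(utc.strftime("%Y-%m-%dT%H:%M:%SZ"))        # 2018-03-06T15:31:07Z

# Reapplying the stored offset reproduces the committer-local view.
delta = timedelta(hours=int(off[:3]), minutes=int(off[0] + off[3:]))
print((utc + delta).strftime("%Y-%m-%d %H:%M:%S ") + off)  # 2018-03-06 10:31:07 -0500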
You know who got this right first? Military and civil aviation. Long before Internet traffic routinely crossed timezones, airplanes did. Requiring pilots and ground controllers to constantly track everybody’s timezones and do conversions on the fly would be confusing and dangerous, so…Zulu time for everybody. Loss-of-life risk is lower where we play (except maybe at that hospital?) but the underlying logic for ditching local time is the same.
EDIT: It has been pointed out to me that radio and telegraph operators faced similar situational stresses as far back as the mid-1800s. It’s not clear, however, how soon GMT became on-air standard time after it was formalized in 1847.
So next time you have to choose a time stamp format, cut the crap and go straight to RFC 3339 (with the T in the middle, thank you). It has many advantages: it’s unambiguous, compact, and constant-length; it compares correctly no matter where it came from; it sorts lexicographically in the same order as its time order; and it parses out of text as a single token that is easy to distinguish from anything but another RFC 3339 timestamp (which is why you want to leave the T in the middle).
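And it costs you almost nothing: in most languages it’s a one-liner. A minimal Python sketch:

# RFC 3339 in UTC: the T in the middle, the Z on the end, fixed length.
from datetime import datetime, timezone

def rfc3339_now():
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Lexicographic order is chronological order:
assert "2018-03-06T15:11:07Z" < "2018-03-06T15:11:08Z" < "2018-03-07T00:00:00Z"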
(But in case you’re tempted to think about Zulu for all purposes…bad idea. See also In defense of calendrical irregularity.)
And now, to conclude this public-service announcement, a filk I composed for the occasion. Take the tune from this and superimpose these lyrics:
Baby you'll come knockin' on my firewall
Just as I'm dealin' with a system stall
I said yeah, well, what'm I supposed to do?
I don't need no cracker gettin' in too.

NANOG says they have some trouble in town
Now you're shutting some daemons down

Stop logging in... Stop logging in...
Stop logging in local time.

It's hard to know where the intruders came from
It's hard to know just what we've lost
This doesn't have to be the big net meltdown
This doesn't have to be anything at all.

I know you really want to tell me good-bye
I know you really want to run your own show
Baby you could never look me in the eye
Yeah you buckle with the weight of the words

Stop logging in... Stop logging in...
Stop logging in local time.

There's people running 'round loose in the world
Ain't got nothing better to do
Than make a meal of some P.F.Y.
You need someone looking after you

I know you really want to tell me good-bye
I know you really want to run your own show
Baby you could never look me in the eye
Yeah you buckle with the weight of the words

Stop logging in... Stop logging in...
Stop logging in local time.

Stop logging in local time!
Well, RFC 3339 is pretty good for the next 7981 years — after that it will get dicey.
>Well, RFC 3339 is pretty good for the next 7981 years — after that it will get dicey.
Nah. By that time everything, even datestamps, will be garbage-collected strings.
I’m not sure I’m joking…
There’s already a proposal to solve the Y10K problem:
If the first character in a date is a digit, it’s a YYYY year format.
If the first character is alphabetic, it’s:

A YYYYY
B YYYYYY
. . .
V YYYYYYYYYYYYYYYYYYYYYYYYYY
W YYYYYYYYYYYYYYYYYYYYYYYYYYY

X is reserved for extending this to two or more alphas, while Y and Z are not used to avoid confusion with year digits and the UTC/Zulu timezone. Some have suggested leaving I and O out to avoid human confusion (machines are fine with them, of course). As we have a while to work out the details before we have to implement the successor to G YYYYYYYYYYY, people can feel free to implement the first 7 generations of the protocol before Y100G becomes an issue.

Note that A00000 collates after 9999, and B collates after A99999, etc., preserving one of the most useful features of the YMD order.
Is that serious? Add just a single digit for each successive letter? Surely the more reasonable algorithm would be to double the number of digits with each reallocation.
Let’s play your proposal out. Let’s say that, due to the reserved meanings of D, M, T, W, Y, Z and the potential for confusion from I and O that we eliminate those eight letters from use, as well as reserving X for future expansion. That gives us 17 letters from A to U:
A=2^3 YYYYYYYY
B=2^4
C=2^5
E=2^6
…
U=2^19 Y digits, which should immediately be recognized as over half a million digits. It’s estimated that the expansion of the universe will lead to the end of star formation when the years are measured in 14 digits, with stars extinguished in the same number of digits, so you’re adding a lot of extra leading 0s to your years for no good reason.
Under the original proposal (one digit per letter), with the same exclusions, we have
A=5 Ys
B=6
C=7
E=8
…
L=14
N=15
Perhaps if we’ve figured out where to get energy when there are no more stars generating it, we can modify the original proposal to start adding three year digits at a time after this
P=18
Q=21
R=24
S=27
U=30
XA=???
Eric, I’m sure you’ve read an old novel by Vernor Vinge called A Fire Upon the Deep, where the space trading nation Qeng Ho tells time in seconds. 1 kilosecond is about 20 minutes, 1 megasecond is about 2 weeks, etc. And they sort of reckon their time units back to the Unix epoch, several thousands of years later.
>And they sort of reckon their time units back to the Unix epoch, several thousands of years later.
Yes, I thought that was hilarious when I first read it.
Then I thought about it and understood why it’s pretty plausible, actually.
And talking of Unix Epoch, I think logs should simply store their timestamps as that (i.e. seconds from 00:00:00 GMT, Thursday, 1 January 1970). In fact when I had to generate some standardized JSON for log data we receive as part of $dayjob from firewalls all over the world, that was exactly my solution. You don’t want to know the mucking around we had to do to figure out the offset from UTC for the dates where they omit the TZ offset, the year, or 10001 other oddities (parsing a log that overlaps the New Year and doesn’t include the year as part of the timestamp is particularly “fun”).
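For a flavor of the guesswork involved, here’s a rough Python sketch: with the classic syslog stamp, both the year and the UTC offset have to be supplied from outside, and a wrong year guess shows up as a timestamp from the future. The offset handling is deliberately simplistic (no DST logic), and the stamp and offset are made up:

# Guesswork a classic "Mar  6 15:11:07" stamp forces on the parser:
# the year and the UTC offset are not in the data and must be assumed.
import calendar
import time

def legacy_to_epoch(stamp, year, utc_offset_secs):
    # Treat the fields as UTC, then correct by the assumed local offset.
    t = time.strptime("%d %s" % (year, stamp), "%Y %b %d %H:%M:%S")
    return calendar.timegm(t) - utc_offset_secs

now = time.time()
guess = legacy_to_epoch("Mar  6 15:11:07", 2018, -5 * 3600)   # assume EST
if guess > now + 86400:   # stamp "in the future": the year guess was wrong
    guess = legacy_to_epoch("Mar  6 15:11:07", 2017, -5 * 3600)  # New Year "fun"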
In fact these days, whenever I create a file or update a zone or do something else that is periodic and less frequent than once per second, I typically use the epoch-time integer as the filename/version number. The result is that the files automatically sort in creation-date order, and you can see clearly if someone mucked around with one when it appears out of order when actually date-sorted. As a result, while I don’t exactly think in kiloseconds or megaseconds, I do know the current epoch time approximately and find it relatively simple to work out how old something is by checking its epoch time against what I know is approximately the current one.
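A sketch of that naming trick (the filename is hypothetical):

# Epoch-stamped names: plain string sort equals creation-date sort, at
# least while epoch times are all 10 digits (true until the year 2286).
import time

fname = "zonefile.%d" % int(time.time())   # e.g. zonefile.1520350267
# sorted(names) now yields chronological order with no date parsing.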
>And talking of Unix Epoch, I think logs should simply store their timestamps as that (i.e. seconds from 00:00:00 GMT, Thursday, 1 January 1970).
No, you should always log in solar (Zulu/UTC) time. You can’t *get* seconds from epoch without going through that representation anyway; the Unix time counter stutters and could theoretically skip, though this has never happened and is unlikely.
I think we’re in agreement here
>I think we’re in agreement here
You wanted a monotonic seconds counter from the epoch. If you think that agrees with what I said, you have not grasped the problem. Really, you haven’t.
This is a common mistake. I think I need to upgrade a FAQ so I can point at a well worked explanation.
You mean TAI.
What am I missing? I’ve always used Epoch as the number of seconds since 00:00:00 GMT, Thursday, 1 January 1970 and thus it is – effectively – UTC. There are no timezone issues at all.
Why would you store dates in a complex format like YYYY-MM-DDTHH:MM:SSZ instead of an integer? A lot more processing has to be done to create that log line, which can be an issue if you’re actually logging thousands of hits a second (when anyway you need the fraction of a second).
[One of the differences between the Bind DNS server and unbound is that the latter has relatively poor logging of queries. I strongly suspect that this is related to its generally higher performance in terms of queries/sec that it can support on the same hardware.]
>What am I missing? I’ve always used Epoch as the number of seconds since 00:00:00 GMT, Thursday, 1 January 1970 and thus it is – effectively – UTC. There are no timezone issues at all.
There are no timezone issues at all. There are serious leap-second issues, though, if you ever want to relate your stamps to solar time.
I want “Programmer Archaeologist” as an official job title.
I always thought there was an obvious “Library of Babel” problem with the code Archaeology in that setting:
There is so much junk piled up that the time it takes to search it for what you want would exceed the amount of time it would take to write it yourself. This problem is bad now, it would get incredible with thousands of years of piled up code.
Never reinvent the wheel is terrible programming philosophy.
(Only the Sith speak in absolutes. Now which button on this red lightsaber makes it go? :-P )
Anyway, I can imagine lovecraftian horrors of patched together junk by futuristic java-jockeys that only half bother to understand what they’re doing, or even trying to do. Patterns and ruts worn clean to the center of the earth for common tasks – but try to do anything even slightly uncommon, and you’d have to either search the infinite library or write it yourself.
Just got done with a bit of “code archaeology” myself: Namespace deconfliction for cephes.
This problem isn’t sensitive to the complexity of what you are trying to do: If it isn’t very complex, it isn’t much effort to write it yourself. If it is very complex, the odds of finding something that does exactly what you want go down combinatorially with the complexity.
From reading the books, the job wasn’t to find a program to DO the job, it was to figure out what was going on in the stygian depths of the control systems–systems that were thousands of years old at that point.
As to that:
Dude, have you paid attention to how Open Source *works*?
It’s already a “lovecraftian horror of patched together junk”. J. Random Tool requires these three libraries, each requiring two others, and usually expecting specific versions.
Back in the mists of time I used to maintain 3 cPanel servers as penance for some relatively major sin I clearly committed in a previous life, and I just gave up trying to keep the Perl modules in sync. Three runs at CPAN within seconds of each other would pull down THREE different versions from different locations. It was crazy-making.
When you’re talking about control systems for VITAL stuff you often *cannot* just grab the latest language and re-write it from scratch.
There’s a government program out there that was built using VMS as the OS that the control software ran on. Last time I was allowed to know anything about that program they’d had a multi-million dollar rewrite of the software to run on Linux, only it didn’t work. So they were paying HP a LOT of money to keep maintaining VMS.
Working code–especially for stuff like that–embodies a LOT of knowledge about the nature of the hardware and universe that might not match what the manual says. If you’re really, really reliant on that stuff to run your environmental controls (say you’re working in a deep space craft using solar sails spending a century or two between “ports”) you DO NOT re-write from the ground up very often. If ever.
Now, if you’re selling socks on the internet or building a social media messaging system, go right ahead.
>Random Tool requires these three libraries, each requiring two others, and usually expecting specific versions.
To be fair, I see much less of “expecting specific versions” than I used to – and I usually see it on corporate code written by people who are only half-acculturated. No longer common, and diminishing.
>I just gave up trying to keep the perl modules in sync
That’s yer problem. Perl was deservedly notorious for this.
The last few years I’ve spent a LOT of time inside well-firewalled/controlled environments, and while I’m not seeing specific versions as much any more, there’s still a TON of cross-dependencies that make it hard to add a non-standard package. Building from source can be even more of a problem when you have to deploy across 500 servers.
But the point is still there, and it’s not really even a “open source” issue, it’s just the nature of a complex system that is constantly changing. Some parts will move faster than others, and some parts will just stop moving as interest waxes and wanes.
We’re past the point now where a new OS can get wide adoption without a tonne of “normal” software being ported to it, and when those ports happen the “shortest path” will be to port along a bunch of the underlying libraries, so the cruft will migrate–at least some of it will.
This was basically what VV was getting at when he wrote the bits about the Programmer-Archaeologist.
I think nowadays most stacks include a mechanism to freeze dependencies at a given working version.
Yes, even Perl – with carton.
> To be fair, I see much less of “expecting specific versions” than I used to
Could Semantic Versioning (or something close to it) catching on be the reason for this? I can have a dependency on libfoo 2.3 or better and trust that the maintainers of libfoo won’t release a 2.8 that breaks functionality (because that has to be libfoo 3.0; libfoo.so.2 and libfoo.so.3 can live side by side with no problems).
> J. Random Tool requires these three libraries, each requiring two others, and usually expecting specific versions.
I’ve seen this in python ecosystems when writing my own tools — although usually with company-internal libraries rather than OSS. It drives me up the wall. What takes it from tragedy to farce is that sometimes the developers themselves are wrong about what their code depends on. It’s like they run `pip freeze` and just say okay, all my users are going to use what I’m using, because I know it works and I can’t be bothered to figure out what my *actual* deps are. The worst cases are those where they lock a version for a library that is also a dependency of one of their *other* dependencies, and then later they get out of sync and self-conflicting.
I blame devs not taking packaging seriously, and for responding to “my upstream dependencies might make a breaking change in a non-major version” with “clearly the solution is to never change anything.”
And then they say “just install in a virtualenv from requirements.txt” (it’s always requirements.txt; I have never seen this problem in any library with a proper setup.py), and if I really need the library I fix their packaging and deplist for them, and if I don’t then I go hunting for an Apocalypse-class LART.
(not really. But I wish. I try to stick to the standard library just to avoid this shit.)
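For what it’s worth, the semver-friendly alternative to a frozen pip-freeze pin looks something like this hypothetical setup.py fragment (“libfoo” and “mytool” are made-up names):

# A version *range* instead of a frozen pin: any 2.x from 2.3 on is
# acceptable; 3.0, which semver allows to break things, is excluded.
from setuptools import setup

setup(
    name="mytool",
    version="1.0",
    install_requires=[
        "libfoo>=2.3,<3",   # hypothetical dependency: range, not pin
    ],
)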
Time in what reference frame? Does the captain of a relativistic starship go by mission time, or time on some fixed planet? You’re either going to have drift and conflict, or you’re going to be tied to a fixed reference standard. Muahahaha.
In “A Fire Upon the Deep” there was a time hack sent from a “central” location, which propagated at the speed of radio waves. So there was a “universal” standard to latch on to.
We should probably start sending that signal now, so in 400 years when we manage to get off this mud-ball and get spread out among the stars there’s a pre-existing, useful “standard” time.
WWV
EXPN?
(can be “EXPlaiN” or “EXPaNd”, your choice)
See https://en.wikipedia.org/wiki/WWV_(radio_station)
\/\/\/\/\/ <== Charlie Brown's shirt.
This local time frustration is an old and annoying companion to me too… Sigh…
Nitpickers corner: there’s a typo in ‘Tom accomplish this git has to keep a local time one offset with each date’…
I feel your pain.
Real-time data logging on an AMS2750E system; the customer insists on turning on daylight-saving time although we urge them not to.
Then they b***h they have a 23 hour and 25 hour day in their totalizations.
:doublefacepalm:
ObNit: “Daylight[-]Saving” does not end in an “s”. There is no “savings account” into which we deposit daylight, earn some paltry interest, and later withdraw it. We get up an hour earlier in the morning so that we get to save the daylight in the evening.
As part of my job, I support a popular job-control system that has an interesting way of dealing with “Spring Forward”. The entire 0200-0259 hour is compressed into the first minute of 0300.
I make it a point to email my users a reminder of this every year, a few days before the event, so that they won’t be surprised when something doesn’t work correctly.
I see this problem a lot in firmware as well, in logging and non-logging situations. Worse yet, often the date stamp comes with another variable alongside to indicate whether the date format is MMDDYYYY or DDMMYYYY. All of this nonsense disappears when using ISO 8601.
I just had a mental flash from combining your filk with Weird Al’s take on that same track, and wondered how a ’64 Plymouth does syslogs…
>I just had a mental flash from combining your filk with Weird Al’s take on that same track, and wondered how a ’64 Plymouth does syslogs…
YouTube URL?
https://www.youtube.com/watch?v=qe7-9iaZss4
The syslog RFC does, in fact, prescribe RFC 3339 timestamps and the FQDN for the hostname as the preferred syntax.
Yeah, and there is no easy way to actually *produce* the RFC syslog format from an application. You call syslog(3), you get the legacy format. There is a library called liblogging-stdlog by the guy who *wrote* the syslog RFC, and it, too, produces legacy format.
Looks like RFC 5424 (which I was referring to) is only a “proposed” standard, and RFC 3164–which prescribes the ugly non-ISO local timestamps–is still in effect.
Not sure what it would take to get 5424 approved, and since it’s almost 10 years old, there’s probably a reason it hasn’t been approved yet.
I think that
“reduce cognitive fiction” should be “reduce cognitive friction”
“time one offset” should be “time zone offset”
but I’m not sure on the latter.
And in business environments, local timestamps can more quickly give you important information – did this occur during working hours, was it during the morning rush when everyone tries to access the service at once, etc.
>And in business environments, local timestamps can more quickly give you important information – did this occur during working hours, was it during the morning rush when everyone tries to access the service at once, etc.
There’s some point to this, I admit.
I still think the tradeoffs favor Zulu time, though. Zulu time to current local is easy. Historical local time to current Zulu or current local is hard.
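One concrete reason the historical direction is hard: around a fall-back transition, the same local reading names two different instants. A sketch using Python 3.9+’s zoneinfo (dates chosen from the 2018 US transition):

# Fall-back ambiguity: 01:30 happens twice on 2018-11-04 in US Eastern,
# so a historical local stamp maps to two possible UTC instants.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # Python 3.9+

NY = ZoneInfo("America/New_York")
first = datetime(2018, 11, 4, 1, 30, tzinfo=NY)            # fold=0: still EDT
second = datetime(2018, 11, 4, 1, 30, fold=1, tzinfo=NY)   # fold=1: now EST

print(first.astimezone(timezone.utc))    # 2018-11-04 05:30:00+00:00
print(second.astimezone(timezone.utc))   # 2018-11-04 06:30:00+00:00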
I can think in UTC with about as much slowdown as level 3 ILR fluency, but I’m not any sort of representative data point. I wonder what difficulty this poses in the general case. Perhaps aviation has an answer?
Try doing this across three timezones at once.
Back in the day, before I became an “Elderly Gentleman of Leisure” I worked in broadcast television, latterly for a Scandinavian company, transmitting to Scandinavia from the UK. Don’t Ask!!!! It was rare, but not unknown to have to work in 5 timezones simultaneously – PST, EST, GMT, UK (civil time, possibly including Summer Time) and CET.
Case in point: a programme originated in Los Angeles (PST), sent via satellite (GMT) to New York (EST), then by satellite (GMT again) to London (UK civil) where I got my mitts on it, and then to Scandinavia (CET). This was fun – for microscopic values of fun.
And for extra fun, add in DST date differences (which makes these two rants on topic).
For a while I lived in the Northern Territory of Australia.
I had to coordinate with people in a part of Europe, a timezone in the US, and family in Central Time. Europe switched to DST at a different time than the US.
The NT doesn’t do DST (except Telstra couldn’t figure that out, so the cell tower times would switch, and then switch back a few weeks later) AND it’s 1/2 hour out for some reason.
My mother never could figure out the timezone differences. Got a lot of early morning phone calls.
I also once got an interview call from Amsterdam at 2:30 AM. (I was living in MST then).
I started using YYYY-MM-DD for my personal work some 3 decades ago. People still look at me like I’m crazy when they see it. One person said “wondered what that was, thought it was some sort of code”.
I’ve come to really despise the US pattern of 12/31/2017 as well as the slightly less awful European 31.12.2017.
Better examples come in the first 1.714286 weeks of each month.
Quick: is 03-07-08 in March, July, or August? Of what year?
I’m not sure why, but I have never internalized the rules for American or European dates. Maybe getting online young and having lots of exposure to both systems prevented either one from winning in me. The good part is that I have no internal friction scrawling YYYY-MM-DD across every date box or blank I find, sometimes even ones preprinted with __ / __ / 20__. It rarely occurs to me that someone may actually want a broken date there.
The bad part is that when I see a date like the one I used above, I have no clue what date it is intended to represent. I find that very curious. Everyone else I’ve ever met sees that string as a specific date. They may be wrong, if the person who wrote it follows a different scheme, but they still have a specific date in mind. And all I see is the *ERROR* indicator.
The space after 20 is for “th”, as in “2018/02/20th”, where you pick the month according to which month’s 20th makes sense for the form you’re filling out.
Don’t let anyone tell you different.
;)
I’ve been using the YYYY-MM-DD format as well for almost my entire IT career (only about 2 decades :) ). Only format that makes sense IMHO.
When I have to *write* a date down for other people to parse I use the “%e %b %Y” (i.e. 5 Mar 2018) because if someone can’t figure that out they are too stupid to breed.
IMO the only right way to do it on a computer is %Y %m %d %H %M %S with whatever separators you want/need.
I’ve been doing Unix Adminy/support stuff on multiple continents for over 20 years, and if you’re not running UTC/ZULU you’re *wrong*.
Now, convincing clients of that…
YYYY-MM-DD is God’s chosen format, and how the Japanese do dates normally. They and much of the rest of Asia, possibly borrowing from the Chinese, express Julian dates in descending order of granularity from year to day, and it’s one of those things that make such perfect sense that I wish everybody would adopt it.
Unfortunately, humans being what they are, we’ll be stuck with DMY (most of the rest of the world) and MDY (filthy Americans) for some time to come.
YMD and DMY have the advantages of corresponding to big and little endian, respectively. MDY just makes no sense at all.
He transmits through his x86 processor.
“YYYY-MM-DD is God’s chosen format, and how the Japanese do dates normally”
In my experience Japanese write it with slashes, rather than dashes. So 2018/3/3, not 2018-03-03. Then, if they drop the year, it becomes just 3/3. (Which leads to the same problem of: what do you mean 12/3? Especially when a Japanese person does it in place that is normally little-endian for dates. )
And of course they still use kanji often, and use the era (reign of the emperor) for many official things.
I.e. not even the enlightened Japanese get it correct. (And actually, Japanese society and culture is in many ways not at all enlightened, even if many individuals within the society and culture are.)
> Japanese society and culture is in many ways not at all enlightened,
And the award for understatement of the year is already wrapped up, and it’s only March.
DMY is little-endian with respect to date components but not digits. As long as decimal number notation is inherently big-endian, YMD will still be the only one internally consistent order.
DMY is more human reader-friendly as a display format. Eyeballing any list sorted by date and having many entries per month, you need to read only the first two characters to know which date it is; the month and year will be the same as the previous entries. In fact, if you are looking for what happened at a certain time when things crashed in a log, you would find it fastest with an SSMiMiHHDDMoMoYYYY format, except that while date formats differ per country, time formats do not, so that is not going to happen. But I hope you understand how it makes sense. If you know things crashed around 23:45:00, I would start reading at 23:40:00 and go down, and the easiest way to quickly scan that would be a reversed, 00:40:23 time format.
That is ENTIRELY training, and I’m not even sure it’s accurate at that.
If you’ve got a list of 200 entries that happened over the last 180 days with YYYYMMDD, they sort correctly, and your eye, after the first minute or three, will ignore the bits it doesn’t need. The change in the patterns will stand out signifying months and days.
Years ago at the beginning of the Era Of Ugly Type one of the founders of Emigre magazine said “you read best what you read most”. It’s true, if unfortunate.
I found that people could figure out what I meant, despite thinking it odd. And I had a date that actually collated correctly. I get involved in meetings about how we’re going to share data between applications, and the topic of naming files with date/time strings in the filenames often comes up. I always suggest “YYYY-MM-DD[{T|_}HHMM[SS]]” (I’m not dogmatic about the T as ESR is; it doesn’t matter what the separator is so long as both ends agree on it) and explain that “2018-01” easily collates after “2017-12”, which is not true of MM-DD-[YY]YY or DD-MM-[YY]YY formats. And January is a bad time to realize your logic’s screwed up, for reasons I’ll leave unspecified in public.
Thank you for the rant; this is one of my peeves too.
Any chance of disrupting libc with respect to logging?
I’m not sure I understand the issue with git. Git stores date/time as UTC internally, then displays the local time as calculated by the local offset. If git log entries get their displayed time adjusted by the local offset, then don’t all entries get adjusted by the same offset, thereby preserving the relative order of the entries in the git log?
>If git log entries get their displayed time adjusted by the local offset,
They get displayed adjusted not by the local offset but by an offset carried with the date. So you get local time in the committer’s time zone, not yours. (This is what Jakub Narebski referred to as “original local time” a few comments upthread.)
That’s…
Wow.
That will NEVER cause any confusion.
> They get displayed adjusted not by the local offset but by an offset carried with the date. So you get local time in the committer’s time zone, not yours.
Ahh, now I understand. And I think that’s a bad design. If you’re storing UTC internally, I don’t see a need to store the committer’s local offset as well. That’s potentially different for everyone potentially accessing the repo. Just query the local system or the local config for the offset to apply and display accordingly. Storing the committer’s offset seems like unnecessary complexity.
>Storing the committer’s offset seems like unnecessary complexity.
It is. Git screwed the pooch on this one. I wish I knew what they were thinking when they did that.
Correlate bugs with commit time of day?
No, not seriously.
That was my immediate thought!
First, it could be easily corrected on display with appropriate (not yet existing) date format configuration.
Second, this is what allows gitweb (git web interface) to display a warning if the commit was authored at night (that is, between 0 and 6 AM). Note that gitweb displays the date both in the creator’s timezone and in UTC… if I remember it correctly.
What is a bit strange is that with Git you have the choice of pretty-formatted date in original local time, pretty-formatted date in your local time… or raw timestamp. There is no option (though there is place for it) for showing pretty-formatted date in UTC.
Just FYI, there is also relative date format (“2 weeks ago”)…
“pretty-formatted date in your local time” + “TZ=UTC git log …”
Having been battling with ‘daylight saving’ for many years now, and adding ‘timezone’ as a secondary element, getting the RIGHT local time for an event is still unpredictable. Yes, the starting point is simply to run the server at UTC and only store the location of an event to provide local time at the site. Browser offset is still a joke, and one needs a proper location tag for a client user in order to display times in THEIR local time. The major roadblock here, however, is that TZ is inconsistent in providing pre-1970 DST data, so genealogical material can’t easily be UTC-normalized. tzdist was intended to provide a method of identifying which version of timezone rules were used to store normalized data, and to provide triggers when the stored data is inconsistent with the current rules, but without a reliable master source, has developing it been a waste of time?
The Z element on a timestamp is *NOT* the timezone … it is simply an offset, and something else is needed to provide the correct rules to be applied for local time.
Having written calendar software, I’m still of the opinion that we shouldn’t have time zones at all. Make the whole world UTC and get used to it.
I agree, let’s get rid of zones.
> Zulu
Which was originally Greenwich Mean Time, and then Coordinated Universal Time, and… I’m guessing attempts to plaster over “Not Invented Here.”
Now if we could only persuade people to use the Julian calendar it would make things even simpler.
Not an attempt at NIH, the US Navy uses single letter codes to denote time zone, with Z assigned to GMT. Zulu is the phonetic alphabet derivation.
The system actually predates Standard Time, having originated in 1802 in Bowditch’s American Practical Navigator, and has always been linked to GMT (which of course also long predates Standard Time). The US Navy picked it up from Bowditch and it was later standardized across all US military branches (and later most NATO and allied militaries via the ACP121 standard).
Note J or Juliet is the other common military usage, indicating local time.
Nice filk, though it took me an embarrassingly long time to realize what tune you were using.
Dude!
The output of systemd-journald can be viewed with UTC timestamps by passing journalctl the --utc option. --output=short uses classic syslog style and is the default; short-iso uses a similar style but with ISO 8601 timestamps.
Default: May 01 20:31:50
short-iso: 2017-05-01T20:31:50-0500
short-iso with --utc: 2017-05-02T01:31:50+0000
Seems like journalctl can be made to do the right thing with just a command line tweak.
Or UTC should be the default.
Just be glad you don’t have to deal with the screwup we encountered when fixing the New Delhi node of a distributed system. Commands were hanging because the (weird and nonstandard, for complicated reasons) SSH client wouldn’t recognise responses with timestamps in the future, and all the responses were 5½ hours in the future. But that shouldn’t happen, because all the timestamps are in UTC. Turned out that some idiot sysadmin had set the system to think it was in GMT and put its notion of UTC forwards 5½ hours to match local time.
Incidentally, UTC is a horrible mess, why can’t we use TAI instead? </whine>
And just be glad that you do not have to manage systems in the Turks and Caicos Islands.
TCI has been using Atlantic Standard Time. ( same as Bermuda, Nova Scotia, New Brunswick). Most on-line maps of the Time Zones, do not get this correct, lumping TCI with the Bahamas which is EST.
On March 11, 2018 at 2:00 am they are changing to Eastern *Daylight* Time.
So a TWO, TWO TWO Hours in one, change!
Methinks some will lose more than 2 hours of sleep.
I think you have a sign error in your math – 2AM AST is 1AM EST (both are 6AM GMT) is 2AM EDT (spring forward to go from *ST to *DT). So there’s no immediate net change from this move.
I like this way of changing your jurisdiction’s time zone in general:
If you don’t observe daylight saving time and want to move west, then at the spring time change, adopt DST and move to the next later time zone.
If you do observe DST and want to move west, then at the autumn time change, abandon DST.
If you don’t observe DST and want to move east, then at the autumn time change, adopt DST.
If you do observe DST and want to move east, then at the spring time change, abandon DST and move to the next earlier time zone.
The advantage of this method is that everyone gets a grace period until the second time change; for up to half a year after the change becomes official, failure to update your devices won’t cause problems. And of course you could daisy-chain these switches.
By George I think he’s got it. And I missed it by…. 4 hours!
And my excuse goes along with the grey hair….
No, this is ugly. AST and EDT are the same time, but are not the same zone. Same for MST and PDT: Phoenix did not magically move from Mountain Time to Pacific time over the weekend. Back when I worked at the KC office of a company with a regional HQ in South Bend, IN, they didn’t move from Eastern to Central and back. EST and CDT are the same time, and EST and CST differ by an hour.
People who don’t understand the difference between ST and DT have no business deciding how to tell what time it is.
To be clear, I suggested this as a good way to make permanent, not seasonal, changes. Occasionally some jurisdiction decides they’d rather be in a different time zone. You sometimes see it happen when a government realigns its foreign policy – Franco notably changed Spain from British to German time during WWII.
qmail logs using TAI.
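Specifically, the DJB tools (multilog, tai64n) prepend TAI64N labels to each line. A rough Python sketch of decoding one (converting all the way to UTC would additionally require the TAI-UTC leap-second table; the label below is made up):

# TAI64N label: "@" + 16 hex digits of seconds + 8 of nanoseconds.
# Seconds are counted in TAI; the value 2**62 marks the 1970 epoch.
def decode_tai64n(label):
    assert label.startswith("@") and len(label) == 25
    secs = int(label[1:17], 16) - 2**62   # TAI seconds since 1970-01-01
    nanos = int(label[17:25], 16)
    return secs, nanos

print(decode_tai64n("@400000005a9ebdf50bebc200"))  # (1520352757, 200000000)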
I always log in Zulu time, with a nanosecond timer, even. Problem: my operating system (Linux) and the libraries built on top don’t deal well with leap seconds. How do you properly disambiguate things there?
>I always log in Zulu time, with a nanosecond timer, even. Problem: my operating system (Linux) and the libraries built on top don’t deal well with leap seconds. How do you properly disambiguate things there?
Zulu time is unambiguous. It’s Linux time that isn’t – it can skip and stutter.
Linux time, or POSIX time format? I thought that the Linux kernel used UTC internally. Or are you referring to how the clock has to step backwards since it does not implement 23:59:60 at a leap second insertion? The POSIX time spec is obviously broken in that regard; I’m a little surprised that the Unix spec was never updated to be in line with modern timekeeping standards, but I at least partially understand why the whole “day is only ever exactly 86400 seconds” strictness was chosen.
>Or are you referring to how the clock has to step backwards since it does not implement 23:59:60 at a leap second insertion?
That’s what I’m referring to.
>The POSIX time spec is obviously broken in that regard
I know it looks that way at first sight. You’ll understand the problem when you get why it can’t be unbroken in any way that is not arguably worse.
The problem is really fundamental. We want a monotonic fixed counter, but the Earth’s rotation is not just variable, it’s variable with unpredictable jitter. Thus, conversion between solar time and any monotonic counter is intrinsically messy. POSIX time had to make a choice: prioritize tracking solar time, breaking interval arithmetic and timestamp uniqueness, or prioritize uniqueness and interval arithmetic and break tracking solar time.
There was no right answer, just the less painful choice.
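The resulting stutter is easy to exhibit. A minimal Python sketch:

# POSIX time cannot represent 23:59:60, so the 2016 leap second and the
# midnight that follows it collapse onto the same counter value.
import calendar

leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
midnight = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
assert leap == midnight == 1483228800   # two instants, one timestamp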
The right answer is to make UTC *UNIVERSAL* and abandon the notion of syncing with solar time.
It will make it much easier when we colonize the belt and move out beyond the solar system boundaries.
>The right answer is to make UTC *UNIVERSAL* and abandon the notion of syncing with solar time.
I disagree. In defense of calendrical irregularity
If system time accommodated minutes with 59 or 61 seconds, what kind of problems does that cause? Long term time comparisons would potentially be off by some tens of seconds, but short term comparisons would be off by no more than one second, and would have the advantage that times always match UTC.
>If system time accommodated minutes with 59 or 61 seconds, what kind of problems does that cause?
If you happen to format the date during a leap-second insertion, you might see a stutter like that. It already works that way.
>I disagree. In defense of calendrical irregularity
William’s point is that off Earth, we’ll need different intuitions about how dates and times correspond to our routines. A colony out in the asteroid belt might well adopt, for civil time, the monotonic second count everybody uses for high-precision time. A colony dirtside on Mars would likely use a clock and calendar specific to Mars, with locally appropriate irregularities; but, in my view, there’s no use in keeping the civil top of second on either Earth or Mars synced to the atomic top of second used for high-precision time, or for civil time in the belt.
> A colony dirtside on Mars would likely use a clock and calendar specific to Mars
Or even controllers on Earth working with Martian rovers….
Yes, and since the length of a mean solar day on Mars isn’t even close to an integral number of seconds, people who live/work there won’t have kittens about it. 88,775.244s means some days go from 00:00:00 to 24:39:34 before rolling back to 00:00:00, but about a quarter of them go to 24:39:35 instead.
Due to the greater eccentricity of the Martian orbit it’s probably easiest to make Long Days be a block of days centered on Perihelion, when the Equation of Time is naturally falling behind anyway, or perhaps the last x days of each year to make it even simpler. The exact number of Long Days each year will vary between 163 and 164.
So Martian calendars will have to track whether it’s Leap Year (Leap Day being New Year’s Eve in those years, not a day stuck in the middle of the calendar as we stupidly do on Earth) and the start (and possibly end) of Long Days, but everything else will be the same. And kids who grow up keeping time that way won’t think there’s anything weird at all about Short Hour.
> it’s probably easiest to make Long Days be a block of days centered on Perihelion
Instead of a block all together, why not do “short-short-short-long”, and then even it out quarterly or annually. Gets the clock even less out-of-sorts with the local noon.
Or, you could do it with a stutter in the rhythm of the SSSL pattern – see also “Kelly days”. (My brother is a retired firefighter, and we all got used to this quirk in his schedules.)
Because it’s complicated to make it a pattern (that still has to be evened out) and the eccentricity of the orbit already gets it out-of-sorts with the local noon, so piling up the extra seconds during the time the solar day is longer anyway is actually better.
Your essay is correct to the extent that we are still circling the same sun.
I’m arguing that we abandon that now to get to the point where we live in “space habs or dome colonies”.
Also, when you’re talking about time drifting a second every year and a half, if you strictly followed the atomic UTC you’d get about a minute to a minute and a half of drift OVER A STANDARD LIFETIME.
That means that “daytime” will mostly have the same meaning when you’re 90 as it did when you first realized what “day” meant.
Not enough drift to really matter.
Which is irrelevant, because my position is more based on creating a mental focus OUT THERE, not down here.
I still think a big part of the problem is trying to couple the monotonic counter to solar time too closely. Civil time should use the actual rotation of the Earth as its reference clock, with no connection to atomic time, and high-precision time should be monotonic atomic time, with no attempt to stay synchronized with the rotation of the Earth, and conversion between the two should be a matter of lookup into tables of historical data (or of projection, for future dates).
>I still think a big part of the problem is trying to couple the monotonic counter to solar time too closely. Civil time should use the actual rotation of the Earth as its reference clock, with no connection to atomic time, and high-precision time should be monotonic atomic time, with no attempt to stay synchronized with the rotation of the Earth, and conversion between the two should be a matter of lookup into tables of historical data (or of projection, for future dates).
Surprise! This is effectively how it works now.
You don’t know this because the combination of Unix, NTP, and the TZ database works very hard at hiding the ugly bits.
Eric, the problem with the way it works now is that we try to use the atomic second for civil time, and then add extra seconds in when civil time diverges from solar time. What we should do is have timeservers serve up both high-precision and civil time, and have civil time be entirely a function of the rotation of the Earth, so that the two diverge smoothly instead of discretely (in other words, we sacrifice a synchronized top of second for the two timescales in order to make the number of seconds in n days a constant 86400*n).
> (in other words, we sacrifice a synchronized top of second for the two timescales in order to make the number of seconds in n days a constant 86400*n)
/me boggles
*WHICH* fscking second do you use for science and engineering??!!! And when they are different, which one do you report, oh say, medical data in? Or something as mundane as flow rates in gallons per hour (short hours? metric hours?? WTF?!!)
DON’T
DO
EEET!!!!!
I’d expect science and engineering to use the high-precision second.
For medical data, probably the civil second, as I doubt enough precision in timing would be required to necessitate the use of the high-precision second, and biological rhythms are going to be entrained to the solar day.
In general, the question would be “do you need precision on the order of seconds per year?”
>/me boggles
/me agrees.
Breaking the second is a deeply horrible idea that would sooner or later cause a lot of fatal malfunctions.
I believe the way Google has modified the Linux kernel’s handling of leap seconds is to make every second of the day to which the leap second will be added/subtracted just a tiny bit longer/shorter, so it maintains a very smooth clock that most people won’t care is actually off by a fraction of a second for part of a day every few years.
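A toy sketch of such a smear (Google documents a 24-hour linear smear; the window, centering, and dates here are illustrative only):

# Linear leap smear: never step the clock; instead make every second in
# a 24-hour window fractionally longer, absorbing the whole leap second.
LEAP = 1483228800            # ideal counter value at 2017-01-01T00:00:00Z
WINDOW = 86400               # smear across 24 hours
START = LEAP - WINDOW // 2   # window centered on the leap second

def smeared(t):
    # t: seconds on an ideal leap-free counter; returns the smeared reading.
    frac = min(max((t - START) / WINDOW, 0.0), 1.0)
    return t - frac

assert smeared(START) == START                        # untouched before
assert smeared(START + WINDOW) == START + WINDOW - 1  # one second absorbed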
But this is where we came in: solar time is very erratic from day to day, hence ‘mean time’.
England got a time zone because it was the only practical way to run a railway, particularly an East-West one. If Oxford is 5 minutes behind London, and it’s a two hour journey, when did you leave?
Greenwich of course because of astronomers and then the Navy.
I’m going to timestamp with microseconds until the heat death of the universe.
The timestamping countdown of DOOM
What precision is that estimate?
Hey, what’s a few hundred millennia between friends ?
We’ll be long gone, and if humans aren’t extinct, and are still stuck dealing with this problem, then they deserve to die off in a fluster of shame ;)
What I’m getting from this post and subsequent discussion is that there are times when logging in local time is useful but that logging using a universal time is more often the desired standard. To make everyone happy a system that would dual log to both local and universal time would be ideal.
No, that would lead to logs where the two timestamps occupy half the viewer width, leaving only a little space for actual log message text.
RFC3339 spends much of its text on local-timezone timestamps.
I expect you to want at least: 1990-12-31T23:59:60Z
Not its RFC3339 alternate version 1990-12-31T15:59:60-08:00
All the versions with the current TZ offset seem to be in local time
So what I expect you would really want would be an extension to RFC3339 for Zulu time with the current local time offset in the reverse direction.
Perhaps something like 1990-12-31T23:59:60Z+08:00 (not in RFC3339)
All the examples are supposed to be the leap second in 1990 as seen in the USA Pacific timezone.
Or maybe I just mis-read the spec, I was using section 5.6
Plus, of course, RFC3339 gives +00:00 and -00:00 significantly different meanings:
+00:00 for people living in the UTC timezone
-00:00 for people who know the UTC time but do not know their offset from UTC
Excellent read with thorough analysis on why abolishing timezones doesn’t work:
https://qntm.org/abolish
And on continuous timezones, which also doesn’t work:
https://qntm.org/continuous
>Excellent read with thorough analysis on why abolishing timezones doesn’t work:
That’s a better and funnier way of making the point I was driving at in
“In defense of calendrical irregularity”.
Actually that was your post that I had originally intended to reply to with this, after having referred to it from here (my second reading of that – I’m an avid lurker). Then I decided to reply here since it’s the newest post.
I very much hope that they have fixed it, but I was lately bitten by the assumption that dates are stored in the local timezone.
When exporting to the XML dump format, phpMyAdmin 4.1.6 (from 2014 or so) prints values stored in datetime-type fields in an ISO-like format without an explicit timezone – in the local time of the server (!). And this is an *export* format.
In Python 2.7 (and possibly also in Python 3), datetime.strftime(‘%s’) always assumes that the datetime is in the local timezone, even if it is an offset-aware datetime with non-None tzinfo.
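To make the trap and the fix concrete, a Python 3 sketch:

# The trap: "%s" is not a standard strftime code; Python hands it to the
# platform, which ignores tzinfo and interprets the fields as local time.
import calendar
from datetime import datetime, timezone

dt = datetime(2018, 3, 6, 20, 11, 7, tzinfo=timezone.utc)

bad = dt.strftime("%s")                     # platform-dependent, varies with host TZ
good = calendar.timegm(dt.utctimetuple())   # 1520367067, regardless of host TZ
# On Python 3.3+, int(dt.timestamp()) is the simplest correct spelling.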
…