JavaScript is a language of many silly things, and one of them is:
> new Date()
Wed Aug 24 2022 …
> new Date().getFullYear()
2022
> new Date().getMonth()
7
> new Date().getDate()
24
It represents 2022-08-24 as (2022, 7, 24). One-based indexing for the year and day, but zero-based indexing for the month.
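A common consequence: formatting a date as ISO 8601 by hand requires remembering to add one to the month, but not to the day or year. A minimal sketch:

```javascript
// Construct 2022-08-24; note the month argument is 7, not 8.
const d = new Date(2022, 7, 24);

// getMonth() is zero-based, so formatting needs a +1;
// getFullYear() and getDate() are used as-is.
const pad = (n) => String(n).padStart(2, "0");
const iso = `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
// iso === "2022-08-24"
```

Forgetting the `+1` silently produces "2022-07-24", which is why this convention catches so many people.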
In this case, however, the problem was copied from Java:
getMonth: The value returned is between 0 and 11, with the value 0 representing January.
getDate: The value returned is between 1 and 31 representing the day of the month.
I’d love to blame Java, but they seem to have copied the problem from C:
localtime(3):
tm_mday: The day of the month, in the range 1 to 31.
tm_mon: The number of months since January, in the range 0 to 11.
Looking at the Unix History repo, the first mention of "month (0-11)" is in 1973's Research Unix V4:
The value is a pointer to an array whose components are
.s3
.lp +5 5
0 seconds
.lp +5 5
1 minutes
.lp +5 5
2 hours
.lp +5 5
3 day of the month (1-31)
.lp +5 5
4 month (0-11)
.lp +5 5
5 year \*- 1900
.lp +5 5
6 day of the week (Sunday = 0)
.lp +5 5
While this may have been an original decision by Dennis Ritchie, it’s also possible it was copied from an even earlier system. Does anyone know?
Comment via: facebook
Is there a correlation with a language’s choice of a lower bound of arrays?
Months are often represented as a sequence of characters, rather than a number.
An array of strings, of month names, would be indexed by a number to obtain the name. Languages with zero-based arrays would use zero-based month numbers, while languages with one-based arrays would use one-based month numbers.
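The indexing argument is easy to see in JavaScript, where the zero-based getMonth() drops straight into a zero-based array with no offset. A sketch (the monthNames table is my own, not part of the Date API):

```javascript
const monthNames = [
  "January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December",
];

// Zero-based getMonth() indexes the zero-based array directly.
const d = new Date(2022, 7, 24);
const name = monthNames[d.getMonth()]; // "August"
```

A one-based month number would need `monthNames[m - 1]` here, which is the offset the zero-based convention avoids.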
The idate function in Fortran (one-based arrays) has one-based month numbers.
In Algol and Pascal the array base was user-selectable for each array definition, but these languages didn't have any standard library functions that returned a numeric value for the month. I cannot think of any appropriate library extensions for these languages.
I can confirm Julia and R both have one-based arrays and also one-based month numbers. I’m guessing they tend to line up quite often.
R copies Fortran, e.g., column-major ordering rather than row-major (what C and … do), and slightly different operator precedence. I'm guessing that Julia does the same?
I learned C and Unix in the 80s, and it was Just Obvious that 0-based counting was the right answer for this kind of data. It wasn’t justified or questioned, being clearly the right way to think.
And it IS obvious for things that get used as indices into arrays (monthname, dayname, etc.), and confusing that it’s NOT universal—day of month is 1-indexed.
Nowadays, it’s probably like the semi-apocryphal causal chain from Roman chariots to 18th century rail width to modern solid rocket booster diameter. Keeps getting repeated for compatibility and because it’s not clearly worse than alternatives.
Related to these kinds of details that are not obvious, but just need to be understood: my only Y2K bug was due to the fact that the older time struct used "years since 1900" as the year value, and I had a few pages that displayed "Jan 1 19100" until I fixed them to be 1900 + $year instead of "19" + $year.
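JavaScript inherited the same years-since-1900 convention in its deprecated getYear() method, so this particular Y2K bug reproduces directly in a modern console. A sketch:

```javascript
const d = new Date(2000, 0, 1); // Jan 1 2000

// getYear() returns years since 1900 (100 for the year 2000);
// it's deprecated for exactly this reason — use getFullYear() instead.
const buggy = "19" + d.getYear();         // "19100"
const fixed = String(1900 + d.getYear()); // "2000"
```

String concatenation with the "19" prefix only ever worked for the 1900s; adding 1900 arithmetically is correct in any century.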