Tick marks for years are ambiguous. Is the tick mark indicating the start of the year? The middle? The end? I have worked with chart libraries where it's even the current date, n years ago, e.g. a tick mark labelled “2010” actually meaning March 9, 2010. A better alternative to tick marks is “year separators”: the “2010” is placed between two tick marks rather than under one, and those boundary marks can only be interpreted as the start and end of the year.
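As a sketch of that separator idea (matplotlib assumed; this locator/formatter setup is my own illustration, not a built-in “separator” feature): put unlabeled major ticks at the year boundaries and hang the year labels on invisible mid-year minor ticks.

```python
import datetime

import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter

fig, ax = plt.subplots()
ax.plot([datetime.date(2009, 3, 1), datetime.date(2011, 10, 1)], [1, 2])

# Major ticks at Jan 1 act as the year separators; suppress their labels.
ax.xaxis.set_major_locator(mdates.YearLocator(month=1, day=1))
ax.xaxis.set_major_formatter(NullFormatter())

# Mid-year minor ticks carry the labels, so "2010" sits *between* the
# separators at 2010-01-01 and 2011-01-01.
ax.xaxis.set_minor_locator(mdates.YearLocator(month=7, day=2))
ax.xaxis.set_minor_formatter(mdates.DateFormatter("%Y"))
ax.tick_params(axis="x", which="minor", length=0)  # label only, no tick line

fig.canvas.draw()
minor_labels = [t.get_text() for t in ax.get_xticklabels(minor=True)]
print(minor_labels)  # e.g. ['2009', '2010', '2011']
```

The trick is that the reader never has to guess what a labelled tick means: labels only ever appear between two boundary marks.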
Regarding temperature: physically speaking, only 0 Kelvin is an objective zero point, such that something at 20 K has twice as much temperature as something at 10 K. Kelvin is a “ratio” scale. Celsius and Fahrenheit are only “interval” scales, so 20°C is not twice as hot as 10°C, but only ~3.5% hotter (293.15 K vs. 283.15 K). (See also this interesting Wikipedia article on the various types of scales.) This is true even though 0°C (water freezes) seems more objective than 0°F.
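To make the arithmetic behind that “~3.5%” concrete, a quick worked check:

```python
# Ratios only make sense on the Kelvin (ratio) scale, so convert first.
def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

ratio = celsius_to_kelvin(20) / celsius_to_kelvin(10)  # 293.15 / 283.15
print(f"20°C is {(ratio - 1) * 100:.1f}% hotter than 10°C")  # -> 3.5% hotter
```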
Nonetheless, Kelvin is not relevant to what we perceive as “small” and “large” differences in everyday life. We wouldn’t say 20°C only feels a mere 3.5% warmer than 10°C.
I guess it helps to include two familiar “reference points” on the temperature axis of a chart, like the freezing and boiling points of water (0°C and 100°C, or explicit labels for Fahrenheit) or “fridge temperature” and “room temperature”. That should give some intuitive sense of distance along the temperature axis.
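A minimal sketch of those reference lines (matplotlib assumed; the specific values for “fridge” and “room temperature” are my illustrative picks):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 18, 35])  # some temperature series in °C

# Dashed horizontal lines give the axis familiar anchors.
for temp_c, label in [(0, "freezing"), (4, "fridge"), (21, "room temp")]:
    ax.axhline(temp_c, linestyle="--", linewidth=0.8, color="gray")
    ax.annotate(label, xy=(0.99, temp_c),
                xycoords=("axes fraction", "data"),
                ha="right", va="bottom", fontsize=8)
ax.set_ylabel("Temperature (°C)")
fig.savefig("temps.png")
```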
Good point regarding year tick marks! I was thinking that labeling 0°C would make the most sense when freezing is really important. Say, if you were plotting historical temperature data and trying to estimate the last frost date in spring or something. Then, 10°C would mean “twice as much margin” above freezing as 5°C.