Yes, I appreciate the effect of automation on standardization; it really is a great thing. I just have the impression that differences stemming from the deep, very much method-shaped variability of research (like ‘radiation’ in the evolutionary sense) might be only superficially addressed by standardization as I have read of it so far. (I’m still reading, and expect this to change.)
I’m starting from an image of a spectrum, on which a variable and its measure can be anywhere from only tenuously linked (like ‘length’ and ‘meter’) to pretty much baked together (like the phytohormone example). This image might itself be just wrong.
The meter is not our only measure of length; we also have astronomical units. In school they taught us that these were different units and that we can speak with more precision about the distance between two stars if we talk in astronomical units.
For a long time there were also a bunch of interesting questions, such as whether it makes sense to say that the norm meter in Paris is 1 meter long, and whether it stays exactly 1 meter long if its surface oxidizes a bit.
Metal changes its length with temperature. That means you need a definition of temperature in order to define the length of the meter if you define it via the norm meter.
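To put a rough number on that sensitivity, here is an illustrative back-of-the-envelope calculation using the standard linear-expansion formula; the expansion coefficient I plug in for a platinum-iridium bar is an assumed ballpark value, not an official specification:

\Delta L = \alpha \, L \, \Delta T \approx (9 \times 10^{-6}\,\mathrm{K}^{-1}) \times (1\,\mathrm{m}) \times (1\,\mathrm{K}) \approx 9\,\mu\mathrm{m}

A one-kelvin change already shifts a one-meter bar by several micrometers, which is why an artifact-based definition has to specify a reference temperature at all.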
Newton thought that there was a fixed “temperature of blood”. Fahrenheit used “body temperature” as a measuring stick for a specific temperature.
It took a lot of science to settle on the freezing point and the boiling point of water as the way to norm temperature. If you shape the vessel the right way, it’s possible to boil water at 102 °C, so they needed to specify the right conditions.
I either didn’t know, or hadn’t thought about in this context, most of what you say here. Thank you.
Yet this (the exact length of a meter) is more or less settled, in the sense that very many people use it without significant loss of what they want to convey. This is kind of exactly the thing I’d like to learn about: how unit-variable relationships evolve and come to some ‘resting position’. How people first come to think about the matter of a subject, then about the ways to describe it, and finally about the number of a common ‘piece’ used to measure it.
I think the Applied Ontology book is worth reading, as it touches on a lot of the practical concerns that come with the need for standardization due to automated knowledge processing. Even if you aren’t interested in automated knowledge processing, it’s still useful.
Inventing Temperature: Measurement and Scientific Progress by Hasok Chang is a good case study of how our measure of temperature evolved. Temperature is a good example because conceptualizing it is harder than conceptualizing length. In the Middle Ages people had their measures for length, but they didn’t have one for temperature.
The definition of the meter via the wavelength of light, instead of via the norm meter, was settled in 1960, but the number of people for whom this raised practical concerns was relatively small.
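For concreteness (quoting the figure from memory, so treat it with care): the 1960 definition fixed the meter as a count of wavelengths of a particular krypton-86 emission line,

1\,\mathrm{m} \equiv 1\,650\,763.73 \times \lambda_{\mathrm{Kr}\,86},

where \lambda_{\mathrm{Kr}\,86} is the vacuum wavelength of the chosen orange-red line of krypton-86.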
Interestingly, we have at the moment a proposed change to the SI system that redefines the kilogram: https://en.wikipedia.org/wiki/Proposed_redefinition_of_SI_base_units
It changes the uncertainty that we have over a few constants. Beforehand we had an exact definition of the kilogram, and afterwards we would only know it to about 8 digits of accuracy. On the other hand, we get more accuracy for a bunch of other measurements. It might be worth reading a bit into the debate if you care about how standards are set.
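To make the trade-off concrete, here is a rough sketch of how the uncertainty moves under the proposal; the numbers are approximate orders of magnitude rather than official CODATA figures:

Before: \; m(\mathcal{K}) \equiv 1\,\mathrm{kg} \text{ (exact by definition)}, \qquad u_r(h) \approx 10^{-8}

After: \; h \equiv 6.626\ldots \times 10^{-34}\,\mathrm{J\,s} \text{ (exact by definition)}, \qquad u_r(m(\mathcal{K})) \approx 10^{-8}

Here \mathcal{K} is the international prototype kilogram and u_r is relative standard uncertainty. The definitional exactness just migrates from the artifact’s mass to the Planck constant, which is where the ‘about 8 digits’ figure comes from.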