Standardization is the core word. Institutions like ISO exist to create common standards. Journals can then force scientists to actually follow those standards.
Controlled vocabularies and applied ontology seem to be other key terms.
How useful standardization turns out to be depends a lot on the quality of the standard. The DSM-5, for example, seems to be a standard that holds science back, and as a result there are calls for funding research that tries to use new standards.
Thank you! I’ll look at/read “Applied Ontology: An Introduction” (ed. by Munn and Smith). The results are rather varied and have their own developed terminology, and this one looks as good a place to start as any.
Edit to add: tentatively, the “automated information systems” angle might not be what I’m looking for :(
Automated information systems require a fixed vocabulary.
If people who observe rats have a different idea of what a leg is than the people who study humans (for whom the leg is the part between the foot and the knee), there are problems with translating knowledge.
Humans might be smart enough to do the translation, but computers won’t be. As a result there’s interest in standardization. Bioinformatics needs that standardization, and that’s the background Barry Smith comes from.
Bioinformatics has an interest in standardization because automated information systems don’t work without it.
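To make that concrete, here is a minimal sketch (entirely my own illustration; the sources, field names, and numbers are made up) of why software chokes where a human reader would just translate:

```python
# My own illustration; field names and numbers are hypothetical.
# Two labs both record a "leg" length, but mean different anatomical spans:
rat_lab = {"leg_cm": 4.1}      # "leg" = whole hindlimb, hip to toe
human_lab = {"leg_cm": 45.0}   # "leg" = part between knee and foot only

# Naive automated merging would compare the two "leg_cm" fields directly.
# A controlled vocabulary instead pins every local field to one shared
# concept, so the mismatch is caught before any comparison happens:
ONTOLOGY = {
    ("rat_lab", "leg_cm"): "hindlimb_full_length",
    ("human_lab", "leg_cm"): "lower_leg_length",
}

def same_concept(a: tuple[str, str], b: tuple[str, str]) -> bool:
    """Only allow merging fields that map to the same shared concept."""
    return ONTOLOGY[a] == ONTOLOGY[b]

assert not same_concept(("rat_lab", "leg_cm"), ("human_lab", "leg_cm"))
```

A human sees “rat leg” and “human lower leg” and translates on the fly; the software only ever sees two fields named `leg_cm`, which is why the shared mapping has to be built explicitly.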
I remember a story, which I think comes from People Works (a book about Google’s HR department). It made the point that it’s not trivial for a company to have a shared definition of what it means to have 10 employees.
The people who pay the wages might count 6 full-time employees plus 4 half-time employees as 8 employees. When it comes to paying health insurance, it’s 10 employees.
The HR department might count a prospective employee as an employee the moment they sign the offer, while another department waits until their start date.
The fact that Google has a shared definition of employee allowed them to do much better statistics.
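As a back-of-the-envelope sketch (the 6/4 split is from the story above; the signed-but-not-started employee is my own hypothetical addition), the same office yields different counts depending on which definition you query:

```python
# Hypothetical numbers: the 6/4 split is from the story above; the
# signed-but-not-started employee is my own addition.
full_time = 6
half_time = 4
signed_not_started = 1  # offer signed, start date still in the future

# Payroll counts full-time equivalents:
payroll_count = full_time + 0.5 * half_time            # -> 8.0

# Health insurance counts heads, whatever their hours:
insurance_count = full_time + half_time                # -> 10

# HR counts from the signed offer; another department from the start date:
hr_count = full_time + half_time + signed_not_started  # -> 11
other_dept_count = full_time + half_time               # -> 10

print(payroll_count, insurance_count, hr_count, other_dept_count)
```

Four departments, four defensible numbers for “how many employees do we have”; statistics across departments only line up once everyone agrees on one of them.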
Yes, I appreciate the effect of automation on standardization; it is really a great thing. I just have the impression that differences stemming from the deep, very much method-shaped variability of research—like ‘radiation’ in the evolutionary sense—might be only superficially addressed using only the standardization-as-I-have-read-of-it. (I’m still reading, and expect this to change.)
I’m starting with an image of ‘variable and its measure can be only tenuously linked (like ‘length’ and ‘meter’) to pretty much baked together (like the phytohormone example)’. This image might itself be just wrong.
The meter is not our only measure of length. We also have astronomical units. In school we were taught that they were different units and that we can speak with more precision about the distance between two stars if we talk in astronomical units.
For a long time there were also a bunch of interesting questions, such as whether it makes sense to say that the standard meter in Paris is 1 meter long, and whether it stays exactly 1 meter long if its surface oxidizes a bit.
Metal changes its length at different temperatures. That means you need a definition of temperature to define the length of the meter if you define it via the standard meter.
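As a rough worked example (my own numbers; I’m assuming a linear expansion coefficient on the order of 10⁻⁵ per kelvin, which is typical for metals and not a value from this thread), warming a one-meter bar by a few degrees already shifts its length by tens of micrometers:

```python
# Rough sketch: linear thermal expansion, delta_L = L0 * alpha * dT.
# alpha is an assumed ballpark coefficient for a metal bar, not a
# value from this thread.
L0 = 1.0        # nominal bar length in meters
alpha = 9e-6    # assumed linear expansion coefficient, per kelvin
dT = 5.0        # warm the bar by 5 kelvin

delta_L = L0 * alpha * dT
print(f"{delta_L * 1e6:.0f} micrometers")  # -> 45 micrometers
```

That drift is enormous compared to the precision a length standard is supposed to deliver, which is why the defining temperature had to be specified along with the bar.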
Newton thought that there was a fixed “temperature of blood”. Fahrenheit used “body temperature” as a measuring stick for a specific temperature.
It took a lot of science to settle on the freezing point and the boiling point of water as the right way to standardize temperature. If you shape the vessel the right way, it’s possible to boil water at 102 degrees C, so they needed to specify the right conditions.
I either didn’t know, or hadn’t thought about in this context, most of what you say here. Thank you.
Yet this (the exact length of a meter) is more-or-less settled, in the sense that very many people use it without significant loss of what they want to convey. This is kind of exactly the thing I’d like to learn about—how unit-variable relationships evolve and come to some ‘resting position’. How people first come to think about the matter of a subject, then about the ways to describe it, and finally about the number of a common ‘piece’ used to measure it.
I think the Applied Ontology book is worth reading, as it touches on a lot of the practical concerns that come with the need for standardization for automated knowledge processing. Even if you aren’t interested in automated knowledge processing, it’s still useful.
“Inventing Temperature: Measurement and Scientific Progress” by Hasok Chang is a good case study of how our measure of temperature evolved. Temperature is a good example because conceptualizing it is harder than conceptualizing length. In the Middle Ages people had their measures for length, but they didn’t have one for temperature.
The definition of the meter via the wavelength of light instead of via the standard meter was settled in 1960, but the number of people for whom this raised practical concerns was relatively small.
Interestingly, there is at the moment a proposed change to the SI system that redefines the kilogram: https://en.wikipedia.org/wiki/Proposed_redefinition_of_SI_base_units
It changes the uncertainty that we have over a few constants. Beforehand we had an exact definition of the kilogram; afterwards we only know it to 8 digits of accuracy. On the other hand, we get more accuracy for a bunch of other measurements. It might be worth reading a bit into the debate if you care about how standards get set.
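As a quick illustration of what 8 digits means in practice (my own arithmetic, based only on that figure), the residual uncertainty on a one-kilogram artifact comes out to roughly 10 micrograms:

```python
# My own arithmetic, keyed to the "8 digits of accuracy" figure above:
# 8 significant digits corresponds to a relative uncertainty of about 1e-8.
relative_uncertainty = 1e-8   # assumed from "8 digits of accuracy"
mass_kg = 1.0                 # a one-kilogram artifact

uncertainty_kg = mass_kg * relative_uncertainty
print(f"{uncertainty_kg * 1e9:.0f} micrograms")  # -> 10 micrograms
```

The trade-off in the debate is where that kind of uncertainty is allowed to live: in the artifact, or in the constants defined relative to it.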