You propose the neato idea of using fractional truth values to deal with statements like “this is tall”, and boost it with a way to adjust such truth values as height varies. Somehow you missed that we already have a way to handle such gradations; it’s called “units of measurement”.
Units of measurement don’t work nearly as well when dealing with things such as beauty instead of length.

Then neither does fuzzy logic.
I think an important distinction between units of measurement and fuzzy logic is that units of measurement must pertain to things that are measurable, and they must be objectively defined, so that if two people express the same thing using units of measurement, their measurements will be the same. I see no reason that fuzzy logic shouldn’t be applicable to things that are simply a person’s impression of something.
Or perhaps it would be perfectly reasonable to relax the requirement that units of measurement be as objective as they are in practice. If Helen of Troy was N standard deviations above the norm in beauty (trivia: N is about 6), we can declare the helen equal to N standard deviations of beauty, and then agents capable of having an impression of beauty could look at random samples of people and say how beautiful they are in millihelens.
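For concreteness, the proposal amounts to a simple rescaling. This is a toy sketch, assuming (as the comment does) that the helen is defined as 6 standard deviations of beauty above the population mean; the function name and numbers are illustrative, not from any real convention:

```python
# Toy sketch: expressing subjective beauty ratings in millihelens.
# Assumption from the comment above: 1 helen = 6 standard deviations
# above the population mean, so 1 SD = 1000/6 millihelens.

HELEN_IN_SD = 6.0  # Helen of Troy's assumed z-score

def millihelens(z_score: float) -> float:
    """Convert a beauty rating, given as standard deviations above
    the population mean, into millihelens."""
    return z_score / HELEN_IN_SD * 1000

print(millihelens(6.0))  # Helen herself: 1000.0
print(millihelens(0.0))  # a perfectly average person: 0.0
print(millihelens(3.0))  # halfway to Helen: 500.0
```

Note that negative millihelens fall out naturally for below-average ratings, which a raw [0, 1] truth value would have to handle some other way.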
If there’s a better way of representing subjective trueness than real numbers between 0 and 1, I imagine lots of people would be interested in hearing it.
That’s still creating a unit of measurement; it just uses protocols that prime it with respect to one person rather than a physical object. It doesn’t require a concept of fractional truth, just regular old measurement, probability, and interpolation.
Why don’t you spend some time more precisely developing the formalism… oh, wait

“how can this be treated formally? I say, to heck with it.”

That’s why.
I don’t think it’s fair to demand a full explanation of a topic that’s been around for over two decades (though a link to an online treatment would have been nice). Warrigal didn’t ‘come up with’ fractional values for truth. It’s a concept that’s been around (central?) in Eastern philosophy for centuries if not millennia, but was more-or-less exiled from Western philosophy by Aristotle’s Law of the Excluded Middle.
Fuzzy logic has proven itself very useful in control systems and in AI, because it matches the way people think about the world. Take Hemingway’s challenge to “write one true [factual] sentence” (for which you would then need to show 100% exact correspondence between words and molecules in all relevant situations), and one’s perspective can shift to seeing all facts as only partially true, i.e., with a truth value in [0, 1].
The statement “snow is white” is true if and only if snow is white, but you still have to define “snow” and “white”. How far from 100% even reflection of the entire visible spectrum can you go before “white” becomes “off-white”? How much can snow melt before it becomes “slush”? How much dissolved salt can it contain before it’s no longer “snow”? Is it still “snow” if it contains purple food colouring?
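The “white” question can be made concrete with a fuzzy membership function over reflectance. This is only a sketch; the 70%–100% breakpoints and the linear ramp are arbitrary illustrative choices, not measured perceptual data:

```python
def whiteness(reflectance: float) -> float:
    """Fuzzy membership of 'white' as a function of average
    visible-spectrum reflectance in [0, 1].

    Illustrative breakpoints: at or below 0.7 reflectance counts as
    not white at all, 1.0 is fully white, and membership ramps
    linearly in between."""
    if reflectance <= 0.7:
        return 0.0
    if reflectance >= 1.0:
        return 1.0
    return (reflectance - 0.7) / 0.3

print(round(whiteness(1.00), 3))  # fresh snow: 1.0
print(round(whiteness(0.85), 3))  # off-white: 0.5
print(round(whiteness(0.60), 3))  # grey slush: 0.0
```

The point is that “white” stops being a yes/no predicate and becomes a degree, so “off-white” is simply a region where membership is strictly between 0 and 1.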
The same analysis of most concepts reveals we inherently think in fuzzy terms. (This is why court cases take so damn long to pick between the binary values of “guilty” and “not guilty”, when the answer is almost always “partially guilty”.) In fuzzy systems, concepts like “adult” (age of consent), “alive” (cryonics), “person” (abortion), all become scalar variables defined over n dimensions (usually n=1) when they are fed into the equations, and the results are translated back into a single value post-computation. The more usual control system variables are things like “hot”, “closed”, “wet”, “bright”, “fast”, etc., which make the system easier to understand and program than continuous measurements.
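The control-system use described above can be sketched as a minimal Sugeno-style fuzzy controller for a heater: fuzzify temperature into “cold” and “hot”, apply two rules, and defuzzify by weighted average. All breakpoints and output levels here are made up for illustration, not taken from any production system:

```python
# Minimal fuzzy heater controller sketch (Sugeno-style defuzzification).
# Membership breakpoints and rule outputs are illustrative assumptions.

def cold(temp_c: float) -> float:
    """Membership of 'cold': 1 at or below 15 deg C, 0 at or above 25."""
    return min(1.0, max(0.0, (25 - temp_c) / 10))

def hot(temp_c: float) -> float:
    """Membership of 'hot': 0 at or below 15 deg C, 1 at or above 25."""
    return min(1.0, max(0.0, (temp_c - 15) / 10))

def heater_power(temp_c: float) -> float:
    """Two rules: IF cold THEN power = 1.0; IF hot THEN power = 0.0.
    Defuzzify by taking the membership-weighted average of the rule
    outputs, yielding a smooth power curve between the extremes."""
    c, h = cold(temp_c), hot(temp_c)
    if c + h == 0:
        return 0.5  # neither rule fires; arbitrary middle setting
    return (c * 1.0 + h * 0.0) / (c + h)

print(heater_power(5))   # fully cold: 1.0
print(heater_power(30))  # fully hot: 0.0
print(heater_power(20))  # in between: 0.5
```

The human-readable rules (“if cold, heat more”) are the programming interface; the continuous measurements only appear inside the membership functions, which is the usability win the paragraph above describes.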
Bart Kosko’s book on the topic is Fuzzy Thinking. He makes some big claims about probability, but he says it boils down to fuzzy logic being just a different way of thinking about the same underlying math. (I don’t know if this gels with the discussion of ‘truth functionalism’ above.) However, this framing prompts patterns of thought that would not otherwise make sense, which can lead to novel and useful results.