> The edges of perhaps most real-world concepts are vague, but there are lots of central cases where the item clearly fits into the concept, on the dimensions that matter. Probably 99% of the time, when my roommate goes and buys a fruit or vegetable, I am not confounded by it not belonging to a known species, or by it being half rotten or having its insides replaced or being several fruits stitched together. The eggplant may be unusually large, or wet, or dusty, or bruised, perhaps more than I realized an eggplant could be. But, for many purposes, I don’t care about most of those dimensions.
>
> Thus, 99% of the time I can glance into the kitchen and make a “known unknown” type of update on the type of fruit-object there or lack thereof; and 1% of the time I see something bizarre, discard my original model, and pick a new question and make a different type of update on that.
It appears you are appealing to rounding: Most concepts are vague, but we should round the partial containment relation to a binary one. Presumably anything which is above 50% eggplant is rounded to 100%, and anything below is rounded to 0%.
And you appear to be saying that in 99% of cases, the vagueness isn’t close to 50% anyway, but closer to 99% or 1%. That may be the case for eggplants, or for many nouns (though not all), but certainly not for many adjectives, like “large” or “wet” or “dusty”. (Or “red”, “rational”, “risky” etc.)
> Presumably anything which is above 50% eggplant is rounded to 100%, and anything below is rounded to 0%.
No, it’s more like what you encounter in digital circuitry. Anything above 90% eggplant is rounded to 100%, anything below 10% eggplant is rounded to 0%, and anything between 10% and 90% is unexpected, out of spec, and triggers a “Wait, what?” and the sort of rethinking I’ve outlined above, which should dissolve the question of “Is it really eggplant?” in favor of “Is it food my roommate is likely to eat?” or whatever new question my underlying purpose suggests, which generally will register as >90% or <10%.
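The threshold scheme described here can be sketched in a few lines (a toy illustration: the 10%/90% cutoffs come from the text, everything else, including the function name, is invented):

```python
def classify(membership: float) -> str:
    """Round a graded membership value to a binary verdict,
    treating the middle band as out of spec."""
    if membership > 0.9:
        return "yes"           # rounded up to 100%
    if membership < 0.1:
        return "no"            # rounded down to 0%
    return "out of spec"       # triggers the "Wait, what?" rethinking

print(classify(0.97))  # yes
print(classify(0.03))  # no
print(classify(0.50))  # out of spec
```

As with the noise margins of a logic gate, the middle band isn’t rounded at all; it signals that the question itself should be replaced.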
> And you appear to be saying that in 99% of cases, the vagueness isn’t close to 50% anyway, but closer to 99% or 1%. That may be the case for eggplants, or for many nouns (though not all), but certainly not for many adjectives, like “large” or “wet” or “dusty”. (Or “red”, “rational”, “risky” etc.)
Do note that the difficulty around vagueness isn’t whether objects in general vary on a particular dimension in a continuous way; rather, it’s whether the objects I’m encountering in practice, and needing to judge on that dimension, yield a bunch of values that are close enough to my cutoff point that it’s difficult for me to decide. Are my clothes dry enough to put away? I don’t need to concern myself with whether they’re “dry” in an abstract general sense. (If I had to communicate with others about it, “dry” = “I touch them and don’t feel any moisture”; “sufficiently dry” = “I would put them away”.)
And, in practice, people often engineer things such that there’s a big margin of error and there usually aren’t any difficult decisions to make whose impact is important. One may pick one’s decision point of “dry enough” to be significantly drier than it “needs” to be, because erring in that direction is less of a problem than the opposite (so that, when I encounter cases in the range of 40-60% “dry enough”, either answer is fine and therefore I pick at random / based on my mood or whatever); and one might follow practices like always leaving clothes hanging up overnight or putting them on a dryer setting that’s reliably more than long enough, so that by the time one checks them, they’re pretty much always on the “dry” side of even that conservative boundary.
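The asymmetric-cost reasoning behind a conservative cutoff can be made concrete (a toy model; all costs and numbers are invented):

```python
# Putting away damp clothes (mildew) is assumed far costlier than
# leaving already-dry clothes hanging a while longer.
COST_PUT_AWAY_DAMP = 10.0
COST_LEAVE_DRY_HANGING = 1.0

def expected_cost(cutoff: float, dryness: float) -> float:
    """Expected cost of applying a cutoff to a borderline item,
    reading `dryness` as the probability the clothes are dry enough."""
    if dryness >= cutoff:                         # decision: put away
        return (1.0 - dryness) * COST_PUT_AWAY_DAMP
    return dryness * COST_LEAVE_DRY_HANGING       # decision: keep hanging

# A symmetric 0.5 cutoff puts a 60%-dry item away at expected cost 4.0;
# a conservative 0.8 cutoff keeps it hanging at expected cost 0.6.
print(expected_cost(0.5, 0.6))   # 4.0
print(expected_cost(0.8, 0.6))   # 0.6
```

With the conservative cutoff, every borderline item lands on the cheap-error side, which is why either answer being “fine” in the 40–60% range is not an accident but a design goal.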
Occasionally, the decision is difficult, and the impact matters. That situation sucks, for humans and machines:

https://en.wikipedia.org/wiki/Buridan%27s_ass#Buridan%27s_principle

Which is why we tend to engineer things to avoid that.
> Anything above 90% eggplant is rounded to 100%, anything below 10% eggplant is rounded to 0%, and anything between 10% and 90% is unexpected, out of spec, and triggers a “Wait, what?” and the sort of rethinking I’ve outlined above, which should dissolve the question of “Is it really eggplant?” in favor of “Is it food my roommate is likely to eat?” or whatever new question my underlying purpose suggests, which generally will register as >90% or <10%.
Note that in the example we never asked the question “Is it really an eggplant?” in the first place, so this isn’t a question for us to dissolve. The question was rather how to update our original belief, or whether to update it at all (leave it unchanged). You are essentially arguing that Bayesian updating only works for beliefs whose vagueness (fuzzy truth value) is >90% or <10%, and that it isn’t applicable for cases between 10% and 90%. So if we have a case with 80% or 20% vagueness, we can’t use the conditionalization rule at all.
This “restricted rounding” solution seems reasonable enough to me, but less than satisfying. First, why not place the boundaries differently? At 80%/20%? 70%/30%? 95%/5%? Heck, why not 50%/50%? It’s not clear where, and on which principles, to draw the line between rounding and ordinary conditionalization. Second, we are arguably throwing information away whenever a case of vagueness falls between the boundaries and we refrain from Bayesian updating. There should be an updating rule that works for all degrees of vagueness, at least so long as we can’t justify a specific choice of rounding boundaries.
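For what it’s worth, one standard candidate for such a rule is Jeffrey conditioning, which generalizes strict conditionalization to evidence held with any degree of belief, if one is willing to read the degree of vagueness as a degree of belief (a minimal sketch; all numbers are invented):

```python
def jeffrey_update(p_h_given_e: float, p_h_given_not_e: float,
                   p_e_new: float) -> float:
    """Jeffrey conditioning:
    P_new(H) = P(H|E) * P_new(E) + P(H|not-E) * P_new(not-E),
    where the evidence E is only believed to degree p_e_new."""
    return p_h_given_e * p_e_new + p_h_given_not_e * (1.0 - p_e_new)

# With p_e_new = 1.0 this reduces to strict conditionalization:
print(jeffrey_update(0.9, 0.2, 1.0))   # 0.9

# An in-between case (e.g. the 80% case above) still goes through,
# instead of being ruled out:
print(jeffrey_update(0.9, 0.2, 0.8))   # ~0.76
```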
> Do note that the difficulty around vagueness isn’t whether objects in general vary on a particular dimension in a continuous way; rather, it’s whether the objects I’m encountering in practice, and needing to judge on that dimension, yield a bunch of values that are close enough to my cutoff point that it’s difficult for me to decide. Are my clothes dry enough to put away? I don’t need to concern myself with whether they’re “dry” in an abstract general sense.
This solution assumes a) that we can only use probability estimates when they are relevant to practical decisions, and b) that cases between 90% and 10% vagueness are never decision-relevant. Even if we grant b), a) poses a significant restriction: it makes Bayesian probability theory a slave of decision theory. Whenever beliefs aren’t decision-relevant and have a vagueness between the boundaries, we wouldn’t be allowed to update at all, e.g. when we are just passively observing evidence, as happens in science, without any instrumental aim beyond updating our beliefs. But arguably it’s decision theory that relies on probability theory, not the other way round: Savage’s and Jeffrey’s decision theories, for instance, both take subjective probabilities as input in order to calculate expected utility.
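The claimed direction of dependence can be illustrated in miniature: an expected-utility calculation consumes subjective probabilities as input rather than producing them (a toy sketch with invented numbers, not Savage’s or Jeffrey’s actual formalism):

```python
def expected_utility(probs, utils):
    """EU of an act: sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in zip(probs, utils))

# The subjective probabilities come first; the decision rule merely
# uses them to rank acts.
beliefs = [0.7, 0.3]                                   # P(rain), P(no rain)
eu_umbrella = expected_utility(beliefs, [2.0, 1.0])    # 1.7
eu_no_umbrella = expected_utility(beliefs, [-5.0, 3.0])  # -2.6
print(eu_umbrella > eu_no_umbrella)                    # True
```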