What if there are no statements involved at all? Say you believe your roommate bought an eggplant, which makes it more likely that there is an eggplant in the kitchen. However, it turns out that what is in the kitchen is an edge case of an eggplant: neither clearly an eggplant nor clearly not an eggplant. How do you update your belief that your roommate bought an eggplant?
I would be surprised if grocery stores sold edge cases… But perhaps it was a farmer’s market or something, perhaps a seller who liked to sell weird things or who grew hybridized plants. I’ll take the case where it’s a fresh vegetable/fruit/whatever thing that looks kind of eggplant-ish.
Anyway, that would generally be determined by: why do I care whether he bought an eggplant? If I just want to make sure he has food, then that thing looks like it counts and that’s good enough for me. If I were going to make a recipe that called for eggplant, and he was supposed to buy one for me, then I’d want to know whether its flesh, its taste, etc., were similar enough to an eggplant to work with the recipe (and depending on how picky the target audience was). If I were studying plants for its own sake, I might want to interrogate him about its genetics (or get the contact info of the seller if he didn’t know). If I wanted to be able to tell someone else what it was, then… the default description is “it’s an edge case of an eggplant”, and ideally I’d be able to call it a “half-eggplant, half-X” and know what X was; how much I care about that information is determined by the context.
I think, in all of these cases, I would decide “Well, it’s kind of an eggplant and kind of not”, and lose interest in the question of whether I would call it an “eggplant” (except in that last case, though personally I’m with Feynman’s dad on not caring too much about the official name of such things) in favor of the underlying question that I cared about. My initial idea, that there would be either a classical eggplant or nothing in the kitchen, turned out to be incoherent in the face of reality, and I dropped the idea in favor of some new approximation to reality that was true and was relevant to my purpose.
What do you know, there’s an Eliezer essay on “dissolving the question”. Though the actual working-through of an example is done in another post (on the question “If a tree falls in a forest...?”).
The problem is that almost all concepts are vague, including “vague” and “exact”. And things often fit a concept to, like, 90% or 10%, instead of being a clear 50% edge case. If none of these cases allows for the application of Bayesian updating, because we “lose interest” in the question of how to update, then conditionalization isn’t applicable to the real world.
The edges of perhaps most real-world concepts are vague, but there are lots of central cases where the item clearly fits into the concept, on the dimensions that matter. Probably 99% of the time, when my roommate goes and buys a fruit or vegetable, I am not confounded by it not belonging to a known species, or by it being half rotten or having its insides replaced or being several fruits stitched together. The eggplant may be unusually large, or wet, or dusty, or bruised, perhaps more than I realized an eggplant could be. But, for many purposes, I don’t care about most of those dimensions.
Thus, 99% of the time I can glance into the kitchen and make a “known unknown” type of update on the type of fruit-object there or lack thereof; and 1% of the time I see something bizarre, discard my original model, and pick a new question and make a different type of update on that.
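To make that “known unknown” update concrete, here is a minimal sketch of ordinary conditionalization on the clear-cut observation, with every prior and likelihood number invented purely for illustration:

```python
# Hypothetical numbers, purely for illustration of the clear-cut (99%) case.
prior_bought = 0.7          # P(roommate bought an eggplant)
p_see_if_bought = 0.95      # P(I see an eggplant in the kitchen | he bought one)
p_see_if_not = 0.05         # P(I see an eggplant in the kitchen | he didn't)

# Ordinary Bayesian conditionalization on "I clearly see an eggplant".
p_see = p_see_if_bought * prior_bought + p_see_if_not * (1 - prior_bought)
posterior_bought = p_see_if_bought * prior_bought / p_see

print(f"P(bought | clearly an eggplant in the kitchen) = {posterior_bought:.3f}")  # ~0.978
```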
It appears you are appealing to rounding: Most concepts are vague, but we should round the partial containment relation to a binary one. Presumably anything which is above 50% eggplant is rounded to 100%, and anything below is rounded to 0%.
And you appear to be saying that in 99% of cases, the vagueness isn’t close to 50% anyway, but closer to 99% or 1%. That may be the case for eggplants, or for many nouns (though not all), but certainly not for many adjectives, like “large” or “wet” or “dusty”. (Or “red”, “rational”, “risky”, etc.)
Presumably anything which is above 50% eggplant is rounded to 100%, and anything below is rounded to 0%.
No, it’s more like what you encounter in digital circuitry. Anything above 90% eggplant is rounded to 100%, anything below 10% eggplant is rounded to 0%, and anything between 10% and 90% is unexpected, out of spec, and triggers a “Wait, what?” and the sort of rethinking I’ve outlined above, which should dissolve the question of “Is it really an eggplant?” in favor of “Is it food my roommate is likely to eat?” or whatever new question my underlying purpose suggests, which generally will register as >90% or <10%.
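A rough sketch of that digital-logic treatment; the 0-1 “fit” score, the function name, and the return strings are my own framing added for illustration, with only the 10%/90% thresholds taken from the analogy:

```python
def handle_concept_fit(fit: float) -> str:
    """Treat degree-of-fit the way digital logic treats voltage levels.

    `fit` is a hypothetical 0-1 score for how well the object fits "eggplant".
    """
    if fit > 0.9:
        return "eggplant"             # in spec: round up to 100%
    if fit < 0.1:
        return "not an eggplant"      # in spec: round down to 0%
    # Out of spec: don't force a rounding; dissolve the question and
    # ask whatever the underlying purpose actually cares about.
    return "wait, what? -- re-ask the question (e.g. 'is it food he will eat?')"
```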
And you appear to be saying that in 99% of cases, the vagueness isn’t close to 50% anyway, but closer to 99% or 1%. That may be the case for eggplants, or for many nouns (though not all), but certainly not for many adjectives, like “large” or “wet” or “dusty”. (Or “red”, “rational”, “risky”, etc.)
Do note that the difficulty around vagueness isn’t whether objects in general vary on a particular dimension in a continuous way; rather, it’s whether the objects I’m encountering in practice, and needing to judge on that dimension, yield a bunch of values that are close enough to my cutoff point that it’s difficult for me to decide. Are my clothes dry enough to put away? I don’t need to concern myself with whether they’re “dry” in an abstract general sense. (If I had to communicate with others about it, “dry” = “I touch them and don’t feel any moisture”; “sufficiently dry” = “I would put them away”.)
And, in practice, people often engineer things so that there’s a big margin of error and there usually aren’t any difficult decisions to make whose impact is important. One may pick one’s decision point of “dry enough” to be significantly drier than it “needs” to be, because erring in that direction is less of a problem than erring in the other (so that, when I encounter cases in the range of 40-60% “dry enough”, either answer is fine and I therefore pick at random, or based on my mood, or whatever). And one might follow practices like always leaving clothes hanging up overnight, or putting them on a dryer setting that’s reliably more than long enough, so that by the time one checks them, they’re pretty much always on the “dry” side of even that conservative boundary.
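A toy sketch of that margin-picking, with both error costs invented; the only point is that asymmetric costs push the cutoff well above 50%:

```python
# Invented costs: the two ways of being wrong are not equally bad.
COST_PUT_AWAY_DAMP = 10.0   # putting away clothes that are actually still damp
COST_WAIT_LONGER = 1.0      # leaving already-dry clothes hanging a bit longer

def put_away_now(p_dry: float) -> bool:
    """Put the clothes away only if that has the lower expected cost."""
    expected_cost_put_away = (1.0 - p_dry) * COST_PUT_AWAY_DAMP
    expected_cost_wait = COST_WAIT_LONGER
    return expected_cost_put_away < expected_cost_wait

# With these numbers the implied decision point is p_dry > 0.9,
# i.e. "significantly drier than it 'needs' to be", so the awkward
# 40-60% region rarely forces a consequential choice.
```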
Occasionally, the decision is difficult, and the impact matters. That situation sucks, for humans and machines:
https://en.wikipedia.org/wiki/Buridan%27s_ass#Buridan%27s_principle
Which is why we tend to engineer things to avoid that.
Anything above 90% eggplant is rounded to 100%, anything below 10% eggplant is rounded to 0%, and anything between 10% and 90% is unexpected, out of spec, and triggers a “Wait, what?” and the sort of rethinking I’ve outlined above, which should dissolve the question of “Is it really an eggplant?” in favor of “Is it food my roommate is likely to eat?” or whatever new question my underlying purpose suggests, which generally will register as >90% or <10%.
Note that in the example we never asked the question “Is it really an eggplant?” in the first place, so this isn’t a question for us to dissolve. The question was rather how to update our original belief, or whether to update it at all (i.e. leave it unchanged). You are essentially arguing that Bayesian updating only works for beliefs whose vagueness (fuzzy truth value) is >90% or <10%, and that Bayesian updating isn’t applicable for cases between 90% and 10%. So if we have a case with 80% or 20% vagueness, we can’t use the conditionalization rule at all.
This “restricted rounding” solution seems reasonable enough to me, but less than satisfying. First, why not place the boundaries differently? Like at 80%/20%? 70%/30%? 95%/5%? Heck, why not 50%/50%? It’s not clear where, and on which principles, to draw the line between using rounding and not using conditionalization. Second, we are arguably throwing information away when we have a case of vagueness between the boundaries and refrain from doing any Bayesian updating. There should be an updating rule that works for all degrees of vagueness, at least so long as we can’t justify specific rounding boundaries like 50%/50%.
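One candidate for such a rule, offered purely as an illustration rather than as a resolution of the question, is Jeffrey-style conditioning, which accepts a graded confidence in the evidence instead of a binary observation; whether a fuzzy degree of “eggplant-ness” may legitimately be treated as a credence in the evidence is itself part of what is in dispute here. The numbers below reuse the invented ones from the earlier sketch:

```python
# Illustrative only: Jeffrey-style conditioning takes any degree of confidence
# in the evidence, not just ~100% or ~0%. The conditionals reuse the invented
# numbers above: P(bought | eggplant) ~= 0.978, P(bought | no eggplant) ~= 0.109.
p_bought_if_eggplant = 0.978
p_bought_if_no_eggplant = 0.109

def jeffrey_update(q_eggplant: float) -> float:
    """New P(bought) when my confidence that the thing counts as an eggplant is q."""
    return (p_bought_if_eggplant * q_eggplant
            + p_bought_if_no_eggplant * (1.0 - q_eggplant))

print(jeffrey_update(0.8))  # works just as well at 80% ...
print(jeffrey_update(0.2))  # ... or 20%, with no rounding boundary needed
```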
Do note that the difficulty around vagueness isn’t whether objects in general vary on a particular dimension in a continuous way; rather, it’s whether the objects I’m encountering in practice, and needing to judge on that dimension, yield a bunch of values that are close enough to my cutoff point that it’s difficult for me to decide. Are my clothes dry enough to put away? I don’t need to concern myself with whether they’re “dry” in an abstract general sense.
This solution assumes a) that we can only use probability estimates when they are relevant to practical decisions, and b) that cases between 90% and 10% vagueness are never decision relevant. Even if we assume b) is true, a) poses a significant restriction. It makes Bayesian probability theory a slave of decision theory. Whenever beliefs aren’t decision relevant and have a vagueness between the boundaries, we wouldn’t be allowed to do any updating, e.g. when we are just passively observing evidence, as happens in science, without any instrumental intention behind our observations other than updating our beliefs. But arguably it’s decision theory that relies on probability theory, not the other way round; e.g. Savage’s and Jeffrey’s decision theories both use subjective probabilities as input in order to calculate expected utility.
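As a minimal sketch of that direction of dependence: a Savage- or Jeffrey-style expected-utility calculation consumes subjective probabilities as input. The action, outcomes, and numbers below are all invented:

```python
# Invented example: subjective probabilities are the input to expected utility,
# illustrating that decision theory presupposes probability, not the reverse.
def expected_utility(outcomes: list[tuple[float, float]]) -> float:
    """Each outcome is a (subjective probability, utility) pair."""
    return sum(p * u for p, u in outcomes)

# Should I plan to cook the eggplant recipe tonight?
cook = [(0.8, 10.0),   # he did buy a usable eggplant: tasty dinner
        (0.2, -5.0)]   # he didn't: wasted prep and a grocery run
skip = [(1.0, 2.0)]    # make something else: modest but certain payoff

print(expected_utility(cook), expected_utility(skip))  # 7.0 vs 2.0
```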