This post seems made to order for applying recently acquired knowledge. If I come across as pedantic, please attribute that to a learner’s thrill. From Probability Theory: The Logic of Science:
“Seeing is inference from incomplete information.” -- E. T. Jaynes
Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill to approach the question.
Alice and Bob are faced with a scenario of decision under uncertainty. Probability theory and decision theory are the normative frameworks that apply there. All the information you’ve given is symmetrical, favoring neither choice over the other.
Should Alice or Bob do anything at all? That depends on the consequences to them of guessing one way or the other, or of not guessing at all. If the outcomes are equally good (or equally bad), guessing randomly is optimal.
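A minimal sketch of that indifference, with invented payoff numbers (nothing below comes from the scenario itself): under a symmetric prior, circling Red and circling Blue have identical expected utility, whatever the payoffs for being right or wrong.

```python
# Sketch: expected utility of guessing under a symmetric prior.
# The payoff numbers are invented; only their symmetry matters.
prior = {"red": 0.5, "blue": 0.5}  # nothing favors either color

def expected_utility(guess, u_correct=1.0, u_wrong=-1.0):
    """Average payoff of circling `guess`, weighted by the prior."""
    return sum(p * (u_correct if color == guess else u_wrong)
               for color, p in prior.items())

print(expected_utility("red"))   # 0.0
print(expected_utility("blue"))  # 0.0 -- identical, so a random guess is optimal
```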
Should they act differently? There’s nothing in the information you’ve provided that seems to break the symmetry in uncertainty, so I’d say no.
Should they circle more than one color? … And other variants: you’ve given no reasons to prefer one outcome to another, so in general we can’t say how they should act.
If Alice and Bob could coordinate? They would (as far as I can tell from the information given) have no more definite information by pooling their knowledge than they have separately.
Very well put, Morendil. The decision one should make here depends on the consequences of erring one way or the other, and so there’s insufficient information. One quibble, though:
Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from
It’s true, but I don’t think there’s any such thing as “adequate data” to compare it to. In a sense, all data is inadequate. David MacKay’s cardinal rule of information theory is, “To make inferences, you have to make assumptions.” No matter how much data you get, it’s still building on a prior; the data must be interpreted in light of that prior.
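A toy illustration of that rule (the numbers are mine, not MacKay’s): push the same likelihoods through two different priors and you get two different posteriors, so the data never speaks entirely for itself.

```python
# Sketch: identical evidence, different priors, different conclusions.
def posterior(prior_h, like_h, like_not_h):
    """Bayes' rule for a binary hypothesis H after one observation."""
    joint_h = prior_h * like_h
    joint_not_h = (1 - prior_h) * like_not_h
    return joint_h / (joint_h + joint_not_h)

# The observation is three times likelier under H than under not-H.
print(posterior(0.50, 0.6, 0.2))  # ~0.75: the data looks decisive
print(posterior(0.05, 0.6, 0.2))  # ~0.14: same data, skeptical prior
```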
Human cognition has been refined over evolutionary history to start from very good priors, which allow it to make very accurate inferences from minimal data; you have to go out of your way to find the places where those priors point in the wrong direction, as in optical illusions.
I wouldn’t call it a quibble: I agree. There is a lovely tension between the idea that all perception, not just seeing, is “inference from incomplete information”, and the Peripatetic axiom that “nothing is in the intellect that was not first in the senses”.
The only way to have complete information is to be Laplace’s demon. No one else has truly “adequate data”, and all knowledge is in that sense uncertain; nevertheless, inference does work pretty well. (So well that it sure feels as if logic need not have been “first in the senses”, even though it is a form of knowledge and should therefore be to some extent uncertain… the epistemology, it burns us!)
Your usual sensory information is inadequate data. You’re dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill to approach the question.
Agreed. Brains-in-vats was one of the original questions I was pondering, and narrowing the specific questions down to goofy sensory data is what produced the two scenarios.
Should they act differently? There’s nothing in the information you’ve provided that seems to break the symmetry in uncertainty, so I’d say no.
What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counterintuitive that more information did not help him choose an action. Is it better to think of Bob as having no more information than Alice?
Adding a memory of Blue to Alice seems like adding information, and it provides a clear action. Then adding a memory of Red as well removes the clear action. Is this because there is now doubt about the previous information? Or…?
Should they circle more than one color? … And other variants: you’ve given no reasons to prefer one outcome to another, so in general we can’t say how they should act.
Why wouldn’t Bob circle both Red and Blue if given the option?
What I find interesting is that Bob has more information than Alice but is stuck with the same problem.
Yes, it seems that Bob has more information than Alice.
This is perhaps a good context to consider the supposed DIKW hierarchy: data < information < knowledge < wisdom. Or the related observation from Bateson that information is “a difference that makes a difference”.
We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it’s a difference that makes no difference.
Is this because there is now doubt about the previous information?
“Doubt” is data, too (or what Jaynes would call “prior information”). Give Alice a memory of a blue ball, but at the same time give her an unspecified reason to doubt her senses, so that she reasons, “I recall a blue ball, but I don’t want to take that into account.” This has the same effect as giving Bob conflicting memories.
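To make the equivalence concrete, here is a sketch under an assumed memory model (my numbers, not from the scenario): each memory independently reports the true color with probability r. A single fully discounted memory (r = 0.5) and a pair of trusted but conflicting memories leave exactly the same posterior, namely the prior.

```python
# Sketch: memories as noisy, independent reports of the true color.
def posterior_blue(memories, r, prior_blue=0.5):
    """P(true color is blue | remembered colors), by Bayes' rule.
    Each memory is assumed correct with probability r, independently."""
    def likelihood(true_color):
        p = 1.0
        for m in memories:
            p *= r if m == true_color else (1 - r)
        return p
    joint_blue = prior_blue * likelihood("blue")
    joint_red = (1 - prior_blue) * likelihood("red")
    return joint_blue / (joint_blue + joint_red)

print(posterior_blue(["blue"], r=0.9))         # 0.9: trusted memory, clear action
print(posterior_blue(["blue"], r=0.5))         # 0.5: fully doubted memory
print(posterior_blue(["blue", "red"], r=0.9))  # 0.5: Bob's conflict cancels out
```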
We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it’s a difference that makes no difference.
Okay, that makes sense to me.
Give Alice a memory of a blue ball, but at the same time give her an unspecified reason to doubt her senses, so that she reasons, “I recall a blue ball, but I don’t want to take that into account.” This has the same effect as giving Bob conflicting memories.
Ah, okay, that makes a piece of the puzzle click into place.
In DIKW terms, what happens when we add Blue to Alice? When we later add Red? My hunch is that the label on the data simply changes as the set of data becomes useful or useless.
Also, would anything change if we add “Green” to Bob’s choice list? My guess is that it would, because Bob’s memories of Red and Blue are useful in asking about Green. Specifically, there is no memory of Green, and there are memories of Red and Blue.
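One way to check that guess, extending the same assumed noisy-memory model to three colors (again, my model, not part of the scenario): each memory reports the true color with probability r and otherwise a color drawn uniformly from the palette. The conflicting Red and Blue memories then do shift probability away from Green, even though they still cannot separate Red from Blue.

```python
# Sketch: three options, uniform prior, memories as noisy reports.
COLORS = ["red", "blue", "green"]

def posterior(memories, r):
    """Assumed model: a memory matches the true color with probability r,
    and is otherwise drawn uniformly from COLORS."""
    def likelihood(true_color):
        p = 1.0
        for m in memories:
            p *= (r + (1 - r) / 3) if m == true_color else (1 - r) / 3
        return p
    joints = {c: likelihood(c) / 3 for c in COLORS}
    total = sum(joints.values())
    return {c: j / total for c, j in joints.items()}

# Bob's conflicting memories still count against green:
print(posterior(["red", "blue"], r=0.8))
# -> red ~0.48, blue ~0.48, green ~0.04
```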
What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counterintuitive that more information did not help him choose an action. Is it better to think of Bob as having no more information than Alice?
The way you’ve set the question up, Bob doesn’t have any more relevant or useful information than Alice. They are both faced with only two apparently mutually exclusive options (red or blue), and you have not provided any information about how the test is scored or why either of them should prefer answering it to not answering it. Since Bob’s two memories are logically inconsistent, he does not actually have any more relevant information than Alice, and so there should not be anything counterintuitive about the extra data failing to change his probabilities.
Adding a memory of Blue to Alice seems like adding information, and it provides a clear action. Then adding a memory of Red as well removes the clear action. Is this because there is now doubt about the previous information? Or…?
There’s other information implicit in the decision that you are not accounting for. Alice has a set of background beliefs and assumptions, one of which is probably that her memory generally correlates with true facts about external reality. On discovering logical inconsistencies in her memory, she has to revise her beliefs about its reliability and change how she weights remembered facts as evidence. You can’t simply ignore the implicit background knowledge that provides the context for the agents’ decision making when considering how they update in the light of new evidence.
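A sketch of that revision under a deliberately simple two-hypothesis model (all numbers invented): Alice’s memory is either fairly reliable (each memory correct with probability 0.9) or barely informative (0.5). Two independent memories of one binary fact contradict each other when exactly one of them is correct, which is much likelier under the unreliable hypothesis, so discovering the contradiction drags her confidence in memory down.

```python
# Sketch: contradictory memories as evidence about memory reliability itself.
reliabilities = {"reliable": 0.9, "unreliable": 0.5}  # P(a memory is correct)
prior = {"reliable": 0.5, "unreliable": 0.5}          # invented starting point

def p_contradiction(r):
    """Chance that two independent memories of a binary fact disagree:
    exactly one of the two is correct."""
    return 2 * r * (1 - r)

joints = {h: prior[h] * p_contradiction(r) for h, r in reliabilities.items()}
total = sum(joints.values())
for h, j in joints.items():
    print(h, round(j / total, 2))
# reliable 0.26, unreliable 0.74: remembered facts now carry less weight
```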
Why wouldn’t Bob circle both Red and Blue if given the option?
You haven’t given enough context for anyone to answer this question. When confronted with the multiple-choice question, Bob may come up with a theory about what the existence of the question implies. If he hasn’t been given any specific reason to believe particular rules govern how his answer is scored, then he will have to fall back on his background knowledge about what kinds of agents might set him such a question and what their motivations and agendas might be. That will play into his decision about how to act.
Interesting.