How to check that you aren’t a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind’s workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
No, there's no way of knowing that you're not being tricked. If your perception changes, and your perception of your brain changes, that just means the vat is tricking the brain into perceiving exactly that.
The “brain in the vat” idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.
If you are a brain in a vat, then that should alter sensory perception. It shouldn't alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in a vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.
You could posit a brain in a vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain,
This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life-support system.
Although I don’t claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.
Hmm. Your comment has brought to my attention an issue I hadn’t thought of before.
Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input—a subset of the all-encompassing universe of discourse—and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.
Which brings me to my question. Let E be the event “you are a thing that thinks”, or “you exist”. You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E—smaller subsets of the universe of discourse—so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!
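To make the "smaller subsets" point concrete, here is a toy illustration of my own (not something from Aumann, and it assumes the standard partition model of information): take three world states S = {a, b, c}, suppose your information partition is {{a, b}, {c}}, and let E = {a, c}. At a you cannot rule out b, where E fails, so you don't know E there; only at c do you know E. Hence K(E) = {c}, a strictly smaller set than E.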
Or maybe you could take the other horn of the dilemma: claim that you know E but you don’t know that you know it. That would be pretty awesome!
When I was younger, a group of my friends started teasing others because they didn’t know the Hindu-Arabic number system. In reality, of course, they did know it, but they didn’t know that they knew it—that was the joke.
My question is, do you also know that K(E)? K(K(E))?
I have a sensory/gut experience of being a thinking being, or, as you put it, E.
Based on that experience, I develop the abstract belief that I exist, i.e., K(E).
By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.
So I like the distinction between E and K(E), but I’m not sure what insights further recursion is supposed to provide.
I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, my following logic doesn't apply.
I would argue that K(K(E)) is actually a necessary condition for K(E), because if I don't know that I know proposition A, then I don't know proposition A.
Edit/Revised: I think all you have to do is realize that "K(K(A)) false" permits "K(A) false". At first I had a little proof, but now it seems redundant, so I deleted it.
So I guess I disagree: I think the iterations K(K(...)) are actually weaker statements, which are necessary for K(A) to be achieved. Consequently, I don't see how you can learn anything beyond K(A).
K(A) is always a stronger statement than A because if you know K(A) you necessarily know A. (To get the terms clear: a “strong” statement corresponds to a smaller set of world states than a “weak” one.) It is debatable whether K(K(A)) is always equivalent to K(A) for human beings. I need to think about it more.
The formal definition K(E) = {s \in S | P(s) \subset E}, where P is a partition of S and P(s) is the cell of the partition containing s, ensures that K(K(E)) = K(E). It's easy to see: if s \in K(E), then P(s) \subset E; since every t \in P(s) has P(t) = P(s) \subset E, we get P(s) \subset K(E), and thus s \in K(K(E)). Similarly for s \notin K(E).
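For what it's worth, here is a minimal Python sketch of that definition (my own illustration, not part of the comment above; the three-state example is hypothetical), which checks the idempotence claim numerically:

def K(event, partition):
    # K(E) = {s | P(s) is a subset of E}, where P(s) is the partition cell containing s
    cell = {s: frozenset(c) for c in partition for s in c}
    return {s for s in cell if cell[s] <= event}

# Hypothetical three-state example: the agent cannot distinguish "a" from "b".
partition = [{"a", "b"}, {"c"}]
E = {"a", "c"}
KE = K(E, partition)
print(KE)                       # {'c'} -- strictly smaller than E
print(K(KE, partition) == KE)   # True  -- K(K(E)) = K(E), as claimed

Any finite partition gives the same result: the second application of K adds nothing new.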
As for the informal sense, I don't see much use for K(K(E)) where E is a plain fact: if I am aware that I know E, introspecting on that awareness will provide as many K's as I like and little more. If I am not aware that I know E (a deeply buried memory?), I will become aware of it when I remember it. But if I know that I know some class of facts or rules, that is useful for planning. However, I can't come up with a useful example for K(K(K(E))) and higher.
Addition: Aumann's formalization has limitations: it can't represent false knowledge, memory glitches (when I know that I know something, but I can't remember it), meta-knowledge, or knowledge of rules of any kind (I'm not completely sure about rules).
This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life-support system.
When I've read about the brain-in-the-vat example before, the discussion normally just covers sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up a hole in how this is frequently described.
Although I don’t claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable.
Considering how much philosophy is complete nonsense, I'd think that LWers would be more careful about using the argument that something in philosophy is widely known to be unresolvable. I agree that if, when people talk about the brain-in-the-vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.
People don’t mention anything like altering the brain itself.
Altering the brain itself? The brain itself is the only thing there is to alter. The only things that exist in the brain-in-the-vat example are the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of an alteration of the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this working.
FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement.
The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can’t measure the round-trip signal delay… Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations—like brains in vats—can be detected pretty easily.
Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain really is your brain. It may be hard to implement the brain-in-the-vat scenario, but once implemented, it's absolutely undetectable.