This is the entire point of the brain-in-a-vat idea. It's not that "you could posit it" — you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means the vat imposes that correlation on us through its brain life-support system.
When I've read about the brain-in-the-vat example before, the discussion normally covers only sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up a hole in how this is frequently described.
Although I don't claim to be an expert in philosophy, the brain-in-a-vat example is widely regarded as philosophically unresolvable.
Considering how much philosophy is complete nonsense, I'd think LWers would be more careful about arguing that something in philosophy is widely known to be unresolvable. I agree that if, when people talk about the brain-in-the-vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.
People don’t mention anything like altering the brain itself.
Altering the brain itself? The brain itself is the only thing there is to alter. The only things that exist in the brain-in-the-vat example are the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of alterations of the brain, e.g., by hooked-up electrodes. I really have no idea how else you imagine this working.
FWIW, my original comment talked about a realistic version of the brain in a vat, not the philosophical idealized model. But now that I've thought about it some more, the idealized model seems harder and harder to implement.
The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can't measure the round-trip signal delay… Think about this: what if I went to a doctor and asked for an MRI scan while I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line: I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations — like brains in vats — can be detected pretty easily.
Um, if you're a brain in a vat, then any "brain" you perceive in the real world — say, on a "real world" MRI — is nothing but a fictitious sensory perception that the vat is tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain actually is your brain. It may be hard to implement the brain-in-the-vat scenario, but once implemented, it's absolutely undetectable.