Can someone help me corrupt this wish?
“Give humans control over their own sensory inputs.”
Everyone falls into a coma where they get to control their own individual apparent reality. Meanwhile they all starve to death or run into other problems because nothing about the wish says they need to stay alive.
Doesn’t discontinuation of the sensory experience count as a lack of control?
Well, the wish doesn’t say “give me the ability to control my sensory experience forever”. If you die, your ability to control your body is discontinued, but that doesn’t mean you never had control of it in the first place.
can you expand a little on this?
Suppose that a person with locked-in syndrome wished for voluntary control of their body. Their disorder is completely cured, and they gain the ability to control their body like anyone else. Would you say that their wish wasn’t really granted unless they never die?
personally yes, but I realize this is strange.
Hmm, possibly. But everyone stuck in their own sensory setting with no connection to anyone else is still pretty bad.
You aren’t necessarily stuck anywhere. How the statement “I want to talk to Brian” gets unpacked once the wish has been implemented depends on how “control” gets unpacked. Any statement we make about sensory experiences we wish to have involves control only on one conceptual level. We can’t control what Brian says once we’re talking to him, but we never specified that we wanted control over it either. I think that you wind up with a conflict where you ask for control on the wrong conceptual level, or where two different levels conflict. I’m having trouble coming up with examples though.
And if “I want to talk to Brian” is parsed that way, doesn’t that require telling Brian that someone wants to talk to him, which for at least a few seconds takes away Brian’s control over part of his sensory input?
So a problem is that it would be impossible to know which options to make more obviously available to you. If the action space isn’t screened off, the number of options you have is huge. There’s no way to present these options to a person in a way that satisfies “maximum control”. As soon as we get into suggesting actions, we’re back to the problem of optimizing for what makes humans happy.
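To put a toy number on how fast that option space blows up, here's a rough sketch in Python. The 10x10 patch and 8 intensity levels are made-up illustrative parameters, not anything implied by the wish itself:

```python
# Toy illustration: even a tiny, heavily discretized "sensory field" has far
# too many possible settings to ever present as an explicit menu of options.
# The 10x10 patch and 8 intensity levels are arbitrary, made-up parameters.

pixels = 10 * 10          # a 10x10 patch of visual field
intensity_levels = 8      # possible brightness values per pixel

options = intensity_levels ** pixels
print(f"Distinct settings for one 10x10 patch: {options:.3e}")
# ~2.0e90 distinct settings: unlistable as a menu, and this ignores color,
# sound, touch, smell, and the fact that the input changes over time.
```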
This is highly helpful BTW.
None of that control is automated, and this manual control is the only source of input.
hahaha please specify wavelengths of light that will hit each receptor. Very good.
Exactly! It’d be pretty sucky.
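For rough scale on the “specify the wavelength hitting each receptor” joke, here's a back-of-envelope sketch in Python. The receptor counts are approximate commonly cited figures, and the 30-updates-per-second rate is an assumption picked only for illustration:

```python
# Back-of-envelope: how many numbers you'd have to specify by hand if all
# visual input were manually controlled. Receptor counts are approximate
# commonly cited figures; the update rate is an assumed parameter.

rods_per_eye = 120_000_000
cones_per_eye = 6_000_000
eyes = 2
updates_per_second = 30   # assumed "frame rate" for manual control

receptors = (rods_per_eye + cones_per_eye) * eyes
values_per_second = receptors * updates_per_second

print(f"Receptors to specify: {receptors:,}")
print(f"Wavelength values to enter per second: {values_per_second:,}")
# ~252 million receptors and ~7.6 billion numbers every second, entered by
# hand, and that's only vision (no hearing, touch, smell, or proprioception).
```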