Hmm, possibly. But everyone being stuck in their own sensory setting, with no connection to anyone else, is still pretty bad.
You aren’t necessarily stuck anywhere. How the statement “I want to talk to Brian” gets unpacked once the wish has been implemented depends on how “control” gets unpacked. Any statement we make about sensory experiences we wish to have involves control on only one conceptual level. We can’t control what Brian says once we’re talking to him, but we never specified that we wanted control over that either. I think you wind up with a conflict when you ask for control on the wrong conceptual level, or when two different levels conflict. I’m having trouble coming up with examples, though.
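Maybe a toy sketch makes the levels point concrete (this is just my own framing, and all the names here are invented):

```python
# Toy sketch only: model a wish as constraints on some "levels" of a
# sensory situation, with every level it says nothing about left
# unconstrained. An unconstrained level is one we never asked to control.

wish = {
    "who_i_am_talking_to": "Brian",          # the level the wish pins down
    "what_brian_says": None,                 # unspecified, so not controlled
    "where_the_conversation_happens": None,  # also unspecified
}

def uncontrolled_levels(wish):
    """Levels the wish says nothing about; whatever grants the wish is free here."""
    return [level for level, value in wish.items() if value is None]

print(uncontrolled_levels(wish))
# ['what_brian_says', 'where_the_conversation_happens']
```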
And if “I want to talk to Brian” is parsed that way, doesn’t that require telling Brian that someone wants to talk to him, which for at least a few seconds takes away Brian’s control over part of his sensory input?
So a problem is that it would be impossible to know which options to make more obviously available to you. If the action space isn’t screened off, the number of options you have is huge. There’s no way to present all of those options to a person in a way that satisfies “maximum control”. And as soon as we get into suggesting actions, we’re back to the problem of optimizing for what makes humans happy.
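To put a rough number on “huge” (the figures below are made up, purely to show the blow-up):

```python
# Toy arithmetic only: if nothing is screened off, the number of distinct
# sensory configurations grows exponentially with the number of aspects a
# person could in principle control.

controllable_aspects = 40   # hypothetical count of independently settable aspects
settings_per_aspect = 10    # hypothetical choices per aspect

distinct_options = settings_per_aspect ** controllable_aspects
print(f"{distinct_options:.3e} distinct configurations")  # ~1.000e+40
```

Even with deliberately small numbers, that isn’t presentable as a menu, and any rule for trimming it down to something presentable is already a choice about which options matter.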
This is highly helpful BTW.