Caveat: I hadn’t thought all these things through as much at the time, so there are ideas I’m developing during this conversation.
So I’d see this world as maintaining the possibility of surprise and objections. They could have informed Grant of how they would drop the information on him. Instead, they expected it to come up at some point (which it did), and, depending on his reaction at the time (which could vary with how it was brought up), they would adjust their interaction with him. This also gives people the possibility of manipulating the interaction, as the lady talking with him did. People still have choices, and those choices have consequences; not everything is pre-ordained.
And I’m not arguing that the world will become perfectly enlightened and aligned with my values (or the extended cluster of my values) by 2064, just that AIs will have the tools to achieve their goals. Religious ideologies change all the time, and economic power is one strong force driving that change (note how the truly stable religious communities only work by keeping themselves at the Dunbar number). When it becomes exceedingly expensive not to be an upload, while all the uploads run cognitive circles around you and boast of properly stable religious communities of their own, the temptation to join will be large. And the possibility of anonymous forks can be offered then or later.
That works fine, though my problem with it is that it makes the whole interaction weird, since there’s no reasonable way the person bringing up that information could have known it. It makes things slightly strange in retrospect.

I also have some issue with the idea of people still having “choice” in any meaningful way in the world of the powers. It reminds me of the predictor in Newcomb’s problem: if humans are utterly predictable to them, then any appearance of a choice that wasn’t made for you by the AI seems impossible, with the possible exception of sufficiently strong precommitments made pre-singularity. When you understand perfectly how every action will affect a human’s causal mental process, there’s no such thing as choosing not to interfere; you are forced to choose exactly which final outcome you want. I could easily see this being the case in your scenario, but I’m not sure it is.
As for oppressive cultures, making their lives expensive isn’t likely to do much good. Starving them out would be a bad idea for obvious reasons (PR, mainly), and denying them luxuries is of limited use as well. When they probably view the powers as demons, using subtle methods to convince them to join you seems pretty ineffective.

Also, while these cultures would normally change over time, the fact that the outside world is totally evil as far as they’re concerned is going to make them extremely insular and may make them more extreme. As for the uploads making them jealous, that’s kind of laughable, given that they probably view uploads as demons or puppets of the devil anyway.

Any negotiation with a culture that views you as literally the devil is probably doomed to failure. Also keep in mind that even if you can get some people out of these communities, the rest of the community will only get worse via evaporative cooling: http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/
Keep in mind, too, that they will have a pretty high birthrate in all likelihood, so they can replace some losses.