Assuming humans don’t want the AI to make new people simply because they’re easier to maximize, then if it created such a person, everyone on Earth would view this negatively and their well-being would drop.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people, the better. The easier it is to make these people happy, the better.
Indeed, it’s difficult to say precisely; that’s why I used what we can do now as an analogy. I can’t really rewire a person’s values at all except through persuasion or other such methods.
An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.
Even our best neuroscientists can’t do that, unless I’m ignorant of some profound advances.
Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone’s values is not difficult, and I don’t think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn’t convinced people to want diamonds.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people, the better. The easier it is to make these people happy, the better.
You must also consider that well-being need not be defined as a positive function. Even if it wasn’t, if the gain from adding a person was less than the drop in the well-being of others, it wouldn’t be beneficial unless the AI was able, without being prevented, to create many more such people.
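To put rough numbers on that trade-off, here is a minimal sketch (Python, with made-up figures, assuming a simple additive model of total well-being): each new easy-to-satisfy person counts as a gain, the existing population’s disapproval counts as a one-time drop, and the move only comes out ahead if the AI can repeat it many times without being prevented.

    # Toy model of the trade-off described above; all numbers are hypothetical.
    def net_change(n_new_people, gain_per_new_person, drop_for_existing_population):
        # Change in total well-being if the AI creates n easy-to-satisfy people,
        # given the (one-time) drop suffered by the existing population.
        return n_new_people * gain_per_new_person - drop_for_existing_population

    print(net_change(1, 5.0, 1000.0))    # -995.0 -> one such person is a net loss
    print(net_change(500, 5.0, 1000.0))  # 1500.0 -> only pays off at scale, if unprevented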
An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.
I’m sure it’d be better than me (unless I’m also heavily augmented by technology, but we can avoid that issue for now). On what grounds can you say that it’d be able to persuade me of anything it wants? Intelligence doesn’t mean you can do anything, and I think this needs to be justified.
Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone’s values is not difficult, and I don’t think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn’t convinced people to want diamonds.
I know they’re mere mortals. We’re operating under the assumption that the AI’s methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect. The point of the assumption is to ask what the AI could do without more direct manipulation. To that end, only persuasion has been offered, and as I’ve stated, I’m not seeing a compelling argument for why an AI could persuade anyone of anything.
Even if it wasn’t, if the gain from adding a person was less than the drop in the well-being of others, it wouldn’t be beneficial unless the AI was able, without being prevented, to create many more such people.
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
We’re operating under the assumption that the AI’s methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect.
If you allow it to use the same tools but better, it will be enough. If you don’t, it’s likely to only try to do things humans would do, on the basis that humans aren’t smart enough to do what they really want done.
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
That’s not my point. The point is that people aren’t going to be happy if an AI starts making people who are easier to maximize, for the sole reason that they’re easier to maximize. The very fact that we are discussing hypotheticals in which doing so counts as a problem suggests that we would see it as one.
If you allow it to use the same tools but better, it will be enough. If you don’t, it’s likely to only try to do things humans would do, on the basis that humans aren’t smart enough to do what they really want done.
You seem to be trying to break the hypothetical assumption on the basis that I have not specified complete criteria that would prevent an AI from rewiring the human brain. I’m not interested in trying to find a set of rules that would prevent an AI from rewiring human brains (and I never tried to provide any; that’s why it’s called an assumption), because I’m not posing it as a solution to the problem. I’ve made this assumption to try to generate discussion of all the places where it will break down, since typically discussion seems to stop at “it will rewire us”. Asserting “yeah, but it would rewire us because you haven’t strongly specified how it couldn’t” really isn’t relevant to what I’m asking, since I’m trying to get specifically at what it could do besides that.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people, the better. The easier it is to make these people happy, the better.
The more people in what? Any particular moment in time? The complete timeline of any given Everett branch? The whole multiverse?
Between an Everett branch of 10 billion people, and ten Everett branches of 1 billion people each, which do you prefer?
Between 10 billion people who live in the same century, and 1 billion people per century over a span of ten centuries, which do you prefer?
The whole multiverse.
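On a purely totalist reading of that answer (my assumption here, and ignoring branch weights), the two comparisons above come out equal; a toy sketch in Python, with hypothetical numbers:

    # Total well-being summed over every person, in every branch, at every time.
    def total_wellbeing(populations, wellbeing_per_person=1.0):
        # populations: one count per (branch, century) cell
        return sum(pop * wellbeing_per_person for pop in populations)

    # One branch of 10 billion vs. ten branches of 1 billion each:
    print(total_wellbeing([10e9]) == total_wellbeing([1e9] * 10))  # True
    # 10 billion in one century vs. 1 billion per century for ten centuries:
    print(total_wellbeing([10e9]) == total_wellbeing([1e9] * 10))  # True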