Even if it wasn’t, if the gain from adding a person was less than the drop in well-being of everyone else, it wouldn’t be beneficial unless the AI was able to, without being prevented, create many more such people.
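A minimal sketch of that tradeoff, assuming one reading of the claim: the well-being hit to everyone already alive is roughly a fixed total cost, while each new person adds a fixed amount of utility. All of the numbers below are made up for illustration, not taken from the discussion.

```python
# Toy model, purely illustrative: diverting resources to new people
# costs the existing population a roughly fixed total amount D of
# well-being, while each new (easy-to-satisfy) person contributes g.
D = 100.0   # assumed total drop in well-being of everyone already alive
g = 10.0    # assumed utility gained per added person

def net_change(n_new_people):
    """Net change in total utility from adding n_new_people."""
    return n_new_people * g - D

print(net_change(1))    # -90.0: a single addition is a net loss
print(net_change(50))   # 400.0: enough additions flip the sign
```

Under these assumptions a single addition isn’t beneficial, but the sign flips once the AI can create enough such people, which is the “unless it can create many more such people” clause.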
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
We’re operating under the assumption that the AI’s methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect.
If you allow it to use the same tools but better, that will be enough. If you don’t, it’s likely to only try to do things humans would do, on the basis that humans aren’t smart enough to do what they really want done.
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
That’s not my point. The point is that people aren’t going to be happy if an AI starts making people who are easier to maximize, for the sole reason that they’re easier to maximize. The very fact that we’re discussing hypotheticals in which doing so is considered a problem already suggests that we would see it as one.
If you allow it to use the same tools but better, that will be enough. If you don’t, it’s likely to only try to do things humans would do, on the basis that humans aren’t smart enough to do what they really want done.
You seem to be trying to break the hypothetical assumption on the basis that I haven’t specified a complete set of criteria that would prevent an AI from rewiring the human brain. I’m not interested in finding a set of rules that would prevent an AI from rewiring humans’ brains (and I never tried to provide any; that’s why it’s called an assumption), because I’m not posing it as a solution to the problem. I made this assumption to try to generate discussion of all the problems where it will break down, since discussion typically seems to stop at “it will rewire us.” Asserting “yeah, but it would rewire us because you haven’t strongly specified how it couldn’t” really isn’t relevant to what I’m asking, since I’m trying to get specifically at what it could do besides that.