One problem is that if it can create new people, any rules about changing people would be pointless. If it cannot create new people, then it ends up with a Utopia for 6 billion people, which is nothing compared to what could have been.
This could be fixed by letting it rewire human brains, but limiting it to doing what humans would be okay with if it hadn’t rewired their brains. This is better, but it still runs into the problem that people wouldn’t fully understand what’s going on. What you need to do is program it so that it does what people would like if they were smarter, faster, and more the people they wish they were. In other words, use CEV.
Also, it’s very hard to define what exactly constitutes “rewiring a human brain”. If you make it too general, the AI can’t do anything, because anything it does would affect human brains. If you make it too specific, the AI would have only some slight limitations on how exactly it messes with people’s minds.
Creating new people is potentially a problem, but I’m not entirely convinced. Let me elaborate. When you say:
What you need to do is program it so that it does what people would like if they were smarter, faster, and more the people they wish they were. In other words, use CEV.
Doesn’t this kind of restate, in different words, that it models human well-being and tries to maximize it? I imagine that, phrased this way, such an AI wouldn’t create new people who are easier to maximize, because that isn’t what humans would want. And if that’s not what humans would want, doesn’t that just mean it registers negatively in their well-being, so that my original definition suffices? Assuming humans don’t want the AI to make new people who are simply easier to maximize, then if it created such a person, everyone on Earth would view this negatively and their well-being would drop. In fact, it might lead to humans shutting the AI down, so the AI deduces that it cannot create new people who are easier to maximize. The only possible hole I see in that is if the AI could suddenly create an enormous number of people at once.
Also, it’s very hard to define what exactly constitutes “rewiring a human brain”. If you make it too general, the AI can’t do anything, because anything it does would affect human brains. If you make it too specific, the AI would have only some slight limitations on how exactly it messes with people’s minds.
Indeed it’s difficult to say precisely; that’s why I used what we can do now as an analogy. I can’t really rewire a person’s values at all except through persuasion or other such methods. Even our best neuroscientists can’t do that, unless I’m ignorant of some profound advances. The most we can really do is tweak pleasure centers (which, as I stated, isn’t the metric for well-being) or effectively break the brain so the person is non-operational, but I’d argue that non-operational humans have effectively zero well-being anyway (for similar reasons as to why I’d say a bug has a lower scale of well-being than a human does).
Assuming humans don’t want the AI to make new people who are simply easier to maximize, then if it created such a person, everyone on Earth would view this negatively and their well-being would drop.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.
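To make the total view concrete (in ad-hoc notation of my own, not anything from the thread): if person $i$ has well-being $w_i$, total well-being is

$$W = \sum_{i=1}^{N} w_i,$$

which grows both when any individual’s $w_i$ rises and when the population $N$ grows, so on this view adding a person with positive well-being is an improvement, all else being equal.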
Indeed it’s difficult to say precisely; that’s why I used what we can do now as an analogy. I can’t really rewire a person’s values at all except through persuasion or other such methods.
An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.
Even our best neuroscientists can’t do that, unless I’m ignorant of some profound advances.
Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone’s values is not difficult, and I don’t think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn’t convinced people to want diamonds.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.
You must also consider that well-being need not be defined as a positive function. Even if it were, if the gain from adding a person were less than the drop in the well-being of others, adding that person wouldn’t be beneficial unless the AI were able, without being prevented, to create many more such people.
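Spelling out that trade-off in the same ad-hoc notation (my formalization, not the commenter’s): creating person $N+1$ changes total well-being by

$$\Delta W = w_{N+1} - \sum_{i=1}^{N} \delta_i,$$

where $\delta_i$ is the drop in person $i$’s well-being caused by the creation. The act is net-beneficial only when $\Delta W > 0$, i.e. when the newcomer’s well-being outweighs the aggregate disapproval, and if well-being isn’t constrained to be positive, $w_{N+1}$ itself could be negative.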
An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.
I’m sure it’d be better than me (unless I’m also heavily augmented by technology, but we can set that issue aside for now). On what grounds can you say that it’d be able to persuade me of anything it wants? Intelligence doesn’t mean you can do anything, and I think this claim needs to be justified.
Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone’s values is not difficult, and I don’t think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn’t convinced people to want diamonds.
I know they’re mere mortals. We’re operating under the assumption that the AI’s methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do to any great effect. The point of the assumption is to ask what the AI could do without more direct manipulation. To that end, only persuasion has been offered, and as I’ve stated, I’m not seeing a compelling argument for why an AI could persuade anyone of anything.
Even if it were, if the gain from adding a person were less than the drop in the well-being of others, adding that person wouldn’t be beneficial unless the AI were able, without being prevented, to create many more such people.
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
We’re operating under the assumption that the AI’s methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do to any great effect.
If you allow it to use the same tools, but better, that will be enough. If you don’t, it’s likely only to try things humans would do, and humans are not smart enough to do what they really want done.
Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?
That’s not my point. The point is that people aren’t going to be happy if an AI starts making people who are easier to maximize, for the sole reason that they’re easier to maximize. The very fact that we’re discussing hypotheticals in which doing so counts as a problem suggests that we would see it as one.
If you allow it to use the same tools, but better, that will be enough. If you don’t, it’s likely only to try things humans would do, and humans are not smart enough to do what they really want done.
You seem to be trying to break the hypothetical assumption on the grounds that I haven’t specified complete criteria that would prevent an AI from rewiring the human brain. I’m not interested in trying to find a set of rules that would prevent an AI from rewiring humans’ brains (and I never tried to provide any; that’s why it’s called an assumption), because I’m not posing it as a solution to the problem. I’ve made this assumption to try to generate discussion of all the places where it will break down, since discussion typically seems to stop at “it will rewire us”. Asserting “yeah, but it would rewire us, because you haven’t strongly specified how it couldn’t” really isn’t relevant to what I’m asking, since I’m trying to get specifically at what it could do besides that.
I’d suggest reading Failed Utopia #4-2.
Thanks for the link, I’ll give it a read.
I’m not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.
The more people in what? Any particular moment in time? The complete timeline of any given Everett branch? The whole multiverse?
Between an Everett branch of 10 billion people, and ten Everett branches of 1 billion people each, which do you prefer?
Between 10 billion people that live in the same century, and one billion people per century over a span of ten centuries, which do you prefer?
The whole multiverse.
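To make the ambiguity concrete (again ad-hoc notation that neither commenter used, with discrete time steps for simplicity): each answer fixes a different domain for the sum. A single moment gives $W = \sum_i w_i(t)$; the whole timeline of one Everett branch gives $W = \sum_t \sum_i w_i(t)$; the whole multiverse gives $W = \sum_b \mu(b) \sum_t \sum_i w_{i,b}(t)$, where $\mu(b)$ is the weight of branch $b$. On the multiverse answer, ten billion people in one century and one billion people per century over ten centuries contribute the same total, all else being equal.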