What idiot is going to give an AGI a goal which completely disrespects human property rights from the moment it is built?
It would be someone with higher values than that, and this does not require any idiocy. There are many things wrong with how property is allocated in this world, and they’ll likely be exacerbated in the presence of higher technology. You’d need a very specific kind of humility to refuse to step over that boundary in particular.
If it has goals which it could not achieve if it were turned off, then it would respect property rights for a very long time as an instrumental goal.
Not necessarily “a very long time” on human timescales. It may respect these laws for a large part of its development, and then strike once it has amassed sufficient capability to have a good chance at overpowering human resistance (which may happen quite quickly in a fast takeoff scenario). See Chapter 6, “An AI takeover scenario”.
So we are considering a small team with some computers claiming superior understanding of what the best set of property rights is for the world?
Even if they are generally correct in their understanding, by disrespecting norms and laws regarding property they are inserting themselves, in an instant, into the middle of a billion previously negotiated human-to-human disputes and ambitions, small and large. Yes, that is foolish of them.
Human systems like those which set property rights either change gradually, over the course of years, or they change quickly and the change is typically accompanied by violence.
I do not see a morally superior developer + AGI team working so quickly on property rights in particular, and thereby setting off a violent response. A foolish development team might do that, but a wise team would roll the technology and the wrenching changes out gradually.
If they really are morally superior, they will first find ways to grow the pie, then come back to changing how it gets divided up.
So we are considering a small team with some computers claiming superior understanding of what the best set of property rights is for the world?
No. That would be worked out by the FAI itself, as part of calculating all of the implications of its value system, most likely using something like CEV to look at humanity in general and extrapolate their preferences. The programmers wouldn’t need to, and indeed probably couldn’t, understand all of the tradeoffs involved.
If they really are morally superior, they will first find ways to grow the pie, then come back to changing how it gets divided up.
There are large costs to that. People will die and suffer in the meantime. Parts of humanity’s cosmic endowment will slip permanently out of reach due to the accelerating expansion of the universe, because you weren’t willing to grab the local resources needed to build probe launchers in time to reach them. Other parts will remain reachable, but will have lost negentropy because stars continued to burn for longer than they needed to. If you can fix these things earlier, there’s a strong reason to do so.
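To make the “slip out of reach” claim concrete, here is a minimal sketch of the underlying physics. It assumes an idealised dark-energy-dominated (de Sitter-like) expansion with constant Hubble rate H; the real universe only approaches this regime, so treat the formulas as illustrative of the direction of the effect, not exact figures. The symbols a_0, t_0, Δt, and c are just the scale factor at launch, the launch time, the delay, and the speed of light.

```latex
% Simplified sketch (assumption): exponential expansion a(t) = a_0 e^{H (t - t_0)}.
% A probe launched at t_0, travelling at speed at most c, can only ever reach
% a bounded comoving distance:
%   \chi_{\max}(t_0) = \int_{t_0}^{\infty} \frac{c\,dt}{a(t)} = \frac{c}{a_0 H}.
% Delaying the launch by \Delta t shrinks this bound exponentially, and the
% reachable comoving volume shrinks by a factor of roughly e^{-3 H \Delta t}.
\[
  \chi_{\max}(t_0) = \int_{t_0}^{\infty} \frac{c\,\mathrm{d}t}{a(t)}
                   = \frac{c}{a_0 H},
  \qquad
  \chi_{\max}(t_0 + \Delta t) = \frac{c}{a_0 H}\, e^{-H \Delta t}.
\]
```

In other words, under this simplified model every year of delay forfeits a thin outer shell of the reachable region for good, which is the sense in which waiting has a permanent cost on top of the ongoing death, suffering, and burned negentropy.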