And? Most are, and this feature set would be under many levels of designer control.
Most are relatively friendly to those with near-equal power. Consider all the “abusive cop” stories, how children are rarely taken seriously, and the standard line about how power corrupts.
Two observations about this entire train of thought:

- Any argument along the lines of “humanity is generally non-friendly” reflects a pessimistic view of human nature (just an observation).
- Nothing about the idea of a sandbox sim for AI is incompatible with other improvements to make AI more friendly; naturally we’d want to implement those as well.

Consider this an additional safeguard, one that is practical and potentially provably safe (if that is desirable).
I found this labelling distracting, especially since, when we are talking about “Friendly AI”, humans are not even remotely friendly in the relevant sense. It has nothing to do with ‘pessimism’; believing that humans are friendly in that sense would be flat-out wrong.
I like the idea of the sandbox as a purely additional measure, but I wouldn’t remotely consider it safe: not just because a superintelligence may find a bug in the system, but because humans are not secure. I more or less assume that the AI will find a way to convince its creators to release it into the ‘real world’.
Point taken—Friendliness for an AI is a much higher standard than even idealized human morality. Fine. But to get to that Friendliness, you need to define CEV (Coherent Extrapolated Volition) in the first place, so improving humans and evolving them forward is a route towards it.
But again I didn’t mean to imply we need to create perfect human-sims. Not even close. This is an additional measure.
This is an unreasonable leap of faith if the AI doesn’t even believe that there are ‘creators’ in the first place.
Do you believe there are creators?
You do realize you’re suggesting putting an entire civilization into a jar for economic gain because you can, right?
Upvoted for witty reply.
I didn’t consider the moral implications. They are complex.
If you think about it, though, the great future promise of the Singularity for humans is some type of uploading into designed virtual universes (the heaven scenario).
And in our current (admittedly simple) precursors, we have no compunctions about creating sim worlds entirely for our amusement. At some point that would have to change.
I imagine there will probably be much simpler techniques for making safe-enough AI without going to the trouble of making an entire isolated sim world.
However, ultimately making big sim worlds will be one of our main aims, so isolated sims are more relevant for that reason—not because they are the quickest route to safe AI.