Phil Goetz, why should I care what sort of creatures the universe “tends to produce”? What makes this a moral argument that should move me? Do you think that most creatures the universe produces must inevitably evolve to be moved by such an argument?
I stated the reason:
We MUST make this meta-level argument that the universe inherently produces creatures with pretty-valuable values. We have no other way of claiming to be better than pebble-sorters.
I don’t think that we can argue for our framework of ideas from within our framework of ideas. If we continue to insist that we are better than pebble-sorters, we can justify it only by claiming that the processes that led to our existence tend to produce good outcomes, whereas the hypothetical pebble-sorters are drawn from a much larger set of possible beings with a much lower average moral acceptability.
A problem with this is that all sorts of insects and animals exist with horrifying “moral systems”. We might convince ourselves that morals improve as a society becomes more complex. (That’s just an afterthought.)
One possible conclusion (not one that I have reached, but one you might reach if the evidence comes out a certain way) is that the right thing to do is to make no attempt to control the morals of AIs, because general evolutionary processes may be better at designing morals than we are.