Most of these questions are beyond our current ability to answer. We can speculate and counter-speculate, but we don’t know. The immediate barrier to understanding is that we do not know what pleasure and pain, happiness and suffering are, the way that we think we know what a star or a galaxy is.
We have a concept of matter. We have a concept of computation. We have a concept of goal-directed computation. So we can imagine a galaxy of machines, acting according to shared or conflicting utility functions, and constrained by competition and the death of the universe. But we do not know how that would or could feel; we don’t even know that it needs to feel like anything at all. If we imagine the galaxy populated with people, that raises another problem—the possibility of the known range of human experience, including its worst dimensions, being realized many times over. That is a conundrum in itself. But the biggest unknown concerns the forms of experience, and the quality of life, of “godlike AIs” and other such hypothetical entities.
The present reality of the world is that humanity is reaching out for technological power in a thousand ways and in a thousand places. That is the reality that will issue either in catastrophe or in superintelligence. The idea of simply halting that process through cautionary persuasion is futile. To actually stop it, and not just slow it down, would require force. So I think the most constructive attitude towards these doubts about the further future is to see them as input to the process which will create superintelligence. If this superintelligence acts with even an approximation of humaneness, it will be sensitive to such issues, and if it really does embody something like the extrapolated volition of humanity, it will resolve them as we would wish to see them resolved.
Therefore, I propose that your title question—“Should humanity give birth to a galactic civilization?”—should be regarded as a benchmark of progress towards an exact concept of friendliness. A friendly AI should be able to answer that question, and explain its answer; and a formal strategy for friendly AI should be able to explain how its end product—the AI itself—would be capable of answering the question.