The objects in question (super-intelligent AIs) don’t currently exist, so we have no real examples of them to study. One might still want to study them because there seems to be a high chance they will exist. So indirect access seems necessary: conceptual analysis, mathematics, hand-wavy reasoning (that is, reasoning that’s hand-wavy about some things but tries to be non-hand-wavy about at least some others), and reasoning by analogy with non-super-intelligent things like humans, animals, evolution, or contemporary machine learning (where more rigorous reasoning and experiments are possible). This is unfortunate but seems unavoidable. Do you see a way to study super-intelligent AI more rigorously or scientifically?