Are we alone? Did no one ever create a superintelligent AI?
Quite possibly. Someone has to be first, and given how little we understand the barriers to making it up to our level, it shouldn’t be particularly suspicious if that’s us (in our past light-cone, anyway).
Did the AI and its creators go the other way (i.e., turn inward instead of expanding)?
Not likely. You’re going to run out of usable energy at some point, and then you’ll wish you’d turned all of those stars off earlier. It’d take a very specific planning failure for a civilization to paint itself into that particular corner.
Did it already happen, and are we a part or product of it (i.e., a simulation)?
Highly likely, but that’s mostly ignorable for practical purposes. Almost all of the weight of our actions is in the cases where we’re not.
Is it happening right in front of us and we, dumb as a goldfish, can’t see it?
Unlikely. The obvious optimizations would leave definite signatures, and also probably wouldn’t take all that long on an astronomical time scale.
Should these questions, whose answers would certainly shift the probabilities, be part of AI predictions?
It would be hard to use them.
For one, there’s massive noise in our guesses about how hard it is to get from a random planet to a civilization of our level; and as long as you don’t have a good idea of that, not observing alien AGIs tells us very little.
For another, there might be anthropic selection effects. If, for instance, AGIs strongly tend to turn out to be paperclip maximizers, a civilization of our level just wouldn’t survive contact with one, so we can’t observe the contact case.
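To make that selection-effect worry concrete, here is a toy Bayesian sketch. Every number in it (the prior, the contact probability) is invented purely for illustration; the point is only the structure of the update: if contact with a paperclip maximizer leaves no observers, then conditioning on our own survival pushes the likelihood ratio toward 1, and the silence stops being much evidence at all.

```python
# Toy Bayes calculation for the anthropic-selection point above.
# All numbers are hypothetical and chosen only to make the contrast visible;
# this sketches the argument's structure, not an actual estimate.

def posterior(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | evidence) from a prior and the two likelihoods."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

prior_hostile_agi = 0.5   # hypothetical prior that expansionist paperclip-style AGIs are common
p_contact_by_now = 0.9    # hypothetical chance such an AGI would have reached us already

# Naive update: treat "we see no alien AGI" as evidence we would have observed either way.
naive = posterior(
    prior_hostile_agi,
    likelihood_h=1 - p_contact_by_now,  # hostile AGIs common, yet none arrived
    likelihood_not_h=1.0,               # none exist, so of course we see none
)

# Anthropic correction: contact with a paperclip maximizer leaves no observers,
# so *conditional on us being here to look*, "we see no alien AGI" is near-certain
# under both hypotheses and carries almost no information.
anthropic = posterior(
    prior_hostile_agi,
    likelihood_h=1.0,
    likelihood_not_h=1.0,
)

print(f"naive posterior:     {naive:.2f}")      # ~0.09 -- silence looks like strong evidence
print(f"anthropic posterior: {anthropic:.2f}")  # 0.50 -- no update at all
```

The naive version reads the silence as a roughly ten-to-one update against hostile AGIs being common; the anthropically corrected version hands the prior back untouched, which is exactly why the non-observation is so hard to use.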
On that last point, I’ll also admit to being confused about the correct reference class to use here. Even if (purely hypothetically) we had a reason to guess that an alien AGI stood a decent chance not only of implementing a morality acceptable to its makers, but also of being supportive of humanity by our own morals … well, if one of them were already here, that would tell us something, but it would also put us in a position where understanding our own timeline to homegrown AGI development suddenly mattered much less.
Which suggests to me that it might still be a bad idea to use that observation as direct input into our probability estimates, since it would bias the estimate precisely in the class of cases where we most care about its accuracy.