The best empirical estimate we have for the probability that the Earth will not fail is the fraction of Earth-like planets around us that have succeeded. (Zero.)
That said, they don’t seem to have failed after achieving AI, so I don’t know if that really tells us much.
What are you considering “Earth-like planets”? I’m an astronomer and I’m not aware of any (under the usual definition of 0.5-10 Earth masses, with an equilibrium temperature allowing liquid water).
(And I would explain my vote on this post, but doing so in the absence of a request by the poster seems to be a downvotable offense.)
We’ve located enough planets nearby to infer that a large number of planets in our general vicinity could support life.
I’m curious why people hate this comment so much. Is it just that they don’t like hearing that we’re likely to fail?
I’m also curious why most people haven’t upvoted this post by EY. Same reason?
I didn’t downvote your comment, but for me, “they don’t seem to have failed after achieving AI, so I don’t know if that really tells us much” is an overwhelmingly strong objection to drawing any conclusion about existential risk from the Great Silence.
If it’s an overwhelmingly strong objection, why would people downvote me for raising it?
It seems like we have a sample size of zero, since successes are not, by definition or axiom, noticeable. They may happen to be noticeable, but they aren’t required to be. Failures likewise aren’t required to be noticeable. And no Earth-like planets sustaining life, or showing evidence of having once sustained life, have been documented yet. The probability estimate is useless; its margin of error spans the whole range.
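To make the “sample size of zero” point concrete, here’s a minimal sketch under an assumed Beta-Bernoulli framing of my own (nothing in the thread specifies a model): treat “an Earth-like planet visibly succeeds” as a Bernoulli trial and update a uniform prior. With zero documented trials, the posterior is unchanged and its credible interval covers nearly the entire unit interval.

```python
# Minimal sketch (my framing, not the commenter's): update a uniform
# Beta(1, 1) prior on the success probability with zero observations.
# The posterior equals the prior, so the 95% credible interval spans
# almost all of (0, 1): a "total margin of error" in the comment's terms.
from scipy.stats import beta

successes = 0  # no Earth-like planets documented as having succeeded
failures = 0   # ...and none documented as having failed, either
posterior = beta(1 + successes, 1 + failures)  # Beta(1, 1): still uniform

low, high = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval: ({low:.3f}, {high:.3f})")  # (0.025, 0.975)
```

With even a handful of unambiguous observations the interval would tighten sharply; with none, any point estimate is pure prior.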