The difficulty with this question is that we can easily miss "signs" that would be obvious with a better understanding of our world. As an example, imagine that a century from now we have extremely good simulations of the emergence of life and the formation of our solar system, and it turns out that our moon was about 10^-345 unlikely (unless something deliberately tried to produce one), and that the emergence of our life critically depended on the tides. In retrospect, we would say the signs were as obvious as the moon in the sky; we just couldn't catch them before we understood our own emergence better.
Note that I don't actually believe this particular SF scenario (it may come from Isaac Asimov, I'm not sure). The point is that there are many possible scenarios in which our ability to recognize obvious signs, at least in retrospect, critically depends on the state of our science. How could we deal with this kind of Knightian uncertainty?
See Randall Munroe for a more striking explanation of this idea:
https://xkcd.com/638/