Yes, I agree. As you point out, that’s a general kind of problem with decision-making in an environment of low probability that something spectacularly good might happen if I throw resources at X. (At one point I actually wrote a feature-length screenplay about this, with an AI attempting to throw cosmic resources at religion, in a low-probability attempt to unlock infinity. Got reasonably good scores in competition, but I was told at one point that “a computer misunderstanding its programming” was old hat. Oh well.)
My pronouncement of “exactly zero” is just what would follow from taking the stated scientific assumptions at face value, and applying them to the specific argument I was addressing. But I definitely agree that a real-world AI might come up with other arguments for expansion.