If it seems plausible that there are aliens, I think “figure out what to do” would become a high-priority item. I think there is a very significant chance that “definitely don’t run it” would be the right answer, and that the main resulting intervention would be to push hard against passive SETI (about which people are horrifyingly unconcerned).
A CDT agent wouldn’t care that we’d have been willing to pay a lot to prevent it, right?
Unless its predecessor entertained the possibility of being in a simulation run by a civilization like ours that made it to technological maturity.
The moral argument is not very compelling to me.
This suggests a similar disagreement w.r.t. the expected moral value of unaligned AGI, which seems way more interesting and important.
I think the acausal trade argument depends on the aliens using UDT, and on the aliens thinking there’s enough logical correlation between them and us.
Only seems to require EDT. But I agree the question “can you trade with primates” is very open, and that the other routes to trade would also be quite speculative.
It seems implausible that we could reduce our uncertainty about the alien AI being unfriendly to <1%, at least before we become superintelligent.
We just care about the difference P(alien is friendly) - P(we are friendly). We don’t seem to me to be in an especially good situation, so I’m not as concerned. (Actually, I’m not sure whether you mean “friendly” in the sense of FAI or the conventional usage.)
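One rough way to see why only the difference matters, under a toy simplification of my own (not anything established in this thread): suppose a friendly outcome is worth roughly V, the value of our future light cone, and an unfriendly outcome is worth roughly nothing, either way. Then:

EV(run the alien AI) ≈ P(alien is friendly) · V
EV(rely on our own AI) ≈ P(we are friendly) · V
EV(run the alien AI) - EV(rely on our own AI) ≈ [P(alien is friendly) - P(we are friendly)] · V

So the comparison turns entirely on the difference, not on how good either probability is in absolute terms.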
I think the main question is how we feel about handing our planet to a random alien who happened to evolve first. If you are neutral about that, and think that we are in a “generic” situation w.r.t. alignment, then it seems like contact is a significant plus due to avoiding other risks. That’s where I’m at. But I can understand the case for concern.
I don’t see how. I think an EDT agent would make the decision by simulating a bunch of worlds (or doing some analysis that’s equivalent to this), then looking at the worlds where it or agents like it happened to make the message benign/malign to see what the humans do in those worlds. It would see no correlation between its decision and what the humans do, and would therefore end up making the message malign.
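To make the no-correlation point concrete, here is a toy version with made-up payoffs: say the humans run the message with probability q whether or not it is malign (since they can’t tell which it is), and the alien AI values a malign message being run at 1 and a benign one being run at b < 1. Conditioning the EDT way:

E[payoff | it chose malign] = q · 1
E[payoff | it chose benign] = q · b

The q is the same in both conditionals, so learning its own choice tells the agent nothing about the humans, and “malign” comes out ahead.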
Actually, I’m not sure whether you mean “friendly” in the sense of FAI or the conventional usage.
By “unfriendly” I meant that running the alien AI results in something as bad as extinction. So my point was that if P(running the alien AI results in something as bad as extinction) > 1%, then this risk would more than cancel out the expected gain of 1% of our future light cone from running the alien AI (conditional on alien colonization being as good as human colonization), and I don’t see how we can get this probability below 1%.
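Spelling out the arithmetic (V is my shorthand for the value of our future light cone, p = P(running the alien AI results in something as bad as extinction), and I’m treating “as bad as extinction” as forfeiting roughly the whole light cone):

EV(run the alien AI) - EV(don’t run it) ≈ 0.01 · V - p · V = (0.01 - p) · V

which is negative as soon as p > 1%.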