Isn’t this one adequately dissolved by the “anthropics as decision theory” perspective? If I’m programming the robot, I weigh the relevant problem as if there were 256:1:epsilon odds of false positive/serious problem/fatal problem, and thus I’d have it make the final jump if and only if the expectation of better data justified a 1-in-514 chance of mission failure.
Also, I don’t think that your tone at the end adds anything.
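For concreteness, here is a minimal sketch of that decision rule in Python. The 1-in-514 figure is just carried over from above, and the utility numbers are made up for illustration:

```python
# Hypothetical sketch of the decision rule described above.
# The 1-in-514 failure chance comes from the comment; the utility
# numbers below are made up for illustration.

def should_jump(value_of_better_data: float,
                cost_of_mission_failure: float,
                p_failure: float = 1 / 514) -> bool:
    """Jump iff the expected gain from better data outweighs the
    expected loss from the small chance of mission failure."""
    expected_gain = (1 - p_failure) * value_of_better_data
    expected_loss = p_failure * cost_of_mission_failure
    return expected_gain > expected_loss

# Hypothetical utilities: the data is worth 10 units, losing the
# mission costs 1000 units.
print(should_jump(10.0, 1000.0))  # True: 10 * 513/514 > 1000 / 514
```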
Uh, yes, you would program the robot that way. That’s the point.
In this case, where is the issue? If the ship survived 10 jumps, it is probably safe to make another one; the same decision could be made on Earth, which exists in both relevant cases.
With sufficient intelligence, the robot could assign a probability of 1 - eps to the existence of robots from Earth, even if Earth could have developed without a robot-building species. The robot can use its own existence to update probabilities in both cases.
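To make the first point concrete, here is a minimal Bayesian sketch. The 50/50 prior and the 1/2 per-jump survival probability under the "dangerous" hypothesis are assumptions chosen only to illustrate the update:

```python
# Hypothetical Bayesian update after surviving 10 jumps.
# The prior (50/50) and the per-jump survival probability under the
# "dangerous" hypothesis (1/2) are assumptions for illustration only.

prior_safe = 0.5
prior_dangerous = 0.5
p_survive_per_jump_if_dangerous = 0.5
n_jumps_survived = 10

# Likelihood of the evidence (surviving all 10 jumps) under each hypothesis.
likelihood_safe = 1.0
likelihood_dangerous = p_survive_per_jump_if_dangerous ** n_jumps_survived

posterior_safe = (prior_safe * likelihood_safe) / (
    prior_safe * likelihood_safe + prior_dangerous * likelihood_dangerous
)
print(f"P(safe | survived {n_jumps_survived} jumps) = {posterior_safe:.4f}")
# ~0.999: surviving 10 jumps strongly favors the "safe" hypothesis,
# and this calculation could be done just as well from Earth.
```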