I think the protagonist here should have looked at Earth. If there was a technological intelligence on Earth that cared about the state of Jupiter's moons, then it could send rockets there. The most likely scenarios are a disaster bad enough to stop us launching spacecraft, and an AI that only cares about Earth.
A superintelligence should assign non-negligible probability to the outcome that actually happened. Given the tech was available, a space-probe containing an uploaded mind is not that unlikely. If such a probe were a real threat to the AI, it would have already blown up all space-probes on the off chance.
The upper bound given on the amount of harm malicious info can do to you is extremely loose. Malicious info can't do much harm unless the enemy has a good understanding of the particular system they are subverting.
I think the protagonist here should have looked at Earth.
That’s certainly one plan that could have been tried, given a certain amount of outside-view, objective, rational analysis. Of course, one could also say that “Mark Watney should have avoided zapping Pathfinder” or “the comic character Cathy should just stick to her diet”; just because it’s a good plan doesn’t necessarily mean it’s one that an inside-view, subjective, emotional person is capable of thinking up, let alone following through on.
Can you think of anything a person could do, today, to increase the odds that, if they suddenly woke up post-apocalypse with decades of solitary confinement ahead of them, they’d come up with the most-winningest-possible plans for every aspect of their future life?
I think the protagonist here should have looked at Earth.
Agreed. Either there is a superintelligence on Earth that assigns non-negligible probability to another intelligence existing in the solar system, in which case it would send probes out to search for that intelligence (or blow up all the space probes, as Donald suggested), so not looking at Earth would not help; or there is no such superintelligence, in which case not looking at Earth also would not help.
Given the tech was available, a space-probe containing an uploaded mind is not that unlikely.
Yep, or a space-probe containing another AI that could eventually become a threat to whatever is on Earth.