I’m not so sure. If we’re talking about a God-like unFriendly AI, it might do a quick survey of the human race, atom by atom, and then replace it with a lower-entropy system. That way, it can analyze human beings without letting them keep increasing entropy, since an entropy increase is something even the AI cannot undo.
I wasn’t arguing against the possibility of atrocities (within the abstract discourse of “God-like AIs”, which BTW feels contrived to me); just imagine how much redundancy could be discarded while still keeping much of the information content of humanity. I was arguing that there is more room for benevolence than the presentation recognizes: benevolence from uncertainty about value. (This extends my “computation argument” from the “discounting” comment thread by Perplexed with an “information argument”.)
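To make the redundancy point concrete: lossless compression is exactly “sparing redundancy while keeping the information content”, since the original is perfectly recoverable. A toy sketch in Python (the sample string is made up purely for illustration):

```python
import zlib

# A highly redundant "pattern": the same fragment repeated many times.
# (Made-up sample data, purely for illustration.)
raw = b"pattern: ACGTACGTACGT\n" * 1000

# Lossless compression strips out the redundancy...
compressed = zlib.compress(raw, 9)

print(f"raw size:        {len(raw):>8} bytes")
print(f"compressed size: {len(compressed):>8} bytes")

# ...but keeps all of the information content: the original
# is reconstructed exactly, bit for bit.
assert zlib.decompress(compressed) == raw
```

On this toy input the compressed form is a small fraction of the raw size; real data is less redundant than a repeated string, but the principle is the same.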
But if you kill patterns that could be reused, you are just wasting negentropy. So our argument favors freezing, not killing.
Only if freezing expends less energy than killing. If it doesn’t, the most energy-efficient choice would be to scan humanity and then wipe it out before it uses any more energy.
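A side note on that energy comparison: it isn’t obvious that killing is free, because by Landauer’s principle irreversibly erasing information has a minimum thermodynamic cost of kT ln 2 per bit. A rough back-of-the-envelope sketch in Python (the temperature and the bit count are assumed for illustration, not estimates of anything real):

```python
import math

# Landauer bound: minimum energy to irreversibly erase one bit at temperature T.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed ambient temperature, K

energy_per_bit = K_B * T * math.log(2)   # ~2.87e-21 J
print(f"Landauer bound at {T:.0f} K: {energy_per_bit:.3e} J per bit erased")

# Purely hypothetical bit count, just to show the scaling.
n_bits = 1e31
print(f"Erasing {n_bits:.0e} bits costs at least "
      f"{n_bits * energy_per_bit:.3e} J")
```

The per-bit floor is tiny, so this hardly settles the freezing-vs-killing question, but it does mean “discard the information” carries a nonzero price too.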
I’m confused about what you mean by scanning. If you mean “scan and preserve the information in a databank”, then it’s a form (perhaps a very weak one, depending on how much of the information relevant to us is actually retained) of the freezing I’ve been referring to (not necessarily literal freezing). If you mean “scan, compute some statistics, then discard the information”, it is killing.
I was thinking about the former, which is indeed more like freezing. However, it is unlikely that an unFriendly AI would ever re-implement humanity (especially if it mostly cares about entropy), so in practice it is akin to killing.