I wasn’t arguing against the possibility of atrocities (within the abstract discourse of “God-like AIs”, which, by the way, feels contrived to me); after all, consider how much redundancy could be discarded while still preserving much of humanity’s information content. I was arguing that there is more room for benevolence than the presentation recognizes: benevolence arising from uncertainty about value. (This extends my “computation argument” from the “discounting” comment thread started by Perplexed with an “information argument”.)