I think the number of safe people depends sensitively on the details of the 1,000,000,000,000,000xing. For example: Were they given a five-minute lecture on the dangers of value lock-in? On the universe’s control panel, is the “find out what I would think if I reflected more, and what the actual causes are of everyone else’s opinions” button more prominently in view than the “turn everything into my favorite thing” button? And so on.
My model is that giving them the five-minute lecture on the dangers of value lock-in won’t help much. (We’ve tried giving five-minute lectures on the dangers of building superintelligences...) And I think most people executing “find out what would I think if I reflected more, and what the actual causes are of everyone else’s opinions” would get stuck in an epistemic pit and not realize it.