I would like humanity to have a glorious future. But it must be humanity’s future, not that of some rando such as myself on whom godlike superpowers suddenly fall. Every intervention I might make would leave a god’s fingerprints on the future. Humanity’s future should bear humanity’s fingerprints, not be just a finger-puppet on the hand of a god. Short of deflecting rogue asteroids that humanity could not survive on its own, there is likely very little I would do beyond observing its development and keeping an eye out for anything that would destroy it.
It is said that God sees the fall of every sparrow; nevertheless the sparrow falls.
But you would spend a star to stop another rando from messing with humanity’s future, right? My point was more about humans not being low-impact, or about impact measures depending on values. Because if even humans would destroy stars, I don’t get what people mean by non-fanatical maximization or why it matters.
If gods contend over humanity, it is unlikely to go well for humanity. See the Hundred Years’ War, and those gods didn’t even exist, and acted only through their believers.
I don’t get what people mean by non-fanatical maximization or why it matters.
Uncertainty about one’s utility calculations. Descending from godhood to the level of human capacity: if we do have utility functions (which is disputed), we cannot exhibit them, even to ourselves. We have uncertainties that we are unable to quantify as probabilities. Single-mindedly trying to maximise a single thing that happens to have a simple, legible formulation leads only to disaster. The greater the power to do so, the worse the disaster.
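As a toy sketch of that last point (my own illustration in Python, with entirely made-up quantities and a hypothetical fuller_value function, not anyone’s actual model): the more single-mindedly a fixed budget is poured into the one simple, legible target, the worse the outcome looks by any fuller measure of value.

```python
# Toy Goodhart-style illustration (made up for this comment): the agent's
# "legible" proxy is just the paperclip count, but the fuller notion of
# value also needs everything that is *not* paperclips.
import math

BUDGET = 100.0  # total resources available

def legible_proxy(paperclips: float, other: float) -> float:
    # The single, simple, measurable thing being maximised.
    return paperclips

def fuller_value(paperclips: float, other: float) -> float:
    # Hypothetical "real" value: needs both goods, with diminishing returns.
    return math.sqrt(paperclips) + 10 * math.sqrt(other)

for effort in (0.1, 0.5, 0.9, 0.99, 1.0):
    # "Effort" = how single-mindedly the budget is spent on the proxy.
    clips = effort * BUDGET
    other = BUDGET - clips
    print(f"effort {effort:4.2f}: proxy = {legible_proxy(clips, other):6.1f}, "
          f"fuller value = {fuller_value(clips, other):6.1f}")
```

The proxy column climbs monotonically while the fuller-value column collapses; more optimisation power only widens the gap, which is the “greater the power, the worse the disaster” point in miniature.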
Furthermore, different people have different goals. What would an anti-natalist do with godlike powers? Exterminate humanity by sterilizing everyone. A political fanatic of any stripe? Kill everyone who disagrees with them and force the remnant to march in step. A hedonist? Wirehead everyone. In the real world, such people do not do these things because they cannot. Is there anyone who would not be an x-risk if given these powers?
Hence my answer to the godhood question. For humanity to flourish, I would have to avoid being an existential threat myself.
Ok, whatever, let it be rogue asteroids: why is deflecting them not fanatical? How would the kind of uncertainty that allows for so much power to be used help with AI? It could just as well deflect the Earth away from its cozy paperclip factory, while observing its development. And from an anti-natalist viewpoint it would be a disaster not to exterminate humanity. The whole problem is that this kind of uncertainty in humans behaves like any other human preference, and just calling it “uncertainty” or “non-fanatical maximization” doesn’t make it more universal.
I would, even if it didn’t.