It’s fun to fantasize about transcending the human condition via [contextual proxy for rationality], but I’m skeptical in the extreme that such a thing will happen—at least in a way that is not repugnant to most current value systems.
What would the least repugnant possible future look like, keeping in mind that all the details of such a future would have to actually hold together? (Since “least repugnant possible”, taken literally, would mean the details held together only by coincidence, consider instead a future that was, say, one in a billion for its non-repugnance.) If bringing about the least repugnant future you could were your only goal, what would you do? What actions would you take?
When I imagine those actions, they resemble rationality. They include trying to develop formal methods to understand, as best you can, which parts of the world are value systems that deserve to be taken into account for purposes of defining repugnance; how to avoid missing or persistently disregarding any value system that deserves to be taken into account; how to take those value systems into account even where they seem to contradict each other; and how to avoid missing or persistently disregarding major implications of those value systems. They also include being very careful not to gloss over flaws in your formal methods or overall approach (especially foundational problems like Gödelian undecidability, unsystematic use of reflection, bounded rationality, and the definition of slippery concepts like “repugnant”), in case those flaws point to a better alternative.
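A toy sketch, purely my own illustration and not anything proposed above: suppose, very crudely, that each value system could be caricatured as a function scoring how repugnant it finds a candidate future, and that candidate futures could be enumerated. Then one concrete way to “take contradictory value systems into account” is a minimax rule: prefer the future whose worst-case repugnance score, across all value systems, is smallest. Every name and number below is hypothetical.

    # Toy minimax aggregation over conflicting value systems.
    # Purely illustrative: the scoring, the enumeration of futures, and the
    # minimax rule itself are all assumptions, not settled formal methods.
    from typing import Callable, Dict, List

    ValueSystem = Callable[[str], float]  # maps a described future to a repugnance score in [0, 1]

    def least_repugnant_future(futures: List[str],
                               value_systems: Dict[str, ValueSystem]) -> str:
        """Return the future whose maximum repugnance over all value systems is smallest."""
        def worst_case(future: str) -> float:
            return max(score(future) for score in value_systems.values())
        return min(futures, key=worst_case)

    # Hypothetical example: two value systems that partly disagree.
    futures = ["status quo continues", "radical change A", "radical change B"]
    value_systems = {
        "tradition-weighted": lambda f: {"status quo continues": 0.3,
                                         "radical change A": 0.8,
                                         "radical change B": 0.6}[f],
        "welfare-weighted": lambda f: {"status quo continues": 0.7,
                                       "radical change A": 0.2,
                                       "radical change B": 0.5}[f],
    }
    print(least_repugnant_future(futures, value_systems))  # -> "radical change B"

The only point of the sketch is that “taking contradictory value systems into account” forces a choice of aggregation rule, and every such rule, whether minimax, weighted averaging, or veto thresholds, smuggles in contestable assumptions; that is exactly the kind of foundational flaw the paragraph above says should not be glossed over.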
When you imagine the actions of someone whose only goal was to bring about the least repugnant future they could, what do those actions resemble?
(How much repugnance is there in the “default”/“normal”/“if only it could be normal” future you imagine? Is that the amount of repugnance you take for granted: do you assume that no substantially less repugnant future is achievable, and that safely achieving a future at least roughly that non-repugnant would not generally require doing anything unprecedented? How repugnant would a typical future be in which humanity had preventably gone extinct because of irrationality? How repugnant would a future be in which humanity had gone extinct because of a preventable choice of repugnance-insensitive rationality? And how relatively likely would these extinctions be under the two conditions of global irrationality and global repugnance-insensitive rationality? Would a person who cared about the repugnance of the future, when choosing between advocating reason and advocating unreason, try to think through effects like these and take them into account, given that the repugnance of the future was at stake?)