Zubon: “if you do not think you should be a bit more sociopathic, what are the odds you have exactly the right amounts of empathy and altruism?”
That was exactly what I was going to say ;-) My personal “gut” feeling is that I have exactly the right amount of empathy, no more, no less. Intellectually, I realize that this means I have failed the reversal test, and that I am therefore probably suffering from status quo bias. (This assumes, as I believe, that there is an objectively true ethics; if you’re a moral relativist, you get a blank cheque for status quo bias in your ethical views.)
Sebastian Hagen: “There might not be enough “yuck” and “yum” factors around to offer direct guidance on every question, but they’re still the basis for abstract rational reasoning. Do you think “paperclip optimizer”-type AIs are impossible? If so, why? There’s nothing incoherent about a “maximize the number of paperclips over time” optimization criterion; if anything, it’s a lot simpler than those in use by humans.”
Yes, that’s true. As soon as I made that post I realized I had gone too far, so let me restate a more reasonable position: any moral agreement or psychological unity that humankind has, over and above the unity that all approximately rational minds share, will stem from the “yuck” and “yum” factors that evolved in our EEA.
As for “rational economic agents” like the paperclip maximizer: yes, they do exist, but they, together with a wide class of rational agents (including ones that are not utility maximizers), share a certain axiological unity called universal instrumental values. I’m more interested in this “unity of agent kind”.
One could summarize by saying:
Axiological agreement = (unity of agent kind) + (yuck and yum)
Personally I find it hard to really identify with my genes’ desires. Perhaps one could consider this a character flaw, but I just cannot honestly make myself identify with these arbitrary choices.