Any new information about reality, if properly understood… can only cause people to become more ethical
Whether this is true depends on your definition of “ethical.” In any case, your claim here doesn’t weigh against the idea “that certain claims about the nature of reality could cause people to become more immoral” because people do not, in fact, always “properly understand” new information about reality.
Eliezer did say “Doing worse with more knowledge means you are doing something very wrong,” but check what he said in the very next paragraph: “On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.” The trouble is that current people are indeed only half-rational, or worse.
A particular truth can only hurt someone if he holds a false belief.
A counterexample: Suppose that a human-level AI, Ralph, holds only true beliefs. But Ralph doesn’t yet know that Petunia exists. The superintelligent Omega tortures everyone who knows that Petunia exists. Now, Ralph learns that Petunia exists. But this truth hurts him, even though he doesn’t hold a false belief.
Was there any need for AIs in the example?
I did it for clarity. I’m not sure what a human with only true beliefs looks like, or whether “a human with only true beliefs” will be a sensible phrase after we leave folk psychology behind.
We have even better counterexamples against:
Any new information about reality, if properly understood (that part is important), can only cause people to become more ethical.
Suppose Alice hates tall people with a passion. Then she learns about a gathering of vulnerable tall people. She properly understands all the relevant consequences of this fact, including that now she can act on her hatred!
Eliezer did say “Doing worse with more knowledge means you are doing something very wrong,” but check what he said in the very next paragraph: “On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.”
The solution to this problem is to do something Eliezer always argues against for some reason, namely, to compartmentalize.
Also, it’s likely that becoming more rational won’t necessarily help, for the reasons you mentioned here.
The solution to this problem is to do something Eliezer always argues against for some reason, namely, to compartmentalize.
Does Eliezer really do that? I got the impression that compartmentalization is, at least in some cases, perceived as a functional model for avoiding big mistakes. (Just as you suggest.)