The craziest true thing I can imagine right now that Eliezer’s hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed, that the EV defined by Eliezer and his friends coheres, and that it does not care how much suffering exists in the universe.
Maybe I am playing the game wrong.
I interpreted the object of the game as being to minimize the probability that Eliezer currently assigns to my response to his question (what is the craziest thing that . . .), on the grounds that Eliezer is blinded by anosognosia or by an “absolute denial macro”.
That was the only interpretation I could imagine that gives Eliezer a sensible motive for asking his question (what is the craziest thing that . . .) and defining the game.
But maybe I am just not smart enough to play this game that Eliezer has defined.
EDIT. Oh wait. I just imagined a second interpretation that gives Eliezer a sensible motive: to cause the reader of Eliezer’s post to do for himself what, under my first interpretation, I was attempting to do for Eliezer. In other words, I am supposed to imagine what truth I am denying.
A third interpretation is that his motive is for us to respond with a statement that all of human civilization is denying but that is actually true, in which case I stick to my original response, which I will now repeat:
The craziest true thing I can imagine right now that Eliezer’s hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed, that the EV defined by Eliezer and his friends coheres, and that it does not care how much suffering exists in the universe.
The probability that I assign to the event that CEV goes that way is probably higher than that assigned by any other human. In addition, two humans I know of probably assign it a probability above 1 or 2%. I cannot rule out the possibility that humans I have not discussed this issue with also assign it a probability above 1 or 2%, but surely the vast majority of humans are “absolutely denying” this, i.e., assigning it a probability under 0.01%.
But you believe that, don’t you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed.