Let me try to rephrase: a correct FAI theory shouldn’t contain dangerous ideas. If we find that the current version does have dangerous ideas, that suggests we are on the wrong track. The “Friendly” in “Friendly AI” should mean friendly.
And so it was in this case, though that isn’t an example for the other times when it wasn’t; a rare occurrence. I’m pretty sure it didn’t lead to any errors in this simple case, though.
(I wonder why Eliezer weighed in the way he did, with only weak disambiguation between the content of Tetronian’s comment and commentary on the correctness of Roko’s post.)
I got the impression that you responded to “FAI Theory” as our theorizing and Eliezer responded to it as the theory making its way to the eventual FAI.
Wrong and stupid.
FYI, this is an excellent example of contempt.
Ok... but why?
Edit: If you don’t want to say why publicly, feel free to PM me.