FWIW, I think this is an oversensitive frame-control reaction. Like, I agree there is (some) frame control* going on here, and there have been some other Eliezer pieces that felt frame-control-y enough that I think it’s reasonable to be watching out for it.
But it seems like you tapped out here at the slightest hint of it, and meanwhile… this term only exists at all because Eliezer thought it was an important concept to crystallize, and it’s only in the public discourse right now because Eliezer started talking about it, and refusing to understand what he actually means when he says it just seems super weird to me.
It was written on Arbital, which was always kinda in a weird beta state. Having read a fair amount of Arbital posts, my sense is Eliezer was sort of privately writing the textbook/background reading that he thought was important for the AI Alignment community he wanted to build. Eliezer didn’t crosspost it to LW as if it were written for/ready for the LW audience; I did. So judging it on those terms feels weird.
(* note: I think frame control is moderately common and isn’t automatically bad. I think it might be a good rationalist norm to acknowledge when you’re doing it, but that norm isn’t at all established, and definitely wasn’t established in 2015 when this was first written.)