I feel generally agreeable towards this concept, and also towards the idea of being careful to use phrases as they are defined.
But I feel something else after starting to read the Arbital page. Since you quadruple insisted on it, I went ahead and actually opened the page and started reading it. And several things felt off in quick succession. I’m going to think out loud through those things here.
The first part is the concept of “guarded term”. Here’s part of the definition of that.
stretching it … is an unusually strong discourtesy.
...You can’t just say that something is a discourtesy. I have never heard of “guarded term” and I’m pretty sure it’s a thing that the people writing these pages made up, and is not well-known basically anywhere. So it’s pretty weird to say “if you do thing X, you’re being discourteous”. The way rudeness works is complicated, but it doesn’t work this way. You need bigger social agreement before something is actually rude.
Synonyms include ‘pivotal achievement’ and ‘astronomical achievement’.
It feels pretty weird and unnecessarily confusing to tell the reader about two synonyms right away, especially when I’m pretty sure that all of these terms are obscure. It seems like it would have been a lot better to just declare the title of the page to be the one term for this, and to let any “synonyms” fade into non-use.
The next paragraph defines two other terms, a contrasting term, and a superset term, each with their own abbreviations.
Then the next paragraph tells me about two deprecated terms!
Why on earth are you dumping all these random, extremely similar but different, not-at-all widely used terms on me? You’re both making it weirdly difficult for me to come away using terms you want me to use, and also making it seem like there’s a whole big history of using these terms when there really isn’t.
Next bit:
but AI alignment researchers kept running into the problem
...
Usage has therefore shifted such that (as of late 2021) researchers use...
Okay yeah, this is getting super annoying. Who is speaking for all “AI alignment researchers”? I’m like 95% sure this is all just referring to like half a dozen people having a series of conversations in the MIRI office. But it seems to be making it sound like a whole extant field, as if me using these terms wrong will cause miscommunication with “AI alignment researchers” --
...oooh. This is the feeling of detecting Frame Control. Yeah, that feels clarifying. I am getting increasingly weirded out by this page in part because it seems to be trying to control the frame.
To be clear, I don’t think this is intentional, or that any bad intent was necessarily involved. And for all I know, maybe the About page of Arbital says something like “here I will write articles as if terms were in established use in my preferred way.” Maybe the whole thing was semi-aspirational/semi-fictional. But I’m not going to go looking for more explanation. My heuristic for dealing with frame control is to leave. You get a certain number of chances to say your thing and make me understand what you’re trying to say, and after a certain number of frame-control-detection strikes, I just leave.
So, I’m not going to finish reading the Arbital page on Pivotal Act even though Raemon quadruple recommended it. And I guess I’ll just go ahead using “pivotal act” the same way I hear other people using it, maybe while vaguely remembering the one-sentence definition I did get, and continuing to independently evaluate the validity of the concept.
Okay, but how do we get technical terms with precise meanings, terms that can be analyzed using propositions that can be investigated and decided using logic and observation? If we’re in a context where the meaning of words is automatically eroded, projected into low-dimensional, low-context concepts and reshaped into whatever the surrounding political forces want, we’re not going to get anywhere without being able to fix the meaning of the words we need for non-obvious, technically important uses.
Instead of saying “using this term to mean X is a discourtesy”, one could try “please don’t use this term to mean X, and please encourage your readers not to use it to mean X, and to encourage their readers and so on”.
FWIW, I think this is an oversensitive frame-control reaction. Like, I agree there is (some) frame control* going on here, and there have been some other Eliezer pieces that felt frame-control-y enough that I think it’s reasonable to be watching out for it.
But it seems like you tapped out here at the slightest hint of it, and meanwhile… this term only exists at all because Eliezer thought it was an important concept to crystallize, and it’s only in the public discourse right now because Eliezer started talking about it, and refusing to understand what he actually means when he says it just seems super weird to me.
It was written on Arbital, which was always kinda in a weird beta state. Having read a fair amount of Arbital posts, my sense is Eliezer was sort of privately writing the textbook/background reading that he thought was important for the AI Alignment community he wanted to build. Eliezer didn’t crosspost it to LW as if it were written for and ready for the LW audience; I did, so judging it on those terms feels weird.
(* note: I think frame control is moderately common and isn’t automatically bad. I think it might be a good rationalist norm to acknowledge when you’re doing it, but that norm isn’t at all established, and definitely wasn’t established in 2015 when this was first written.)