Sometimes people redraw boundaries for reasons of local expediency. For instance, the category of AGI seems to have been expanded implicitly in some contexts to include what might previously have just been called a really good machine learning library that can do many things humans can do. This allows AGI alignment to be a bigger-tent cause, and raise more money, than it would in the counterfactual where the old definitions were preserved.
This article seems to me to be outlining a principled case that such category redefinitions can be systematically distinguished from purely epistemic category redefinitions, with the implication that there’s a legitimate interest in tracking which is which, and sometimes in resisting politicized recategorizations in order to defend the enterprise of shared mapmaking.
I don’t see how this article argues against a wider AGI definition. The wider definition is still a correlational cluster.
The article doesn’t say that it’s worthwhile to keep the historical meaning of a term like AGI. It also doesn’t say that it’s good to draw the boundaries in a way that lets a person guess where the boundary is based on understanding the words “artificial”, “general”, and “intelligence”.
Nor does the wider definition draw a thinner boundary, so it wouldn’t violate “38. Your definition draws a boundary around a cluster in an inappropriately ‘thin’ subspace of Thingspace that excludes relevant variables, resulting in fallacies of compression.”
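To make the quoted failure mode concrete, here is a minimal sketch (the three variables and the cluster positions are invented purely for illustration) of a boundary drawn in a subspace that excludes a relevant variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clusters in a three-dimensional "Thingspace": they overlap on the
# first two variables and are separated only along the third.
cluster_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.5, size=(100, 3))
cluster_b = rng.normal(loc=(0.0, 0.0, 5.0), scale=0.5, size=(100, 3))

# A "thin" category: the boundary is drawn using only the first two variables.
thin_a = cluster_a[:, :2]
thin_b = cluster_b[:, :2]

# In the thin subspace the two clusters are indistinguishable ...
print(np.linalg.norm(thin_a.mean(axis=0) - thin_b.mean(axis=0)))  # close to 0

# ... but in the full space they are far apart: the excluded third
# variable carried all of the information separating them.
print(np.linalg.norm(cluster_a.mean(axis=0) - cluster_b.mean(axis=0)))  # close to 5
```

Projecting away the separating variable is what “fallacies of compression” refers to here: category membership stops carrying the information that mattered.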
The article didn’t “argue against” a wider AGI definition. It implied a more specific claim than “for” or “against.”
The article starts by speaking about “It is what people should be trying to do”, says in its middle “This leaves aspiring instructors of rationality in something of a predicament: in order to teach people how categories can be more or (ahem) less wrong,” and ends by speaking about what people must do.
That does appear to me like an article that intends to make a case that people should prefer certain definitions over others.
If your case is rather that the value of the article lies in classifying the distinct ways boundaries get drawn, it seems surprising to me that you read out of the article that certain claims should be classified as redrawing boundaries for reasons of local expediency. That seems odd given that the article speaks neither about redrawing boundaries, nor about redefining boundaries, nor about classifying anything under the suggested category of “local expediency”.
Rationality discourse is necessarily about specific contexts and purposes. I don’t think the Sequences imply that a spy should always reveal themselves, or that actors in a play should refuse to perform the same errors with the same predictable bad consequences two nights in a row. Discourse about how to speak the truth efficiently, on a site literally called “Less Wrong,” shouldn’t have to explicitly disclaim that it’s meant as advice within that context every time, even if it’s often helpful to examine what that means and when and how it is useful to prioritize over other desiderata.
I’m not sure what your position is. Is it “This post isn’t advice. It’s wrong for you (ChristianKl) to expect that the author explicitly disclaims giving advice when he doesn’t intend to give advice.”?
If that’s the case, it seems strange to me. This post contains explicit statements about what people should or must do. It contains them at the beginning and at the end, which are usually the places where an essay states its purpose.
It’s bad to be too vague to be wrong.
Postmodern writing about how to speak truth efficiently that’s too vague to be wrong is problematic, and I don’t think a bunch of LW signaling and cheers for rationalists makes it better.