We have both an AI tag and an AI Risk tag. When should we use one rather than the other? Maybe we should rename AI Risk to AI Risk Strategy or AI Strategy so they’re more clearly differentiated.
I think AI Risk is open to improvement as a name, but it’s definitely a narrower category than AI. AI includes reviews of AI textbooks, explanations of how certain ML architectures work, and anything else relating to AI. AI Risk is about the downside risk and analysis of what that risk looks like.
BTW, “productivity” and “akrasia” are another pair of tags that feel a bit poorly differentiated to me.
Productivity seems to include both “improve productivity by fighting akrasia” and “improve productivity by optimizing your workflows”, for example What’s your favorite notetaking system?, so it’s not a full overlap.
Procrastination is the tag that feels most redundant next to Akrasia to me.
Yeah, there’s also Willpower in that cluster. I think I want a good meta-cluster for that whole bundle but haven’t thought of what it’d be called. There’s overlap between each of them, but also some differentiation, so I’m not sure; I’d be interested in proposals for how to carve up and tag that space.
Oh, and Motivation(s). However, that tag has grown kind of huge and I haven’t gotten around to thinking about what it really should be.
FWIW, I’m not a fan of “akrasia”—seems unnecessarily highfalutin to me. Most stuff tagged with “akrasia” is essentially about procrastination, not akrasia as a philosophical problem. (Just found this article on Google.) I think it’s OK for LW to use jargon, but we should recognize jargon comes with a cost, and there’s no reason to pay the cost if we aren’t getting any particular benefit.
(crl826 mentioned that “procrastination” is another related tag in the latest open thread.)
So it sounds like the underlying content categories are:
Technical AI safety
Nontechnical AI safety/AI strategy
AI content unrelated to safety
Is that right?
I guess my complaint is that while “AI content unrelated to safety” always gets tagged “AI”, and “Nontechnical AI safety/AI strategy” always gets tagged “AI Risk”, there doesn’t seem to be a consistent policy for the “Technical AI safety” content.
All of them get tagged AI. Not all of the technical content gets tagged AI Risk – for example, when Scott Garrabrant writes curious things like prisoner’s dilemma with costs to modelling, that is related to embedded agency, but it’s not clearly relevant to AI risk, only indirectly at best. The ones that are explicitly about AI risk, such as What Failure Looks Like or The Rocket Alignment Problem, do get tagged AI Risk.