So it sounds like the underlying content categories are:
- Technical AI safety
- Nontechnical AI safety/AI strategy
- AI content unrelated to safety
Is that right?
I guess my complaint is that while “AI content unrelated to safety” always gets tagged “AI”, and “Nontechnical AI safety/AI strategy” always gets tagged “AI Risk”, there doesn’t seem to be a consistent policy for the “Technical AI safety” content.
All of them get tagged AI. Not all of the technical content gets tagged AI risk – for example, when Scott Garrabrant writes about curious things like a prisoner’s dilemma with costs to modelling, that work is related to embedded agency, but it isn’t clearly relevant to AI risk, only indirectly at best. The posts that are explicitly about AI risk, such as What Failure Looks Like or The Rocket Alignment Problem, do get tagged AI risk.