Some people seem to dislike the recent deluge of AI content on LW; for my part, I often find myself scrolling, slightly annoyed, past all the non-AI posts. Most of the value I get from LW is the AI Safety discussion with a wider audience (e.g. I don’t have AF access).
I don’t really like trying to suppress LW’s AI flavour.
I dislike the deluge of AI content. Not in its own right, but because so much of it is built on background assumptions built on background assumptions built on background assumptions. And math. It’s illegible to me; nothing is ever explained from obvious or familiar premises. And I feel like it’s rude to complain, because I understand that people need to work on this very fast so we don’t all die, and I can’t reasonably expect alignment communication to be simple because I can’t reasonably expect the territory of alignment to be simple.
I can’t reasonably expect alignment communication to be simple because I can’t reasonably expect the territory of alignment to be simple.
my intuition wants to scream “yes you can” but the rest of my brain isn’t sure I can justify this with sharply grounded reasoning chains.
in general, being an annoying noob in the comments is a great contribution. it might not be the best contribution, but it’s always better than nothing. you might not get upvoted for it, which is fine.
and I really strongly believe that rationality was always ai capabilities work (the natural language code side). rationality is the task of building a brain in a brain using brain stuff like words and habits.
be the bridge between modern ai and modern rationality you want to see in the world! old rationality is stuff like solomonoff inductors, so eg the recent garrabrant sequence may be up your alley.
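(for anyone who hasn’t met the reference: a solomonoff inductor, very roughly, weighs every program that could have produced your observations so far, discounting longer programs exponentially. a standard way to write the prior, just as a sketch and not anything specific to the garrabrant sequence:

\[ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} \]

where \(U\) is a universal prefix machine, the sum runs over programs \(p\) whose output starts with the string \(x\), and \(\ell(p)\) is the program’s length in bits.)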