Does LessWrong need link posts for astralcodexten?
Not in general, no.
Aren’t LessWrong readers already pretty aware of Scott’s substack?
I would be surprised if the overlap were greater than 50%.
I’m linkposting it because I think this fits into a larger pattern of understanding cognition that will play an important role in AI safety and AI ethics.
Anyway, if you are reading this one, there are some interesting comments, so check those out too.
(I see what podcasts you listen to.)