I curated this post for the following reasons:
The question of how to improve our intuitions around topics with few and slow feedback loops is a central question of rationality, and this post adds a lot of helpful explicit models to this problem that I've not seen put quite this way anywhere else.
The core ideas seem not only valuable epistemically, but also to underlie some promising alignment strategies I've seen (which I believe inherit from your work, though I'm uncertain about this point).
The biggest hesitation I had about curating this post:
Each section is very detailed, and it took me a surprising amount of work to understand the structure of, and successfully chunk, both the post overall and its subsections, especially given its length.
Overall, I am excited to sit with these models for a while and integrate them into my current epistemic practices. Thank you for writing this post; I hope you write more like it.