Curated. There’s a lot about Raemon’s feedbackloop-first rationality that doesn’t sit quite right with me, that isn’t quite how I’d theorize about it, but there’s a core here I do like. My model is that “rationality” was something people were much more excited about ~10 years ago, until people updated that AGI was much closer than previously thought. Close enough that rather than sharpen the axe (perfect the art of human thinking), we’d better just cut the tree now (AI) with what we’ve got.
I think that might be overall correct, but I’d like it if not everyone forgot about the Art of Human Rationality. And if enough people pile onto the AI Alignment train, I could see it being right to dedicate quite a few of them to the meta of generally thinking better.
Something about the ontology here isn’t quite how I’d frame it, though I think I could translate it. The theory that connects this back to Sequences rationality is perhaps that feedbackloops are iterated empiricism with intervention. An alternative name might be “engineered empiricism”: basically, this is just one approach to entangling oneself with the territory. That’s much less what Raemon’s sketched out, but I think situating feedbackloops within known rationality-theory would help.
I think it’s possible this could help with Alignment research, though I’m pessimistic about that unless Alignment researchers are driving the development process themselves; maybe it could still happen without that, just more slowly.
I’d be pretty glad for a world where we had more Raemons and other people so that this could be explored. In general, I like this for keeping alive the genre of “thinking better is possible”, a core of LessWrong and something I’ve pushed to keep alive even as the bulk of the focus is on concrete AI stuff.