That’s pretty much why I wanted a commitment to certain epistemic rationality projects: to show that it’s possible to train that better (which has high VOI) and to make sure CFAR gets some momentum in that direction.