Hmm, I agree that Eliezer, MIRI and its precursors did a lot of good work raising the profile of this particular x-risk. However, I am less certain of their theoretical contributions, which you describe as:
That doesn’t mean the theorizing was useless. It laid an incredible amount of valuable groundwork. It gave the experimental researchers a sense of what they are up against, laid out the scope of the problem, and made helpful pointers toward important characteristics that good solutions must have.
I guess they did highlight a lot of dead ends, gotta agree with that. I am not sure how much the larger AI/ML community values their theoretical work. Maybe the practitioners haven’t caught up yet.
Theory used to be 95% of the work going into AGI alignment. Now it needs to become more like 5%.
Well, whatever the fraction, it certainly seems like it’s time to rebalance it, I agree. I don’t know whether MIRI has the know-how to do experimental work at the level of the rapidly advancing field.