Did you get the impression from my post that I think MIRI was trying to solve philosophy?
I did get the feeling that your post implies that. Could you help me clear up my confusion?
For example here you say:
Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications.
This is something I agree with Eliezer about. I can clearly see that the “executable philosophy” framework is what made his reasoning about philosophy-adjacent topics so good compared to the baseline I had encountered before. His solution to free will is a great example.
But you seem to be framing this as a failed project. In what sense did it fail?
Some others, such as Wei Dai and Michael Vassar, had called even earlier the infeasibility of completing the philosophy with a small technical team.
The reading that I got is that the executable philosophy framework was trying to complete/solve philosophy and failed.
But if the executable philosophy framework wasn’t even systematically applied to solving philosophical problems in the first place, if the methodology wasn’t given a fair shot, in what sense can we say that it failed to complete philosophy? And if that was never the goal, in what sense are Wei Dai’s and Michael Vassar’s statements relevant here?
MIRI’s research topics are philosophical problems, such as decision theory and logical uncertainty. And they would have had to solve more: ontology identification is a philosophical problem too. Really, how would you imagine doing FAI without solving much of philosophy?
I think the post is pretty clear about why I think it failed. MIRI axed the agent foundations team, and I see very few people continuing to work on these problems. Maybe in multiple decades (past many of the relevant people’s median superintelligence timelines) some of the problems will get solved, but I don’t see “push harder on doing agent foundations” as a thing people are trying to do.