There is no answer to my question on the “Executable Philosophy” Arbital page.
The fact that Eliezer developed a framework that can be used for systematically solving philosophical problems, and even the fact that it was applied to solve several such problems, doesn’t mean that MIRI had an explicit goal of solving philosophy.
“Executable philosophy” is Eliezer Yudkowsky’s term for discourse about subjects usually considered to belong to the realm of philosophy, meant to be applied to problems that arise in designing or aligning machine intelligence.
This seems to support my hypothesis that the goal was to solve problems arising in the creation of FAI, and that the fact that some such problems are considered to belong to the realm of philosophy was only tangential, but I’d like to hear explicit acknowledgment or disproof from someone with an insider perspective.
It appears Eliezer thinks executable philosophy addresses most philosophical issues worth pursuing:
Most “philosophical issues” worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy.
“Solving philosophy” is a grander marketing slogan that I don’t think was used, but, clearly, executable philosophy is a philosophically ambitious project.
It appears Eliezer thinks executable philosophy addresses most philosophical issues worth pursuing
Which completely fits the interpretation where he thinks that philosophical issues worth pursuing can in principle be solved via executable philosophy, and that executable philosophy is the best known tool for solving such issues, yet MIRI’s mission is not to solve such issues for the sake of philosophy itself, but only insofar as they contribute to the creation of FAI. On that interpretation, MIRI never had a dedicated task force for philosophy and gave up pursuing these issues as soon as it became clear they were not going to build FAI.
There is also the trivial case where Eliezer thinks that any philosophical issue that cannot contribute to the creation of FAI is not worth pursuing in principle, but I suppose we can set that aside for now.
clearly, executable philosophy is a philosophically ambitious project
I don’t see a need for such vague, non-committal statements when we can address the core crux. Did MIRI actively try to solve philosophical problems for the sake of philosophical problems, using the framework of executable philosophy, or did it not? If it did, what exactly didn’t work?
There might be some confusion. Did you get the impression from my post that I think MIRI was trying to solve philosophy?
I do think other MIRI researchers and I would consider the MIRI problems philosophical in nature even if they’re different from the usual ones: they’re more relevant and worth paying attention to given the mission, and (MIRI believes) they carve philosophical reality at the joints better than the conventional ones.
Whether it’s “for the sake of solving philosophical problems or not”… clearly they think they would need to solve a lot of them to do FAI.

EDIT: for more on MIRI philosophy, see deconfusion, free will solution.
Did you get the impression from my post that I think MIRI was trying to solve philosophy?
I did get the feeling that your post implies that. Could you help me clear my confusion?
For example here you say:
Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications.
This is something I agree with Eliezer about. I can clearly see that the “executable philosophy” framework is what made his reasoning about philosophy-adjacent topics so good compared to the baseline I had encountered before. His solution to free will is a great example.
But you seem to be framing this as a failed project. In what sense did it fail?
Some others, such as Wei Dai and Michael Vassar, had called even earlier the infeasibility of completing the philosophy with a small technical team.
The reading that I got is that the executable philosophy framework was trying to complete/solve philosophy and failed.
But if the executable philosophy framework was never systematically applied to philosophical problems in the first place, if the methodology wasn’t given a fair shot, in what sense can we say that it failed to complete philosophy? And if that was never the goal, in what sense are Wei Dai’s and Michael Vassar’s statements relevant here?
MIRI research topics are philosophical problems, such as decision theory and logical uncertainty. And they would have to solve more; ontology identification is a philosophical problem. Really, how would you imagine doing FAI without solving much of philosophy?
I think the post is pretty clear about why I think it failed. MIRI axed the agent foundations team, and I see very few people continuing to work on these problems. Maybe over multiple decades (past many of the relevant people’s median superintelligence timelines) some of the problems will get solved, but I don’t see “push harder on doing agent foundations” as a thing people are trying to do.