Your post suggests that the engineering worldview is somehow inferior to the philosophical worldview for predicting qualitative phase transitions, but you don't present any good evidence for this. In fact, all of the evidence for qualitative phase transitions you present comes from the engineering worldview. At most, you present evidence that predicting phase transitions is inherently difficult.
Do you have any evidence of the philosophical worldview making superior predictions on any relevant criteria? If not, I think you should add a clarifying disclaimer.
I don't think it's inferior; I think the two have contrasting strengths and limitations. I think the default view in ML would be to use 95% empiricism and 5% philosophy when making predictions, whereas I'd advocate for something more like 50/50, depending on your overall inclinations (I'm at 70-30 since I love data, and I think 30-70 is also reasonable, but I don't think either 95-5 or 5-95 would be justifiable).
I’m curious what in the post makes you think I’m claiming philosophy is superior. I wrote this:
> Confronting emergence will require adopting mindsets that are less familiar to most ML researchers and utilizing more of the Philosophy worldview (in tandem with Engineering and other worldviews).
This was intended to avoid making a claim of superiority in either direction.