I think the physicist and the AI researcher are in different positions.
One can chop a small piece off the world and focus on that, at a certain scale or under certain conditions.
The other has to create something that can navigate the entire world, and that may come to know it in better, or very different, ways than we do. It is unbounded in what it can know and in how it might best be shaped.
It is this asymmetry that I think makes their jobs very different.
It’s also not the case that everyone working on FAI tries the same approach.
Thanks. I had almost written off LW, and by extension MIRI, as completely inimical to proof-based AI research. I'm a bit out of the loop; have you got any recommendations of people working along the lines I am thinking of?
If you look at CFAR's theory of action, which is to first focus on getting reasoning right so that people can think more clearly about AI risk, that's not a strategy based on mathematical proofs.