I thought about Agency Q4 (counterargument to Pearl) recently, but couldn’t come up with anything convincing. Does anyone have a strong view/argument here?
I don’t see any claim that it’s impossible for neural nets to handle causality. Pearl’s complaining about AI researchers being uninterested in that goal.
I suspect that neural nets are better than any other approach at handling the hard parts of causal modeling: distinguishing plausible causal pathways from ridiculous ones.
Neural nets currently look poor at causal modeling for roughly the same reason that High Modernist approaches weren’t willing to touch causal claims: without a world model that’s comprehensive enough to approximate common sense, causal modeling won’t come close to human-level performance.
A participant in Moderna’s vaccine trial was struck by lightning. How much evidence does that provide that the vaccine is risky?
If I try to follow the High Modernist approach, I think it says something like this: either remain uncertain enough to avoid any conclusion, or treat the lightning strike as evidence of vaccine risk.
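To make “how much evidence” concrete, here’s a toy Bayes-factor sketch. Every number in it is an illustrative assumption (the baseline strike rate and the “risky vaccine doubles it” multiplier are invented; the trial size is roughly right for Moderna’s phase 3). The point is that the size of the update hinges entirely on a common-sense prior judgment about whether a vaccine could plausibly change someone’s lightning-strike risk:

```python
# Toy Bayesian sketch of the lightning-strike question.
# Every number here is an illustrative assumption, not real data.

p_strike_baseline = 1e-6   # assumed yearly odds of being struck
p_strike_if_risky = 2e-6   # arbitrary: a "risky" vaccine doubles the rate

n_participants = 30_000    # roughly the size of Moderna's phase 3 trial

# P(at least one strike) under each hypothesis, approximated as n * p
# since p is tiny.
p_obs_given_safe = n_participants * p_strike_baseline
p_obs_given_risky = n_participants * p_strike_if_risky

bayes_factor = p_obs_given_risky / p_obs_given_safe
print(f"Bayes factor favoring 'vaccine is risky': {bayes_factor:.1f}")
# -> 2.0, but only under the arbitrary doubling assumption. Common
# sense says the causal mechanism is absurd, so the realistic
# likelihood ratio is ~1 and the strike is essentially no evidence.
```

Note that nothing in the arithmetic itself rules out the absurd causal pathway; that step has to come from a world model.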
As far as I can tell, AI approaches other than neural nets perform like scientists who blindly follow a High Modernist approach (assuming the programmers didn’t think to encode common sense about whether vaccines affect behavior in a lightning-strike-seeking way).
GPT-3, by contrast, has picked up enough hints about human beliefs to guess a little better than the High Modernist.
GPT-3 wasn’t designed to be good at causality. It’s somewhat close to being a passive observer. If I were designing a neural net to handle causality, I’d give it an infant-like ability to influence its environment.
If there are any systems today that are good at handling causality, I’d guess they’re robocar systems. What I’ve read about those suggests they’re limited by the difficulty of common sense, not causality.
I expect that when causal modeling becomes an important aspect of what AI needs for further advances, it will be done with systems that use neural nets as important components. They’ll probably look a bit more like Drexler’s QNR than like GPT-3.
Just a quick logistical thing: do you have a better source for Pearl making that argument? The current Quanta Magazine link isn’t totally satisfactory, but I’m having trouble replacing it.