It's true, it's cool, but I suspect he's been somewhat disheartened by how hard it's been to make this work in real-world settings.
In The Book of Why, he essentially argues that it's impossible to learn causality from observational data alone, which is a confusing message if you come from his earlier books.
But now, with language models, I think his hopes are up again, since models can essentially piggyback on causal relationships already inferred by humans and encoded in text.