Of the agent foundations work from 2020, I think this sequence is my favorite, and I say this without actually understanding it.
The core idea is that Bayesianism is too hard. What we ultimately want, then, is to replace probability distributions over all possible things with simple rules that don't have to put a probability on every possible thing. In some ways this is the complement of logical uncertainty: logical uncertainty is about not having to keep your probabilities consistent over everything you could in principle deduce, while this is about not having to put probability distributions on everything.
I've found this a highly productive metaphor for cognition. We sometimes like to think of the brain as a Bayesian engine, but of necessity the brain can't be laying down probabilities for every single possible thing. We want a perspective that lets the brain consider hypotheses that specify the pattern of only some small part of the world, while still retaining some sort of Bayesian seal of approval.
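To gesture at the flavor with a toy example of my own (this is not anything from the sequence, and it ignores the actual infradistribution machinery): think of a hypothesis as a set of probability distributions over just the few outcomes it bothers to model, and score each action by its worst-case expected utility over that set. All of the outcomes, distributions, and utilities below are made up for illustration.

```python
# A toy sketch of the "set of distributions + worst-case reasoning" idea
# (my own illustration; the sequence's infradistributions are more general).

# The only outcomes this hypothesis bothers to model; everything else in the
# world is simply left unspecified.
OUTCOMES = ["rain", "sun"]

# Instead of one prior, the hypothesis is a set of candidate distributions,
# none of which is singled out as "the" truth. (Numbers are made up.)
CREDAL_SET = [
    {"rain": 0.2, "sun": 0.8},
    {"rain": 0.5, "sun": 0.5},
    {"rain": 0.7, "sun": 0.3},
]

# Utility of each action under each outcome (also made up).
UTILITY = {
    "take umbrella": {"rain": 1.0, "sun": 0.6},
    "leave umbrella": {"rain": 0.0, "sun": 1.0},
}

def worst_case_value(action: str) -> float:
    """Expected utility under the least favorable distribution in the set."""
    return min(
        sum(dist[o] * UTILITY[action][o] for o in OUTCOMES)
        for dist in CREDAL_SET
    )

# Maximin: pick the action whose worst case is best.
best = max(UTILITY, key=worst_case_value)
print(best, worst_case_value(best))  # -> "take umbrella", roughly 0.68
```

Running this picks "take umbrella", the action that looks best under the most pessimistic distribution in the set. The sequence builds something far more general than this maximin-over-a-finite-set sketch, but the basic move of refusing to commit to a single prior and planning for the worst case within your set is, as I understand it, in the same spirit.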
The framework does still rely on some physical impossibilities, such as operations over all possible infra-Bayesian hypotheses, and the invocation of worst-case reasoning that requires global evaluation. I'm super interested in what's going to come from pushing those boundaries.
That said, this sequence is tricky to understand and I’m bad at it! I look forward to brave souls helping to digest it for the community at large.
I interviewed Vanessa here in an attempt to make this more digestible: it hopefully acts as context for the sequence, rather than a replacement for reading it.