Understanding Infra-Bayesianism: A Beginner-Friendly Video Series
Click here to see the video series
This video series was produced as part of a project through the 2022 SERI Summer Research Fellowship (SRF) under the mentorship of Diffractor.
Epistemic effort: Before working on these videos, we spent ~400 collective hours working to understand infra-Bayesianism (IB) for ourselves. We built up our own understanding of IB primarily by working together on the original version of Infra-Exercises Part I and subsequently creating a polished version of the problem set in hopes of making it more user-friendly for others.
We then spent ~320 hours writing, shooting, and editing this video series. Parts 5 through 8 were checked for accuracy by Vanessa Kosoy, but any mistakes that remain in any of the videos are fully our own.
Goals of this video series
IB appears to have quite a bit of promise. It seems plausible that IB itself, or some better framework that builds on and eventually replaces it, could end up playing a significant role in solving the alignment problem (although, as with every proposal in alignment, there is significant disagreement about this). But the original sequence of posts on IB appears to be accessible only to those with a graduate-level understanding of math, and even readers with that background would likely be well-served by a gentle overview of IB before plunging into the technical details.
When creating this video series, we had two audiences in mind. Some people just want to know what the heck infra-Bayesianism is at a high level and understand how it’s supposed to help with alignment. We designed this video series to be a one-stop shop for that goal: we hope viewers will never have to pause a video to go look up a word or concept it assumes knowledge of. To that end, the first four videos cover preliminary topics (which can be skipped depending on how familiar the viewer already is with them). Here are the contents of the video series:
1. Intro to Bayesianism
2. Intro to Reinforcement Learning
3. Intro to AIXI and Decision Theory
4. Intro to Agent Foundations
5. Vanessa Kosoy’s Alignment Research Agenda
6. Infra-Bayesianism
7. Infra-Bayesian Physicalism
8. Pre-DCA
9. A Conversation with John Wentworth
10. A Conversation with Diffractor
11. A Conversation with Vanessa Kosoy
We found that in order to explain IB effectively, we needed to show how IB is situated within Vanessa Kosoy’s broader research agenda (which itself is situated within the agent foundations class of research agendas). We also wanted to give a concrete example of how IB could be applied to create a concrete protocol for alignment. Pre-DCA is such a protocol. It is very new and is changing quite rapidly as Vanessa tinkers with it more and more. By the time readers of this post watch the Pre-DCA video, it is likely that parts of it will already be out of date. That’s perfectly fine. The purpose of the Pre-DCA video is purely to illustrate how one might go about leveraging IB to brainstorm a solution to alignment.
Our second audience consists of those who want to master the technical details of IB so that they can apply it to their own alignment research. We hope that the video series will serve as a nice “base camp” for gaining a high-level understanding of IB before delving into more technical sources (such as Infra-Exercises Part I, the original sequence of posts on IB, or Vanessa’s post on infra-Bayesian physicalism).
Why videos?
The primary reason that we chose to create videos instead of a written post is that video is a much more neglected medium for AI alignment pedagogy. Video also allows us to relate to our audience on a more personal level. I (Jack) often find myself pausing in the middle of reading a LessWrong post to look up a video of the author speaking so that I can get a better sense of who they are.
Acknowledgements
Many thanks to Diffractor, Vanessa Kosoy, John Wentworth, Thomas Larsen, Brittany Gelb, and Lukas Melgaard for contributing to this project.
We are also grateful to the SERI SRF organizers who supported us throughout this project: Joe Collman, Victor Warlop, Sage Bergerson, Ines Fernandez, and Cian Mullarkey.
Thanks! I expect this to become the go-to resource to refer people to, and I hope it gets properly rewarded (I’m surprised at the small number of upvotes: is LessWrong really that video-agnostic?)
I really appreciate it! I hope this ends up being useful to people. We tried to create the resource we would have wanted to have when we first started learning about infra-Bayesianism. Vanessa’s agenda is deeply interesting (especially to a math nerd like me!), and developing formal mathematical theories to help with alignment seems very neglected.
This looks absolutely fantastic! Great job—I look forward to going through the lecture series!
Thanks so much :)
For those of us who may want to go through the problem set ourselves—would watching these videos be spoiling ourselves?
Great question! Definitely not. The video series is way, way less technical than the problem set. The problem set is for people who want to understand the nitty-gritty details of IB, while the video series is for those who want a broad overview of what the heck IB even is and how it fits into the alignment research landscape.
In fact, I’d say it would benefit you to go through the video series first to get a high-level conceptual overview of how IB can help with alignment. This understanding will help motivate the problem set; without it, the exercises can feel like a big pile of abstract math disconnected from alignment.
That said, if you aren’t up for watching a long video series, there’s a much briefer (but far less thorough) conceptual description of how IB can help with alignment in the conclusion of the problem set. You could read that first and then work the problems.