Fixed Point Exercises
Sometimes people ask me what math they should study to get into agent foundations. My first answer is that I have found the introductory class in every subfield helpful, and the later classes much less so. My second answer is to learn enough math to understand all fixed point theorems. These two answers are actually very similar: fixed point theorems appear all across mathematics, and they are central to (my way of) thinking about agent foundations.
This post is the start of a sequence on fixed point theorems. It will be followed by several posts of exercises that use and prove such theorems. While these exercises aren’t directly connected to AI safety, I think they’re quite useful for preparing to think about agent foundations research. Afterwards, I will discuss the core ideas in the theorems and where they’ve shown up in alignment research.
The math involved is not much deeper than a first course in the various subjects (logic, set theory, topology, computability theory, etc.). If you don't know the terms, a bit of Googling, Wikipedia, and Math StackExchange should easily get you most of the way. Note that the posts can be tackled in any order.
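To give a concrete taste of the flavor of theorem involved (this example is mine, not part of the sequence): one of the simplest fixed point theorems, a corollary of the intermediate value theorem, says that every continuous f : [0,1] → [0,1] has a fixed point. Here is a minimal Python sketch of how one might locate such a point numerically; the function name and the choice of math.cos as a test map are illustrative assumptions, not anything from the exercises.

```python
import math

# Illustrative sketch: find a fixed point of a continuous f : [0,1] -> [0,1].
# Let g(x) = f(x) - x. Since f maps into [0,1], g(0) >= 0 and g(1) <= 0,
# so the intermediate value theorem guarantees some x with f(x) = x.
# Bisection maintains the invariant g(lo) >= 0 and g(hi) <= 0.

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    g = lambda x: f(x) - x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid  # a fixed point lies in [mid, hi]
        else:
            hi = mid  # a fixed point lies in [lo, mid]
    return (lo + hi) / 2

x = fixed_point(math.cos)  # cos maps [0,1] into [0,1]
print(x, math.cos(x))      # x ~ 0.7390851..., and cos(x) ~ x
```

None of the exercises require programming; the point is just that "there exists some x with f(x) = x" is the shape of claim all of these theorems share.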
Here are some ways you can use these exercises:
You can host a local MIRIx group and go through the exercises together. This might give a local group an affordance to work on math rather than only reading papers.
You can work on them by yourself for a while, and post questions when you get stuck. You can also post your solutions to help others, let others see an alternate way of doing a problem, or help you realize that there is a problem with your solution.
You can skip to the discussion (which has some spoilers), learn a bunch of theorems from Wikipedia, and use this as a starting point for trying to understand some MIRI papers.
You can treat answering these questions as a goalpost for learning a bunch of introductory math from a large collection of different subfields.
You can show off by pointing out that some of the questions are wrong, and then I will probably fix them and thank you.
The first set of exercises is here.
Thanks to Sam Eisenstat for helping develop these exercises, Ben Pace for helping edit the sequence, and many AISFP participants for testing them and noticing errors.
Meta
Read the following.
Please use the (new) spoilers feature (the symbol '>' followed by '!' followed by a space) in your comments to hide all solutions, partial solutions, and other discussions of the math. The comments will be moderated strictly to cover up spoilers!
I recommend putting all the object-level points in spoilers and leaving metadata outside the spoilers, like so:
Here’s my solution / partial solution / confusion for question #5:
And put your idea in here! (Reminder: LaTeX is cmd-4 / ctrl-4.)
How long would it take somebody to go from basic algebra/stats to being able to understand a technical MIRI paper?
Suppose the person:
- is a decent programmer
- is experienced with effective learning and productivity methods
- can dedicate two hours of focused study every day
- has consumed a lot of non-technical resources, e.g. Superintelligence, 80,000 Hours interviews, FHI podcast, Rationality: From AI to Zombies, GEB, etc.
Sounds like me at the beginning of this year; I'm now able to make my way through the Logical Induction paper. I'd be happy to help, by the way; feel free to message me.
Note: In markdown, the spoiler syntax is as follows:
`>! This is some spoiler text`
Which should render like this:
This is some spoiler text
Note to GreaterWrong users: GW now has full support for spoiler blocks. They will render correctly (mouse over one, or select its text, to reveal), and there's a new button in the editor that will insert the correct spoiler syntax for you.
The current implementation of spoiler tags is pretty experimental, and we will probably change how it renders, but the syntax should continue working for the indefinite future.
At the time of writing, for the two spoilers in the main post, hovering over either will reveal both. Is that intentional? It does not seem desirable.
Nope, not intentional. Will see whether I can get around to fixing that today.
Do you have any recommended reading for learning enough math to do these exercises? I'm sort of using these as a textbook-list-by-proxy (e.g., Google "intermediate value theorem", check which area of math it's from, oh hey, it's analysis, get an introductory analysis textbook, repeat), though I also have little knowledge of the field and don't want to wander down suboptimal paths.