My Advice for Incoming SERI MATS Scholars
I participated in SERI MATS 2.0 in John’s stream. Here is some advice based on my experience.
Be Nice
The AI alignment community is pretty small. If you are an ass, everybody will know that you are an ass. The same holds, to a lesser extent, for being nice. When I visited Edinburgh to attend a talk by David Krueger, there were several people there whom I had first met at Lightcone. When I visited Trajan House, the same thing happened. You never know when you might be talking to a grantmaker over dinner.
Epistemic status: I did not actually behave like an ass. I expect this to be true based on how many people I ran into, in different parts of the world, whom I had seen before.
Use Lunch and Dinner at Lightcone
During MATS 2.0, lunch and dinner were both served at Lightcone every day of the week. There were always many cool people around, and the conversations were unusually insightful. My favorite heuristic is to just join whatever conversation John is in. I am pretty sure that at least 15% of the value of SERI MATS came from eating lunch and dinner at Lightcone, and probably much more than that.
Epistemic status: It feels like this was very useful, but it is hard to quantify.
Take care of yourself
At the beginning of SERI MATS, there were many social events (mostly just general Berkeley EA/Rationalist events), and they all happened pretty late. For some reason, I need 10.5 to 12 hours of sleep every day or I will be tired. My team was meeting at 10:00 every morning. For the first 3 weeks, I was sleep-deprived almost every day. John’s workshops are pretty great, and being sleep-deprived during them destroyed probably more than 20% of their value. That being said, at least one of the socials was high-value, and it was probably worth the cost.
The worst part was that I got used to being sleep-deprived: I kept depriving myself of sleep even when there were no socials happening. I made similar mistakes with exercise and eating healthily. Somehow it is hard to keep up good habits when you change your environment.
Epistemic status: It’s hard to evaluate the counterfactual where I was not sleep-deprived. I estimate I could have gotten 5-35% more value by not making the mistakes I listed.
Learn to detach yourself from your ideas
Check out this comment.
Be Agentic
If something doesn’t fit right, try to fix it.
Do you have a crazy idea about how to improve the office? Ask, or implement it yourself (after getting permission)! (The Lightcone ops team is very competent and cool. John had a loft bed in his office when I was there. I am not sure about the situation in the new SERI MATS offices.)
Choose how you spend your time. If you are in a conversation, notice when you would rather do something else, and act on that feeling: get back to work, join the other discussion that seems more interesting, or do whatever else seems higher value. Being able to do this is a great skill, and building it up is probably easier when talking to rationalists; they won’t punish you for this kind of initiative.
In general, being agentic seems closely related to making sure you have thought all your high-value thoughts. I recommend sitting down for at least 5 minutes by the clock every day and trying to come up with high-value directions to think in. The second step is to then actually do what you think is best, which is not easy.
Think about AI alignment from scratch
Reading somebody’s work is different from discovering the underlying insights for yourself. Many details are omitted in a write-up, especially details about the research process. When I thought about AI alignment from scratch, I was thinking thoughts I had not thought before. It seems likely that these thoughts occurred to people like Nick Bostrom but did not make it into e.g. Superintelligence. Or at least I did not get these thoughts out of Superintelligence by just reading it.
It is easy to read someone’s work and regurgitate it whenever it seems relevant. You might even be really good at this and impress other people. But that does not mean you understand all the important details, and it certainly does not mean you understand the underlying ideas as well as the original author.
I recommend that any AI alignment researcher think about how to solve the AI alignment problem from scratch, for at least a couple of hours every month. While doing that, try hard not to propose solutions, at least initially. Force your mind not to autocomplete your thinking with solutions you have thought about in the past, or write those down to get them out of your mind. If you are just starting out, I expect spending even more time thinking about the problem from scratch to be valuable.
Epistemic status: Intuitively this seems very important. I have only limited empirical data on this producing insight: maybe 5 ideas of my own after thinking for a couple of hours.
Get as independent as possible
I expect the following advice to work best for people in John’s stream. As far as I understand, John’s goal is to create researchers who can discover new fruitful directions on their own and make progress on them. It might still be useful to people not in John’s stream.
Right now we don’t have a single approach that will obviously lead to a technical solution to the AI alignment problem, and I expect there are many promising directions that nobody has thought of so far. So it seems high-value to get people to work on research that is orthogonal to existing research agendas. That means people who can think for themselves and come up with their own research directions are highly valuable.
Epistemic status: I am pretty uncertain about the extent to which research directions are underexplored. My intuition tells me that it would be pretty bad if everybody just worked on existing agendas.
Attend the events
In my experience, the SERI MATS events (e.g. talks and workshops) were all pretty good. I skipped 2 or 3, and in retrospect that seems like a mistake. I recommend you attend at least the first 10 minutes of every event and then decide whether you want to leave. We had a wide range of researchers give presentations about their work, which was good for getting a sense of what other people are working on.
Focus on socializing in the beginning
Spending time getting to know the other people around you seems valuable. I am mainly thinking of the other SERI MATS scholars and the people in the Lightcone offices. Doing this at the beginning is probably better for obvious reasons: for example, you learn who knows what, so later on you know who might be able to answer a particular question you have.
Epistemic status: I did do this, though I did not really plan it out. I am somewhat uncertain how useful this is, though I am pretty sure it is positive.