Refine: An Incubator for Conceptual Alignment Research Bets
I’m opening an incubator called Refine for conceptual alignment research in London, hosted by Conjecture. The program is a three-month, fully paid fellowship that helps aspiring independent researchers find, formulate, and get funding for new conceptual alignment research bets: ideas promising enough to try out for a few months to see whether they have more potential.
If this sounds like something you’d be interested in, you can apply here!
Why?
I see a gaping hole in the alignment training ecosystem: there are no programs dedicated specifically to creating new independent conceptual researchers and helping them build original research agendas.
The programs that do exist (AI Safety Camp, SERI MATS) tend to focus on an apprenticeship (or “accelerated PhD”) model in which participants work under researchers on already-established research directions. And while there are avenues for independent alignment researchers to get started on their own, that path is fraught with risks that slow progress considerably.
So I feel the need for a program geared specifically towards conceptual alignment researchers who are interested in doing their own research and making their own research bets.
Who?
This program is for self-motivated and curious people who want to become independent conceptual alignment researchers and expand the portfolio of alignment bets and research ideas available.
When I look at great conceptual researchers like John Wentworth, Paul Christiano, Evan Hubinger, Steve Byrnes, Vanessa Kosoy, and others, as well as at the good (famous and not) researchers I know from my PhD, they all have one thing in common: they ask a question and keep looking for the answer. They tolerate confusion, not in the sense that they accept it, but in that they are able to work with it rather than hide behind premature formalization. They don’t give up on the problem; they search for different angles and approaches until it yields. Paul Graham calls this being relentlessly resourceful.
(Relentlessly Resourceful, Paul Graham, 2009)
I was writing a talk for investors, and I had to explain what to look for in founders. What would someone who was the opposite of hapless be like? They’d be relentlessly resourceful. Not merely relentless. That’s not enough to make things go your way except in a few mostly uninteresting domains. In any interesting domain, the difficulties will be novel. Which means you can’t simply plow through them, because you don’t know initially how hard they are; you don’t know whether you’re about to plow through a block of foam or granite. So you have to be resourceful. You have to keep trying new things.
This is one of the main traits I’m looking for in an applicant — someone who will lead a new research agenda and morph it proactively, as needed.
Another point that matters is being curious about different topics and ideas than the ones traditionally discussed in alignment. As I wrote in a recent post and plan to discuss more in an upcoming sequence, I think we need to be more pluralist in our approach to alignment, and explore far more directions, from novel ideas to old approaches that may have been discarded too soon. And new ideas often come from unexpected places.
As one example, here is what Jesse Schell writes about his experience speaking to a professional juggler who performed tricks no one else could do:
(The Art of Game Design, Jesse Schell, 2008)
“The secret is: don’t look to other jugglers for inspiration—look everywhere else.” He proceeded to do a beautiful looping pattern, where his arms kind of spiraled, and he turned occasional pirouettes. “I learned that one watching a ballet in New York. And this one...” he did a move that involved the balls popping up and down as his hands fluttered delicately back and forth. “I learned that from a flock of geese I saw take off from a lake up in Maine. And this,” he did a weird mechanical looking movement where the balls almost appeared to move at right angles. “I learned that from a paper punch machine on Long Island.” He laughed a little and stopped juggling for a minute. “People try to copy these moves, but they can’t. They always try… yeah, look at that fella, over there!” He pointed to a juggler with a long ponytail across the gym who was doing the “ballet” move, but it just looked dumb. Something was missing, but I couldn’t say what.
“See, these guys can copy my moves, but they can’t copy my inspiration.”
As for previous experience with alignment research, it can be both a blessing and a curse. While familiarity with alignment concepts can help bootstrap the learning and idea-generation process, it also risks clogging the babble process by constraining “what makes sense”. For those who would benefit from it, the program includes some initial teaching on core alignment ideas (according to me) and the mental moves necessary for good alignment research.
Some concrete details
We plan to invite the first cohort of 4-5 fellows from July/August through September/October (wiggle room depending on some ops details), though exact dates will be determined by their availability. We anticipate that other cohorts will follow, so if you miss the first round but are still interested, please apply.
This is a full-time position in London where fellows will work out of Conjecture’s offices. The program includes:
Travel and Housing: Round-trip plane/train tickets to and from London, housing for the duration of the program, as well as public transportation within London.
Stipend: A stipend of ~$3,000/month (after tax) to cover meals and discretionary expenses.
Office Infrastructure: A desk in the Conjecture office (and tech setup when needed) and access to Conjecture’s conference rooms and other amenities.
Collaboration: Formal opportunities to discuss research directions with other conceptual and applied alignment researchers and engineers at Conjecture, and opportunities to meet and share ideas with other London-based alignment researchers.
Funding Assistance: Help in finding funding opportunities and in writing grant proposals for continuing to study research bets after the incubator.
During the first month of the program, participants will spend their time discussing abstract models of alignment, what the problem is about, and the different research approaches that have been pursued. The focus will be on understanding the assumptions and constraints behind the different takes and research programs, to get a high-level map of the field.
The next ~two months of the program will focus on helping fellows babble new research bets on alignment, refine them, test them, and either throw them away or change them. By the end, the goal is for each fellow to home in on a research bet that is promising enough to warrant funding and could be investigated further over the following six months.
It’s worth noting that while the incubator is housed by Conjecture, the company imposes no constraints on fellows. Fellows will not have to work on Conjecture’s research agendas, nor will they be obligated to collaborate after the program is over. Similarly, I’m not looking for people to work on my own research ideas, but for exciting new research bets I wouldn’t have thought of myself.
How can I apply?
We will review applications on a rolling basis, typically responding within a week and reaching a decision within a month (with a work task in between). The application is open now!
Guarding against the habits that hide positive feedback: Thanks. I mean it.
Thanks for making your positive feedback visible. ;)
Great news! I have to change the post I was drafting about unfilled niches :)
Sorry to make you work more, but happy to fill a much-needed niche. ^^
Wholeheartedly agree, and I think it’s great that you’re doing this.
I’ll be very interested in what you learn along the way w.r.t. more/less effective processes.
(Bonus points for referencing the art of game design—one of my favourite books.)
Thanks! Yes, this is very much an experiment, and even if it fails, I expect it to be a productive mistake we can learn from. ;)
I’m really excited about the outcomes you describe: more relentlessly resourceful independent researchers exploring a wider range of options. I do feel a bit concerned that your search for good applicants is up against a challenge: I think both the intelligence necessary to produce good results and the agentiveness needed to become relentlessly resourceful with training are rare and largely determined early in life. Given this, a lot of such people will already be quite absorbed in profitable paths by the time they are college graduates. So it makes me wonder whether you should look for young people who are already remarkably successful in life, and try to recruit them in particular...
Random thought I had about this: IIRC, the science of skill transfer between fields shows it doesn’t really happen except in people with a high degree of mastery. (Cite: Ultralearning or Peak mentions this, I think?)
Might be something to look into for Refine: a master of X could be significantly better at transferring insights from X to Y.
Are you accepting minors for this program?
I think this is something we will have to address on a case by case basis. By default I would say probably no, but for really brilliant minors, there might be an option.
Not promising anything, but if you know anyone in this situation they should apply; the application isn’t long at all.
The form at this link <https://docs.google.com/forms/d/e/1FAIpQLSdU5IXFCUlVfwACGKAmoO2DAbh24IQuaRIgd9vgd1X8x5f3EQ/closedform> says “The form Refine Incubator Application is no longer accepting responses.
Try contacting the owner of the form if you think this is a mistake.”
so I suggest changing the parts that say to sign up into a note that applications are no longer being accepted.