I think this is actually wrong, because synthetic data lets us control what the AI learns and what it values, and in particular lets us place honeypots that are practically indistinguishable from the real world.
This sounds less like the notion of the first critical try being wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?
Or is the position stronger: that we don’t need to solve the alignment problem in general, due to our ability to run simulations and use synthetic data?
This is kind of correct, but my point is that this shifts us from a one-shot problem in the real world to a many-shot problem in simulations based on synthetic data, before the AI gets unimaginably powerful.
We do still need to solve it, but it’s a lot easier to solve problems when you can turn them into many-shot problems.
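To make the many-shot framing concrete, here is a minimal toy sketch of the loop I have in mind. Every function and name here is a hypothetical stand-in, not any real training or evaluation API:

```python
# Toy sketch of the many-shot loop: train, probe with simulated
# honeypots, and only move on once the model passes every probe.
# All functions below are hypothetical stand-ins, not a real API.
import random

def train_on_synthetic_data(model, seed):
    # Stand-in for a training run on a curated synthetic corpus.
    return {"params": model["params"] + 1, "seed": seed}

def run_honeypot_suite(model, n_scenarios=1000):
    # Stand-in for evaluating the model in simulated scenarios that
    # contain honeypots; counts scenarios with acceptable behavior.
    rng = random.Random(model["seed"])
    return sum(rng.random() > 0.001 for _ in range(n_scenarios))

model = {"params": 0, "seed": 0}
for attempt in range(100):                 # many shots, not one
    model = train_on_synthetic_data(model, seed=attempt)
    if run_honeypot_suite(model) == 1000:  # demand a clean sweep
        print(f"passed on attempt {attempt}; safe to scale further")
        break
else:
    raise RuntimeError("never passed the honeypot suite; do not deploy")
```

Obviously a real honeypot suite would be vastly more elaborate; the point is only that a failure inside the loop is cheap and informative rather than catastrophic.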
Cool post. I agree with the many-shot part in principle. It strikes me that in a few years (hopefully not months?), this will look naive in a similar way that all the early thinking on how a well-boxed AI might be controlled looks naive now. If I understand correctly, these kinds of simulations would require a certain amount of slowing down and doing things that are slightly inconvenient once you hit a certain capability level. I don’t trust labs like OpenAI, DeepMind, or (maybe?) Anthropic to execute such a strategy well.
I think a crux here is that the synthetic data path is actually pretty helpful even from a capabilities perspective, because it lets you get much higher-quality data than existing data. Most importantly, in domains where you can exploit self-play, like math or coding, you can get very large capability gains from synthetic data sources. So I think the synthetic data strategy carries less of a capabilities tax than a whole lot of alignment proposals on LW.
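As a toy illustration of why the self-play domains are special, here is a hedged sketch of verifier-filtered synthetic data generation for coding; `sample_solution` is a hypothetical stand-in for a model call, not a real API. The key property is that a programmatic checker, not a human, decides what goes into the dataset:

```python
# Toy sketch of verifier-filtered synthetic data for coding:
# sample candidate programs and keep only those a checker verifies.
import random

def sample_solution(problem, rng):
    # Hypothetical stand-in for drawing a candidate program from a
    # model; here most candidates are deliberately wrong.
    offset = rng.randint(-2, 2)
    return lambda x: x + offset

def passes_tests(candidate, tests):
    return all(candidate(x) == y for x, y in tests)

rng = random.Random(0)
problem = "increment an integer"
tests = [(0, 1), (41, 42)]

dataset = []
for _ in range(100):
    cand = sample_solution(problem, rng)
    if passes_tests(cand, tests):        # the verifier, not a human,
        dataset.append((problem, cand))  # decides what gets kept

print(f"kept {len(dataset)} verified examples out of 100 samples")
```

Because correctness is checked mechanically, the kept examples are higher quality than anything scraped from the web, and this filtering loop is exactly the kind of process you would automate.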
Importantly, we may well be able to automate the synthetic data alignment process in the near future, which would make it even less of a capabilities tax.
To be clear, just because it’s possible and solvable doesn’t mean it’s easy; we still have our work cut out for us. It’s just that we’ve transformed it into a process where normal funding and normal science can actually solve the problem without further big breakthroughs or insights.
Then again, I do fear you might be right that the labs are under such competitive pressure, or at least value racing so highly, that they will not slow down even a little, or at least will not do any alignment work once superintelligence is reached.