Personally, I’m preparing for AI fizzle the same way I’m preparing for winning the lottery. Not expecting it, but very occasionally allowing myself a little what-if daydreaming.
My life would be a lot more comfortable, emotionally and financially, if I weren’t pushing myself so hard to try to help with AI safety.
Like, a lot. I could go back to having an easy, well-paying job that I only focused on from 9 to 5; my wife and I could start a family; I could save for retirement instead of blowing through my savings trying to save the world.
“You’re working so hard to defuse this bomb! What will you do if you succeed?!”
“Go back to living my life without existing in a state of fear and intense scrambling to save us all! What do you even mean?”
What if there’s an industry crash despite TAI being near?
I’ve thought about this question a fair amount.
It’s obviously easier to make claims now about what I would do than to actually make hard choices when the situation arises. But twenty years ago the field was what, a couple of people for the most part? None of them were working for very much money, and my guess is that they would’ve done so for even less if they could get by. If all funding for the work I think is important dries up (and p(AI goes well) hasn’t significantly changed and timelines are still short), what I hope I would do is try to get by on savings.
I know this isn’t realistic for everyone, nor do I know whether I would actually endorse this as a strategy for other people, but to me it seems clearly the better choice under those conditions[1].
[1] Modulo considerations like all TAI-relevant research being done by labs that don’t care about alignment at all, or independent work being completely futile for other reasons.
Then I say “phew” and go back to working a normal job while doing alignment research as a hobby. I personally have lots of non-tech skills, so it doesn’t worry me. If you are smart and agentic enough to be helping meaningfully with AI safety, you are smart enough to respec into a new career if need be.
I think you misread the comment you’re replying to? I think the idea was that there’s a crash in companies commercialising AI, but TAI timelines are still short.
Oh, yes, I think you’re right, I did misunderstand. Yeah, my current worries have a probability peak around “random coder in basement lucks into a huge algorithmic efficiency gain”. This could happen despite the AI tech industry crashing, or could lead to a crash (via loss of moat).

What then?

All the scenarios that come after that, if the finding gets published, seem dark and chaotic: a dangerous multipolar race between a huge number of competitors, an ecosystem in which humanity is very much disempowered.

I’m not sure there’s any point in preparing for that, since I’m pretty sure it’s out of our hands at that point. I do think we can work to prevent it, though. The best defense I can think of against such a situation is to check, as much as we can, that there are no surprises like that awaiting us.

That strategy brings its own dangers, of course. If you have people checking for the existence of game-changing algorithmic breakthroughs, what happens after the search team finds something? I think you need trustworthy people doing the search in a cautious way, and a censored simulation in a sandbox for studying the candidate models.