Then I say “phew” and go back to working a normal job while doing alignment research as a hobby. I personally have lots of non-tech skills, so it doesn’t worry me.
If you are smart and agentic enough to be helping meaningfully with AI safety, you are smart enough to respec into a new career as need be.
I think you misread the comment you’re replying to? I think the idea was that there’s a crash in companies commercialising AI, but TAI timelines are still short.
Oh, yes, I think you’re right, I did misunderstand. Yeah, my current worries have a probability peak around “random coder in basement lucks into a huge algorithmic efficiency gain”. This could happen despite the AI tech industry crashing, or could lead to a crash (via loss of moat).
What then?
All the scenarios that come after that, if the finding gets published, seem dark and chaotic: a dangerous multipolar race among a huge number of competitors, an ecosystem in which humanity is very much disempowered.
I’m not sure there’s any point in preparing for that, since I’m pretty sure it’s out of our hands at that point.
I do think we can work to prevent that, though. The best defense I can think of against such a situation is to check, as much as we can, that there are no surprises like that awaiting us.
Which is a strategy that brings its own dangers. If you have people checking for the existence of game-changing algorithmic breakthroughs, what happens after the search team finds something?
I think you need to have trustworthy people doing the search in a cautious way, and have a censored simulation in a sandbox for studying the candidate models.