Me, too! My reasons are a bit more complex, because I think much progress will continue, and overhangs do increase risk. But in sum, I’d support a global scaling pause, or pretty much any slowdown. I think a lot of people in the middle would too. That’s why I suggested this as a possible compromise position. I meant to say that installing an off switch is also a great idea that almost anyone who’s thought about it would support.
I had been against a slowdown because it would create both hardware and algorithmic overhang, making takeoff faster, and it would re-roll the dice on who gets there first and how many projects reach it at roughly the same time.
But I now think a slowdown would focus effort on developing language model agents into full cognitive architectures on a trajectory to ASI. And that's the easiest alignment challenge we're likely to get. A slowdown would prevent jumping to the next, more opaque type of AI.