I see people stretching “pivotal act” to mean “things that delay AGI for a few years or decades”, which isn’t what the term is meant to mean.
Well, that can be a pivotal act if you pair “thing that delays AGI for a few years or decades” with some mechanism for leveraging a few years or decades into awesome long-term outcomes. E.g., if you have a second plan that produces an existential win iff humanity survives for 6 years post-AGI, then 6 years suffices.
But you do actually need that second component! Or you need a plan that lets you delay as long as needed (e.g., long enough to solve the full alignment problem for sovereign Friendly AI). The EY pivotal acts I’ve seen fall in the “this lets you delay as long as needed” bucket.
(Which I like because “we’ll be able to do very novel thing X within n years” feels to me like the kind of assumption that reality often violates....)
Yeah, the people saying this are definitely not doing the "pair it with an actual solution" step, and when I've previously brought this up with them in person, they kinda had an "oh...." reaction, like it was a real update to them that this was also required.
Came here to say this.
Specifically, the pivotal act I most often default to is something like:
Delay AGI by 10 years
Use those ten years to solve the alignment problem
Since I expect that the alignment problem will be a lot easier once we know what AGI looks like, but not so easy that you can solve it in the ~6-month lead time that OpenAI currently has over the rest of the world.