What is your AI Capabilities Red Line Personal Statement? It should read something like “when AI can do X in Y way, then I think we should be extremely worried / advocate for a Pause*”.
I think it would be valuable if people started doing this; we can't feel when we're on an exponential, so it's likely that powerful AI will creep up on us.
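As a rough sketch of the "creep up" intuition (the doubling time, starting level, and red line below are all made-up numbers for illustration, not claims about any real system): anything that doubles at a fixed rate spends most of its history looking negligible, then crosses any fixed red line within a few doublings.

```python
# Minimal sketch: why exponential capability growth "creeps up".
# All numbers are illustrative assumptions, not measurements.

doubling_time_months = 6   # assumed doubling time
red_line = 100.0           # some fixed "red line" capability level
capability = 0.1           # assumed starting level, far below the line

month = 0
while capability < red_line:
    month += doubling_time_months
    capability *= 2
    print(f"month {month:3d}: capability = {capability:8.1f}")

# With these numbers, capability stays under 10% of the red line for
# the first ~40 months, then goes from ~10% past the line in the
# final ~18 months. Most of the visible change arrives at the end.
```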
@Greg_Colbourn just posted this, and I have an intuition that people are going to read it and say, "while it can do Y, it still can't do X".
*in the case that you think a Pause is ever optimal.
A Pause doesn't stop capabilities at the currently demonstrated level. In that sense, GPT-2 might have been a prudent threshold (in a saner civilization) for coordinating to halt improvement in semiconductor technology and to limit production capacity for better nodes (similar to not building too many centrifuges for enriching uranium, as a first line of defense against nuclear winter). If not GPT-2, when the power of scaling wasn't yet obvious to most, then certainly GPT-3.
In our world, I don't see it happening other than in response to a catastrophe where there are survivors; the good outcomes our civilization might be capable of reaching lack dignity and don't involve a successful Pause. A Pause must end, so the conditions for a Pause need to be sensitive to new developments that make Pausing no longer necessary, which could also be a problem in a survivable-catastrophe world.