I don’t mean “Discussion of timelines is not useful”. I mean it is not the central point, nor should it be the main part of the conversation.
Here are three quick variables that would naturally go into a timelines model, but that in fact are more important because of their strategic implications.
Can ML produce human-level AGI?
Timelines implications: If the current research paradigm will continue to AGI, then we can do fairly basic extrapolation to determine timelines.
Strategic implications: If the current research paradigm will continue to AGI, this tells us important things about what alignment strategies to pursue, what sorts of places to look for alignment researchers in, and what sort of relationship to build with academia.
How flexible are the key governments (US, Russia, China, etc)?
Timelines implications: This will let us know how much speed-up they can give to timelines (e.g. by funding, by pushing on race dynamics).
Strategic implications: This has a lot of impact in terms of how much we should start collaborating with governments, what information we should actively try to propagate through government, whether some of us should take governmental roles, etc.
Will an intelligence explosion be local or dispersed?
Timelines implications: If intelligence explosions can be highly local, it could be that a take-off is happening right now that we just can’t see, and so our timelines should be shorter.
Strategic implications: The main reason I might want to know about local vs. dispersed is that I need to know what sorts of information flows to set up between government, industry, and academia.
The word I’d object to in the sentence “Well, they are the decision-relevant question” is the word ‘the’. They are ‘a’ decision-relevant question, but not at all obviously ‘the’, nor even obviously one of the first five most important ones (I don’t have a particular list in mind, but I expect the top five are mostly questions about alignment agendas and the structure of intelligence).
---
(Also, I personally have never generated a timeline using a model; I do something like Focusing on the felt senses of which numbers feel right. This is the final step in Eliezer’s “do the math, then burn the math and go with your gut” thing.)
Yeah, I agree that it’s hard to make the most subtle intuitions explicit, and that nonetheless you should trust them. I also want to say that doing the math first is pretty useful ;-)