This is my current take about where we’re at in the world:
Deep learning, scaled up, might be basically enough to get AGI. There might be some additional conceptual work necessary, but the main difference between 2020 and the year in which we have transformative AI is that in that year, the models are much bigger.
If this is the case, then the most urgent problem is strong AI alignment + wise deployment of strong AI.
We’ll know if this is the case in the next 10 years or so, because either we’ll continue to see incredible gains from increasingly bigger Deep Learning systems or we’ll see those gains level off, as we start seeing decreasing marginal returns to more compute / training.
If deep learning is basically not sufficient, then all bets are off. In that case, it isn’t clear when transformative AI will arrive.
This may meaningfully shift priorities, for two reasons:
It may mean that some other countdown will reach a critical point before the “AGI clock” does. Genetic engineering, or synthetic biology, or major geopolitical upheaval (like a nuclear war), or some strong form of civilizational collapse could upset the game-board before we get to AGI.
There is more time to pursue “foundational strategies” that only pay off in the medium term (30 to 100 years). Things like improving the epistemic mechanism design of human institutions (including governmental reform), human genetic engineering projects, or plans to radically detraumatize large fractions of the population.
This suggests to me that I should, in this decade, be planning and steering for how to robustly-positively intervene on the AI safety problem, while tracking the sideline of broader Civilizational Sanity interventions that might take longer to pay off, and planning to reassess every few years to see whether we’re getting diminishing marginal returns to deep learning yet.
(This question is only related to a small point)
You write that one possible foundational strategy could be to “radically detraumatize large fractions of the population”. Do you believe that:
1. A large part of the population is traumatized?
2. That trauma is reversible?
3. Removing/reversing that trauma would drastically improve the development of humanity?
If yes, why? I’m happy to get a 1k page PDF thrown at me.
I know that this has been a relatively popular talking point on twitter, but without a canonical resource, and I also haven’t seen it discussed on LW.
I was wondering if I would get comment on that part in particular. ; )
I don’t have a strong belief about your points one through three, currently. But it is an important hypothesis in my hypothesis space, and I’m hoping that I can get to the bottom of it in the next year or two.
I do confidently think that one of the “forces for badness” in the world is that people regularly feel triggered or threatened by all kinds of different proposals, and reflexively act to defend themselves. I think this is among the top three problems in having good discourse and cooperative politics. Systematically reducing that trigger response would be super high value, if it were feasible.
My best guess is that that propensity to be triggered is not mostly the result of infant or childhood trauma. It seems more parsimonious to posit that it is basic tribal stuff. But I could imagine it having its root in something like “trauma” (meaning it is the result of specific experiences, not just general dispositions, and it is practically feasible, if difficult, to clear or heal the underlying problem in a way that completely prevents the symptoms).
I think there is no canonical resource on trauma-stuff because 1) the people on twitter are less interested, on average, in that kind of theory-building than we are on LessWrong, and 2) mostly those people are (I think) extrapolating from their own experience, in which some practices unlocked subjectively huge breakthroughs in personal well-being / freedom of thought and action.
Does that help at all?
I plan to blog more about how I understand some of these trigger states and how they relate to trauma. I do think there’s a decent amount of written work, not sure how “canonical”, but I’ve read some great stuff from sources I’m surprised I haven’t heard more hype about. The most useful stuff I’ve read so far is the first three chapters of this book. It has hugely sharpened my thinking.
I agree that a lot of trauma discourse on our chunk of twitter is more focused on the personal experience/transformation side, and doesn’t lend itself well to bigger Theory of Change type scheming.
http://www.traumaandnonviolence.com/chapter1.html
Thanks for the link! I’m going to take a look!
Yes, it definitely does – you just created the resource I will link people to. Thank you!
Especially the third paragraph is cruxy. As far as I can tell, there are many people who have (to some extent) defused this propensity to get triggered for themselves. At least for me, LW was a resource to achieve that.