The current 80,000 Hours list of the world’s most pressing problems ranks AI safety as the number one cause in its highest-priority areas section.
AI safety is not the world’s most pressing problem. It is a symptom of the world’s most pressing problem: our unwillingness or inability to learn how to manage the pace of the knowledge explosion.
Our outdated relationship with knowledge is the source. Nuclear weapons, AI, genetic engineering, and other technological risks are symptoms of that problem. EA writers continually confuse sources with symptoms.
To make this less abstract, consider a factory assembly line. The factory is the source. The products rolling off the end of the assembly line are the symptoms.
EA writers (and the rest of the culture) insist on focusing on each product as it comes off the end of the assembly line, while the line itself keeps accelerating. While you’re focused on the latest shiny product to come off the assembly line, the line is ramping up to overwhelm you with a tsunami of new products.
(I clicked through to see your other comments after disagreeing with one. Generally, I like your comments!)
I think that EA writers and culture are less “lost” than you think, on this axis. I think that most EA/rationalist/x-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem, and when they talk about “AI safety” and so forth, they’re somewhat shorthanding this.
Like, I think most of the people around here are, in fact, worried about some of the products rolling off the end of the assembly line, but would also pretty much immediately concur with you that the assembly line itself is the root problem, or at least equally important.
I can’t actually speak for everybody, of course, but I think you might be docking people more points than you should.
Hi Duncan, thanks for engaging. You wrote:

I think that EA writers and culture are less “lost” than you think, on this axis. I think that most EA/rationalist/x-risk-focused people in this subculture would basically agree with you that the knowledge explosion/recursive acceleration of technological development is the core problem
Ok, where are their articles on the subject? What I see so far is a ton of articles about AI, and nothing about the knowledge explosion unless I wrote it. I spent almost all day, every day, for a couple of weeks on the EA Forum and observed the same thing there.
That said, I’m here because the EA community is far more interested in x-risk than the general culture or the vast majority of intellectual elites are, and I think that’s great. I’m hoping to contribute by directing some attention away from symptoms and towards sources. This is obviously a debatable proposition, and I’m happy to see it debated.