The (obvious) counter is that this doesn’t seem competitive, especially in the long run, but plausibly not even today. E.g. where would o1[-preview] fit? It doesn’t seem obvious how to build very confident quantitative safety guarantees for it (and especially for successors, for which incapability arguments will stop holding), so should it be banned / should this tech tree be abandoned (e.g. by the West)? OTOH, putting it under the ‘tool AI’ category seems like a bit of a stretch.
In conclusion, the potential of tool AI is absolutely stunning and, in my opinion, dramatically underrated. In contrast, AGI does not add much value at the present time beyond what tool AI will be able to deliver.
This seems very unlikely to me, e.g. if LLMs/LM agents are excluded from the tool AI category.
I predict that if the pro-replacement school ran a global referendum on this question, they would be disappointed by the result.
I’m not sure this line of reasoning has the force some people seem to assume. What do you think the results of similar hypothetical referendums would have been, e.g. before the industrial revolution and before the agricultural revolution, on those changes?
If you disagree with my assertion, I challenge you to cite or openly publish an actual plan for aligning or controlling a hybrid AGI system.
Not claiming to speak on behalf of the relevant authors/actors, but quite a few (sketches) of such plans have been proposed, e.g. The Checklist: What Succeeding at AI Safety Will Involve, The case for ensuring that powerful AIs are controlled.
Salut Bogdan!

I’m somewhat horrified by this comment. This hypothetical referendum is about replacing all biological humans by machines, whereas the agricultural and industrial revolutions did no such thing. If you believe in democracy, then why would you allow a tiny minority to decide to kill off everyone else against their will? I find such lackadaisical support for democratic ideals particularly hypocritical from people who say we should rush to AGI to defend democracy against authoritarian governments.
Salut Max!

I’m somewhat horrified by this comment. This hypothetical referendum is about replacing all biological humans by machines, whereas the agricultural and industrial revolutions did no such thing.
To clarify, I wouldn’t personally condone ‘replacing all biological humans by machines’ and I have found related e/acc suggestions quite inappropriate/repulsive.
If you believe in democracy, then why would you allow a tiny minority to decide to kill off everyone else against their will?
I don’t think there are easy answers here, to be honest. On the one hand, yes, allowing tiny minorities to take risks for all of [including future] humanity doesn’t seem right. On the other, I’m not sure it would necessarily have been right either to e.g. stop the industrial revolution, if a global referendum in the 17th century had come back with that answer. This is what I was trying to get at.
I find such lackadaisical support for democratic ideals particularly hypocritical from people who say we should rush to AGI to defend democracy against authoritarian governments
I don’t think ‘lackadaisical support for democratic ideals’ is what’s going on here (FWIW, I feel incredibly grateful to have been living in liberal democracies, knowing the past tragedies of undemocratic regimes, including in my home country not so long ago), nor am I (necessarily) advocating for a rush to AGI. I just think it’s complicated, and it will probably take nuanced cost-benefit analyses based on (ideally quantitative) risk estimates. If I could have it my way, my preferred global policy would probably look something like a coordinated, international pause during which a lot of automated safety research can be produced safely, combined with something like Paretotopian Goal Alignment. (Even beyond the vagueness) I’m not sure how tractable this mix is, though, and how it might trade off e.g. extinction risk from AI vs. risks from (potentially global, stable) authoritarianism. Which is why I think it’s not that obvious.
I don’t think that’s what Bogdan meant. I think if we took a referendum on AI replacing humans entirely, the population would be 99.99% against, far higher than the share that might’ve voted against the industrial revolution (and actually I suspect that referendum might’ve been in favor, since job loss only affected minorities of the population at any one point, I think).
Even the e/acc people accused of wanting to replace humanity with machines mostly don’t want that, when they’re read in detail. I did this with the “Beff Jezos” writings, since he’s commonly accused of being anti-human. He’s really not—he thinks humans will be preserved, or else machines will carry on human values. There are definitely a few people who actually think intelligence is the most important thing to preserve (Sutton), but they’re very rare compared to those who want humans to persist. Most of those like Jezos who say it’s fine to be replaced by machines are still thinking those machines would be a lot like humans, including having a lot of our values. And even those are quite rare. For the most part, e/acc, d/acc, and doomers all share a love of humanity and its positive potential. We just disagree on how to get there. And given how new and complex this discussion is, I hold hope that we can mostly converge as we sort through the complex logic and evidence.