Salut Max!

> I’m somewhat horrified by this comment. This hypothetical referendum is about replacing all biological humans by machines, whereas the agricultural and industrial revolutions did no such thing.
To clarify, I wouldn’t personally condone ‘replacing all biological humans by machines’, and I have found related e/acc suggestions quite inappropriate, even repulsive.
> If you believe in democracy, then why would you allow a tiny minority to decide to kill off everyone else against their will?
I don’t think there are easy answers here, to be honest. On the one hand, yes, allowing tiny minorities to take risks on behalf of all of humanity (including future generations) doesn’t seem right. On the other hand, I’m not sure it would necessarily have been right to, e.g., stop the industrial revolution if a global referendum in the 17th century had returned that answer. This is what I was trying to get at.
> I find such lackadaisical support for democratic ideals particularly hypocritical from people who say we should rush to AGI to defend democracy against authoritarian governments.
I don’t think ‘lackadaisical support for democratic ideals’ is what’s going on here (FWIW, I feel incredibly grateful to have lived in liberal democracies, knowing the past tragedies of undemocratic regimes, including in my home country not so long ago), nor am I (necessarily) advocating a rush to AGI. I just think it’s complicated, and that it will probably take nuanced cost-benefit analyses based on (ideally quantitative) risk estimates. If I could have it my way, my preferred global policy would probably look something like a coordinated international pause, during which a lot of automated safety research can be produced safely, combined with something like Paretotopian Goal Alignment. Even setting the vagueness aside, I’m not sure how tractable this mix is, though, or how it would trade off e.g. extinction risk from AI against risks from (potentially global, stable) authoritarianism. Which is why I think it’s not that obvious.
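To make the ‘quantitative cost-benefit’ point a bit more concrete, here is a toy sketch (in Python, with entirely made-up placeholder probabilities and weights, not estimates I actually endorse) of the kind of comparison I have in mind between a pause-style policy and a race-style policy:

```python
# Toy expected-disvalue comparison between two hypothetical policies.
# All numbers below are illustrative placeholders, not real risk estimates.

def expected_disvalue(p_extinction: float, p_lock_in: float,
                      w_extinction: float = 1.0, w_lock_in: float = 0.8) -> float:
    """Weighted sum of two catastrophic outcomes under one policy:
    extinction from AI, and stable authoritarian lock-in."""
    return w_extinction * p_extinction + w_lock_in * p_lock_in

# A coordinated pause might cut extinction risk but raise the chance
# that an authoritarian actor gains a durable lead; racing, the reverse.
pause = expected_disvalue(p_extinction=0.05, p_lock_in=0.10)
race = expected_disvalue(p_extinction=0.15, p_lock_in=0.05)

print(f"pause: {pause:.3f}, race: {race:.3f}")
# The point is not the particular output: the ranking flips as the
# placeholder probabilities and weights change, which is exactly why
# the trade-off isn't obvious without serious quantitative estimates.
```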