Whew! That’s pretty intense, and pretty smart. I didn’t read it all because I don’t have time and I’m not in the same emotional position you’re in.
I do want to say that I’ve thought and researched a great deal about the situation we’re in with AI, as a world and as a society. I feel a lot more uncertainty about outcomes than you seem to. AI and AGI are going to create very, very major changes. There’s a ton of risk of different sorts, but there’s also a very real chance (relative to what we reliably know) that things get way, way better after AGI, if we can align it and manage the power-distribution issues. And I think there are very plausible routes to both of those.
This is primarily based on uncertainty. I’ve looked in depth at the arguments for pessimism about both alignment and societal power structures. They are just as incomplete and vibes-based as the arguments for optimism. There’s a lot of real substance on both sides, but not enough to draw firm conclusions. Some very well-informed people think we’re doomed; other equally well-informed people think the odds favor a vastly better future. We simply don’t know how this is going to turn out.
There is still time to hope, and to help.
See my “If we solve alignment, do we die anyway?” and the other posts and sources I link there for more on all of these claims.
We don’t know, but isn’t that kinda the point? If you’re gambling with some 10 billion human lives, any probability of drastically unpredictable, possibly negative, paradigm-obliterating outcomes above 0.00% is morally unacceptable, and should be seen as just cause to slow this headlong rush toward AGI. The biggest decision in history, with consequences for so many, is the least appropriate decision to be made so utterly unilaterally, by so few.

If this goes ahead, and a handful of Silicon Valley CEOs and shareholders impose a different and unpredictable future on 10 billion humans without consulting them, without asking anyone’s permission to gamble with the future of our entire species, it will be the most atrocious, profound disenfranchisement of human beings, and the biggest breach of democracy and human rights, in history, by a long way, simply because of how many are affected. The immorality of imposing AGI on humanity without first knowing more about what might happen is, in this case, not determined by the percentage likelihood of a bad future. It is made unacceptable by the scale of human (and other Earthling) life that would be affected by any radical outcomes, and permutations thereof, at all.

Given that nobody is yet able to predict what will happen, surely the most sensible thing to do is slow this bullet train a little: have many long conversations, run complex simulations, build the ethical frameworks necessary for the safe emergence of sentient AI so we don’t mess up first contact (if we haven’t already), and so forth. Except for the demands of the profiteers, why the rush? It is flagrantly irresponsible, criminally risky behaviour for any company to accelerate all of us toward this precipice when nobody can yet say with any assuredness how safe it will be when we get there.

Logic dictates that the many stand up and make ourselves heard. You do not get to decide the fate of billions in the individualist pursuit of wealth and power. Not this time. There’s absolutely no risk in slowing down, and all the risk in thinking this is a race. Such foolhardy, myopic, illogical behaviour cannot be permitted to chart the course for the rest of us.