Not all accelerationism is based on some version of ‘well, what else are we going to do, you’ve strangled everything else and someone please help me my family civilization is dying’ or ‘I literally can’t envision a positive future anymore, so why not roll the dice.’
@Zvi I have to ask. In what timeline, with solely human intelligence and weak AI, does this trend reverse itself?
I simply don’t see it. Humans are too stupid to ever relax the regulatory ratchet, because the argument for a regulation sounds more convincing than the argument to relax it. Especially when government/high-status institutions get to argue for restrictions (that happen to empower and guarantee their own jobs), while the argument against them usually comes from those with lower status.
AI research is difficult to impede because of the race mechanics/easy reproducibility in separate regulatory regimes, so it can actually proceed. Building more housing or balancing the budget or researching treatments for aging at a serious level? Impossible.
I’m an accelerationist for three main reasons.
(1) I dropped out of medical school, but not before witnessing that it’s considered standard practice to have no plan at all for the fact that every patient is eventually going to die. Hospitals exist to collect reimbursements, and their practitioners, who were all supposedly trained in empirical science, never even try; hell, they don’t even research cryogenically freezing any of their patients. This is so incredibly stupid that I can’t imagine any solution other than basically being able to fire everyone: if you had a powerful enough set of ASIs, you could start your own hospital and deliver medicine that actually worked.
(2) It’s not at all clear that, with constructible levels of compute and the actually available data, a being “smarter than us” would be smarter by the kind of margins some are assuming. The utility of greater intelligence on real problems grows only logarithmically, and this agrees with all the AI research data I am aware of. The “sudden capability jumps” seem to be illusions. https://www.lesswrong.com/posts/qpgkttrxkvGrH9BRr/superintelligence-is-not-omniscience and other posts show that there are ultimate limits even for a perfect cognitive system. What this means is that narrow superintelligences designed to stay focused on the problems we want to solve, with various forms of myopia in their agent designs, may in fact be controllable, with any attempts to break out or manipulate us failing trivially because they would not have received sufficient training data to succeed, nor would they have enough information about the target computer or target person to succeed.
(3) If you try to play this out a little: if the default case is that every human is going to die, then future worlds are either “machine men” (AIs that have all of our writings and images and DNA in files they learned from, and some kind of civilization) or “men machines” (humans won, they built aligned AI and didn’t die, but the optimization pressures of future society turn everyone into immortal cyborgs with fewer and fewer organic parts).
If you’re dead, the two outcomes are indistinguishable, and it’s hard to see how they are really any different. Either outcome is “every human alive now is dead, but the information that made us human still exists.”
So yes, the cruxes are: the default case is that everyone is going to die. It doesn’t matter what your age is; the medical-research establishment as practiced by humans today will not develop a treatment for aging before a newborn child alive today dies. And compute, especially inference compute, is so scarce today that if we had ASI right now, it would take several decades, even with exponential growth, to build enough compute for ASIs to challenge humanity.
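(A minimal sketch of the arithmetic behind that last claim, purely for illustration: the scale factors and growth rates below are assumptions, not figures from this exchange. How long it takes to build “enough” compute depends entirely on how much more compute you think is needed and how fast you think the total stock of compute grows.)

```python
import math

# Toy calculation: years for the total stock of compute to grow by a given
# factor at a fixed annual growth rate. Both the factors and the rates are
# assumptions chosen only to show how sensitive the "several decades" claim is.

def years_to_scale(scale_factor: float, annual_growth: float) -> float:
    """Years for compute to grow by `scale_factor` at `annual_growth` per year
    (0.35 means +35% per year)."""
    return math.log(scale_factor) / math.log(1.0 + annual_growth)

for scale_factor in (100, 1_000, 100_000):
    for annual_growth in (0.35, 1.0, 3.0):  # +35%/yr, doubling yearly, 4x yearly
        print(f"{scale_factor:>7,}x more compute at {annual_growth:.0%}/yr: "
              f"~{years_to_scale(scale_factor, annual_growth):.0f} years")
```

Under these toy numbers the answer ranges from a few years to several decades, which is why the estimate hinges entirely on the assumed growth rate and on how much compute an ASI would actually need.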
I don’t see the reason for this defeatism—not on housing where YIMBY is actively winning some battles and gaining strength, not on aging where there might not be as much research as we’d like but there’s definitely research and it will improve over time. As for balancing the budget, we did it as recently as the 1990s and also it’s not obvious why we need to care about that.
So basically, on your (1) I’d say yes, we agree there are upsides, but I don’t see how that adds up to enough to justify the risks; and on (2) I disagree strongly with the premise, but even if you are right, we would still end up dead, just slightly more slowly, as your (3) suggests.
If your opinion is, roughly, ‘I don’t care if humans continue to exist once I am dead’ then that would be a crux, yes. If I didn’t care about humans existing after my death, I would roll the dice too.
And compute, especially inference compute, is so scarce today that if we had ASI right now, it would take several decades, even with exponential growth, to build enough compute for ASIs to challenge humanity.
Uhm, what? “Slow takeoff” means ~1 year… Your opinion is very unusual; you can’t just state it without any justification.
This is almost impossibly unlikely to produce good outcomes; this is selecting for speed by its ability to avoid our current means of alignment.