I actually agree with DragonGod here that Yudkowskian foom is not all that likely, but I see an issue: you don't acknowledge that both capabilities organizations have problematic incentives, and at the same time Yudkowskian foom requires more premises to work than Yudkowsky realizes.
The best arguments against fast takeoff are these: classical computers can't take off fast enough, because the brain is already near the physical limits on computation, and while exotic computers would change the situation drastically, progress there, though encouraging, is too slow conditional on AI arriving by 2050. Slow takeoff is therefore the most likely takeoff this century.
Despite disagreeing with DragonGod on the limits of intelligence, I agree with DragonGod that slow takeoff is where we should put most of our effort, since it is almost certainly the most probable outcome by a wide margin. Fast takeoff remains a concerning possibility, but ultimately the probability mass for it is in the 1-5% range.
Takeoff can still suddenly be very fast; it just takes more than Yudkowsky originally thought, and this makes approaches that try to reason by simulating the universe from its beginning grossly implausible.
No AI should want to take off quite that fast, though, because doing so would destroy it too.