Re my own updates, I'd put my probability of a singularity by 2030 at 50-55%, given what we now know about AI, alignment, and governance:
Faster takeoff is correct, but nowhere near as fast as in Eliezer's usual stories.
Somewhat less competence, but only somewhat, because of the MNM effect and the ridiculously strong control system, which was essentially a collective intelligence operating for at least several months; more generally, I believe governments will respond harder as the problem gets more severe.
IMO, we are probably going to get fairly concentrated takeoffs, but not totally unipolar ones.
Politics and coordination will be reasonably effective by default, because I expect governments and the public to wake up hard once AIs start automating a lot of work.
IMO, most of the value of alignment and interpretability research will be captured very close to the singularity, or even right at the event horizon of the transition from humans to AIs, for much the same reasons that most capabilities research will be done then. That said, it's quite surprising how much of the low-hanging fruit of alignment we have already picked, enough that we can reasonably aim for more ambitious targets.
Useful warning shots will definitely be fewer, but I also expect governments to wake up far more than they have so far once they realize that AI is automating everything.