Try this post. Basically, everything is too easy recently, with many roads leading to progress. And very recently, there are plausible roadmaps that don’t require any new discoveries (which would otherwise take an unknown amount of time), just some engineering. It’s no longer insane (though not yet likely; I’d give it 8%) for AGI to arrive within two years, even without secret projects or unexpected technical breakthroughs. (By AGI I mean the point where an AI becomes able to teach itself any skills that don’t fall out of its low-level learning algorithm on their own, but hasn’t yet learned much in this way.)
Alignment looks relatively hopeless, but at the same time, if AGI is something along the lines of ChatGPT, it’s more likely to be somewhat human-like and possibly won’t cause outright extinction, even if it takes away most of the resources in the future lightcone that humanity could have gotten for itself if it were magically much better at coordination and took its time to figure out alignment or uploading.