My timelines are roughly 50% probability on something like transformative AI by 2030, 90% by 2045, and a long tail afterward. I don’t hold this strongly, and my views on alignment are mostly decoupled from these beliefs. But if we do get an AI winter lasting longer than that (through means other than government intervention, which I haven’t accounted for), I should lose some Bayes points, and it seems worth saying so publicly.
To be clear, a “winter/slowdown” in my typology is more about the vibes and could amount to only a few years of counterfactual slowdown. The dot-com crash, for instance, didn’t take that long for companies like Amazon or Google to recover from, but it was still a huge vibe shift.