i think we die probly this decade, and if not then probly next decade.
i partly explain my short timelines here. the short version is that i think recursive self-improvement (RSI) is not very hard to build. to kill everyone, you don’t need lots of compute (though it helps), which means you don’t need to be in a lab, which means you’re not affected by regulation unless the regulation is “all computers are banned”, and it’s not going to be. you don’t need to build “general” AI (whatever that means); you just need to build RSI. the hardest variable to predict is how many people are gonna be trying to build something like RSI, which is why my prediction is as vague as it is. i think doom from RSI could happen any day; it just gets more likely on future days than on, say, tomorrow, because more people will be trying, with more powerful computers and more powerful AI tech available to them.