Appealing to our ignorance can be used to support the position that we should have huge error margins in our predictions of AI extinction risk. Say, that our probability of AI doom should be somewhere between 25% and 75%, because something completely unpredictable could happen and throw out all our calculations.
If this is your position, I think I mostly agree. Though even a 25% chance of extinction is a lot, so it makes all the sense to tread carefully.
Extraordinary claims require extraordinary evidence. I can easily accept that by scaling up capabilities with the current architecture we may end up creating something that can accidentally or intentionally kill a lot of people. Maybe a majority of people. But this is different from extinction, where no one survives because of an engineered virus, nanobots turning us into goo, the Earth being converted into computronium, or whatever. Total extinction is an extraordinary claim. It is definitely possible, but it is a very large extrapolation from where we are now and from what we can see from here into the future. Sure, species have been made extinct before, both intentionally and accidentally, and many are going extinct all the time due to human activity and for other reasons. Humans are pretty well-adapted buggers, though, hard to exterminate completely without actually focusing on it. A MaddAddam-style event is a possibility even without RSI and superintelligence, but I don’t think that is what the doomers mean.