“I’ve come to the conclusion that it is impossible to make an accurate prediction about an event that’s going to happen more than three years from the present, including predictions about humanity’s end.”
Correct. Eliezer has said this himself; check out his outstanding post “There’s No Fire Alarm for AGI”. However, you can still assign a probability distribution to it. Say, I’m 80% certain that dangerous/transformative AI (I dislike the term AGI) will happen in the next couple of decades.
So the matter turns out to be just as urgent, even if you can’t predict the future. Perhaps such uncertainty only makes it more urgent.
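For concreteness, here’s a toy sketch of what I mean by a distribution rather than a point prediction. The exponential arrival model and the 20-year horizon are purely my own illustrative assumptions, not anything Eliezer or anyone else has endorsed:

```python
import math

# Toy model: treat the arrival time T (in years) of dangerous/transformative AI
# as exponentially distributed, and pin the distribution down with one stated
# belief: P(T <= 20) = 0.8, i.e. "80% certain within the next couple of decades".
p_within_horizon = 0.8
horizon_years = 20

# Solve 1 - exp(-rate * horizon) = p for the yearly arrival rate.
rate = -math.log(1 - p_within_horizon) / horizon_years

# A few quantities this (purely illustrative) model implies.
median_years = math.log(2) / rate            # year by which the model puts 50%
p_within_5_years = 1 - math.exp(-rate * 5)   # probability of arrival within 5 years

print(f"implied yearly arrival rate: {rate:.3f}")                    # ~0.080
print(f"implied median arrival time: {median_years:.1f} years")      # ~8.6
print(f"implied P(arrival within 5 years): {p_within_5_years:.2f}")  # ~0.33
```

Nothing hangs on the exponential shape; the point is just that a single hedged belief like “80% within a couple of decades” already pins down the rest of the curve once you pick a model, and you can act on that without claiming to know the year.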
“…I believe that the most important conversation will start when we actually get close to developing early AGIs (and we are not quite there yet), this is when the real safety protocols and regulations will be put in place, and when the rationalist community will have the best chance at making a difference. This is probably when the fate of humanity will be decided, and until then everything is up in the air.”
Well, first, like I said, you can’t predict the future, i.e., There’s No Fire Alarm for AGI. So we might never know that we’re close until we get there. That has happened with other transformative technologies before.
Second, even if we could, we might not have enough time by then. Alignment seems to be pretty hard. Perhaps intractable. Perhaps outright impossible. The time to start thinking of solutions and implementing them is now. In fact, I’d even say that we’re already too late. Given such a monumental task, I’d say that we would need centuries, not the few decades that we might have.
You’re about the third person I’ve responded to in this post saying “we can’t predict the future, so let’s not panic and let’s do nothing until the future is nearer”. The sociologist in me says this might be one of the crucial reasons people aren’t more concerned about AI safety. And I don’t blame them. If I hadn’t been exposed to key concepts like the intelligence explosion, the orthogonality thesis, basic AI drives, and so on, I guess I’d have the same view.
“So the matter turns out to be just as urgent, even if you can’t predict the future. Perhaps such uncertainty only makes it more urgent.”
You may not be predicting an exact future, but by claiming it is urgent, you are inherently predicting a probability distribution with a high expected value for catastrophic damage. (And as such, the more urgent your prediction, the more a failure of that prediction to come true should lower your confidence that you understand the issue.)
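To put a number on that parenthetical (the figures below are mine, chosen only for illustration, not anything you stated): compare how much a missed prediction should move you, depending on how urgent the prediction was.

```python
# Illustrative Bayes update: the more confident (urgent) the prediction, the
# harder a miss hits the hypothesis "my model of the issue is right".
def posterior_model_right(prior, p_event_if_right, p_event_if_wrong):
    """P(model right | the predicted event did NOT happen in the window)."""
    miss_if_right = prior * (1 - p_event_if_right)
    miss_if_wrong = (1 - prior) * (1 - p_event_if_wrong)
    return miss_if_right / (miss_if_right + miss_if_wrong)

prior = 0.5  # start agnostic about whether your model of the issue is right

# Mildly urgent: 30% chance of catastrophe in the window if right, 10% if wrong.
print(posterior_model_right(prior, 0.30, 0.10))  # ~0.44 -- a small downgrade

# Very urgent: 90% chance if right, 10% if wrong.
print(posterior_model_right(prior, 0.90, 0.10))  # ~0.10 -- a large downgrade
```

Same miss, very different update: it is precisely the urgency of the claim that makes the failure informative.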
I most certainly do not think that we should do nothing right now. I think that important work is being done right now. We want to be prepared for transformative AI when the time comes. We absolutely should be concerned about AI safety. What I am saying is that it’s pretty hard to calculate our chances of success at this point in time, given so many unknowns about the timeline and the form future AI will take.