“I’ve come to the conclusion that it is impossible to make an accurate prediction about an event that’s going to happen more than three years from the present, including predictions about humanity’s end.”
Correct. Eliezer has said this himself; check out his outstanding post “There’s No Fire Alarm for Artificial General Intelligence”. However, you can still assign a probability distribution to it. Say, I’m 80% certain that dangerous/transformative AI (I dislike the term AGI) will arrive in the next couple of decades. So the matter turns out to be just as urgent, even if you can’t predict the future. Perhaps such uncertainty only makes it more urgent.
“I believe that the most important conversation will start when we actually get close to developing early AGIs (and we are not quite there yet), this is when the real safety protocols and regulations will be put in place, and when the rationalist community will have the best chance at making a difference. This is probably when the fate of humanity will be decided, and until then everything is up in the air.”
Well, first, like I said, you can’t predict the future, i.e. There’s No Fire Alarm for AGI. So we might never know that we’re close until we get there. That’s happened with other transformative technologies before.
Second, even if we could, we might not have enough time by then. Alignment seems to be pretty hard. Perhaps intractable. Perhaps straight-up impossible. The time to start thinking of solutions and implementing them is now. In fact, I’d even say that we’re already too late. Given such a monumental task, I’d say we would need centuries, not the few decades we might have.
You’re about the third person I’ve responded to in this post who says “we can’t predict the future, so let’s not panic and let’s do nothing until the future is nearer”. The sociologist in me tells me that this might be one of the crucial reasons why people aren’t more concerned about AI safety. And I don’t blame them. If I hadn’t been exposed to key concepts myself, like the intelligence explosion, the orthogonality thesis, basic AI drives, etc., I guess I’d hold the same view.
“I mean, I agree that we’ve failed at our goal. But “haven’t done a very good job” implies to me something like “it was possible to not fail”, which, unclear?”
Of course it was. Was it difficult? Certainly. So difficult that I don’t blame anyone for failing, like I’ve stated in my comment reply to this post.
It’s an extremely difficult problem both technically and politically/socially. The difference is that I don’t see any technical solutions in sight, and I’ve also heard very convincing arguments from the likes of Roman Yampolskiy that such a solution might not even exist. But we can all agree that there is at least one political solution: don’t build advanced AIs before we’ve solved the alignment problem. However extremely difficult such a solution might seem, it actually exists and appears possible.
So we’ve failed, but I’m not blaming anyone, because it’s damn difficult. In fact, I have nothing but the deepest admiration for the likes of Eliezer, Bostrom and Russell. But my critique still stands: such failure (the failure to get leaders to care, not the technical failure to solve alignment) COULD be IN PART because the most prominent figures, like these three, only talk about AI x-risk and not about even worse outcomes.
“unless they decide to take it as seriously as they take nuclear proliferation”
That’s precisely what we need. I’d assume that most people in this community are quite solidly convinced that “AI is far more dangerous than nukes” (to quote our friend Elon). If leaders could adopt our reasoning, it could be done.
“the actual result will be companies need large compliance departments in order to develop AI systems, and those compliance departments won’t be able to tell the difference between dangerous and non-dangerous AI.”
There are other regulatory alternatives. Like restricting access to supercomputers. Or even stopping AI research altogether until we’ve made much more progress on alignment. Your concern is still completely legitimate. But where are the technical solutions in sight, as an alternative? Should we rather risk dying (and again, that’s not even the worst risk) because political solutions seem intractable, and only try technical solutions when those seem even more intractable?
Contingency measures, both technical and political, could also turn out to be more effective than either full alignment or the political solutions above.