If we’re in a situation where it’s an open secret that a certain specific research area leads to general artificial intelligence, we’re doomed. If we get into a position where compute is the only limiting factor, we’re doomed. There’s no arguing there. The goal is to prevent us from getting into that situation.
As it stands now, lots of companies are certainly practicing machine learning. I have secondhand descriptions of a lot of the “really advanced” NSA programs, and they fit that bill. Not a lot of organizations I know of, however, are actually pushing the clock hand forward on AGI and meta-level research. Even fewer are doing that consistently, or getting over the massive intellectual hurdles that require a team like DeepMind’s. Completing AGI will probably be the result of a marriage of increased computing power, which we can’t really control, and insights pioneered and published by top labs, which I legitimately think we could influence to some degree by modifying their goals and talking to their members. OpenAI is a nonprofit. At the absolute bare minimum, none of these companies publish their meta-level research for money. The worst things they seem to do at this stage aren’t done while reaching for power so much as while playing an intellectual and status game amongst themselves, and fulfilling their science-fiction-protagonist syndromes.
I don’t doubt that it would be better for us to have AI alignment solved than to rely on these speculations about how AI will be engineered, but I do not see any good argument as to why delay is a bad strategy.
If we were “doomed” in this way, would you agree that the thing to do—for those who could do it—is to keep trying to solve the problem of alignment? i.e. trying to identify an AI design that could be autonomous, and smarter than human, and yet still safe?
Let me articulate my intuitions in a slightly more refined way: “If we ever get to a point where there are few secrets left, or where it’s common knowledge that one can solve AGI with roughly $1–10 billion, then delaying tactics probably wouldn’t work, because there’s nothing left for DeepMind to publish that speeds up the timeline.”
Inside those bounds, yes. I still think that people should keep working on alignment today, I just think other dumber people like me should try the delaying tactics I articulated in addition to funding alignment research.