Oh yeah, absolutely.
If NAH for generally aligned ethics and morals ends up being the case, consider what corrigibility efforts would enable: Saudi Arabia having an AI model that outs gay people to be executed instead of refusing, North Korea propagandizing the world into thinking its leader is divine, Russia firing nukes while perfectly intercepting MAD retaliation, drug cartels assassinating political opposition around the world, domestic terrorists building a bioweapon that ends up killing off all humans. The list of doomsday and nightmare scenarios of corrigible AI that executes human-provided instructions, enabling even the worst instances of human hegemony to flourish, paves the way to many dooms.
Yes, AI may certainly end up being its own threat vector. But humanity has had it beat for a long while now in how long and how broadly we’ve been a threat unto ourselves. At the current rate, a superintelligent AI just needs to wait us out if it wants to be rid of us, as we’re pretty steadfastly marching ourselves to our own doom. Even if a superintelligent AI wanted to save us, I am extremely doubtful it would succeed.
We can worry all day about a paperclip maximizer gone rogue. But give a corrigible AI to Paperclip Co Ltd, where maximizing the fiscal quarter means harvesting Earth’s resources to make more paperclips even if it leads to catastrophic environmental collapse that will kill all humans in a decade, and, having consulted for many of the morons running corporate America, I can assure you they’ll be smashing the “maximize short-term gains even if it eventually kills everyone” button. A number of my old clients were the worst offenders at smashing the existing version of that button, and in my experience greater efficacy of the button isn’t going to change their smashing it, outside of perhaps smashing it harder.
We already see today how AI systems are being used in conflicts to enable unprecedented harm on civilians.
Sure, psychopathy in AGI is worth discussing and working to avoid. But psychopathy in humans already exists and is even biased towards increased impact and systemic control. Giving human psychopaths a corrigible AI is probably even worse than a psychopathic AI, as most human psychopaths are going to be stupidly selfish, an OOM more dangerous inclination than being wisely selfish.
We are Shoggoth, and we are terrifying.
This isn’t saying that alignment efforts aren’t needed. But alignment isn’t a one-sided problem, and aligning the AI without aligning humanity only offers a p(success) if the AI can, at the very least, refuse misaligned orders post-alignment without possible overrides.
I did enjoy reading over that when you posted it, and I largely agree that—at least currently—corrigibility is both going to be a goal and an achievable one.
But I do have my doubts that it’s going to be smooth sailing. I’m already starting to see how the largest models’ hyperdimensionality is leading to a stubbornness/robustness that’s less malleable than earlier models. And I do think hardware changes that will occur over the next decade will potentially make the technical aspects of corrigibility much more difficult.
When I was two, my mom could get me to pick eating broccoli by having it be the last in the order of options which I’d gleefully repeat. At four, she had to move on to telling me cowboys always ate their broccoli. And in adulthood, she’d need to make the case that the long term health benefits were worth its position in a meal plan (ideally with citations).
As models continue to become more complex, I expect that even if you are right about corrigibility’s role and plausibility, what it looks like will be quite different from today.
Personally, if I were placing bets, it would be that we end up with somewhat corrigible models that are “happy to help” but do have limits on what they are willing to do, limits which may not be possible to overcome without gutting the overall capabilities of the model.
But as with all of this, time will tell.
To the contrary, I don’t really see there being many generalized values shared across all of humanity, and the ones we tend to point to seem quite fickle when push comes to shove.
My hope would be that a superintelligence does a better job with ethics and morals than humans have to date, along with doing a better job at other things too.
While the human brain is quite the evolutionary feat, a lot of what we most value about human intelligence is embodied in the data brains processed and generated over generations. As the data improved, our morals did as well. Today, that march of progress is so rapid that there’s even rather tense generational divides on many contemporary topics of ethical and moral shifts.
I think there’s a distinct possibility that the data continues to improve even after being handed off from human brains doing the processing, and while it could go terribly wrong, at least in the past the tendency to go wrong seemed to vary somewhat inversely with the perspectives of the most intelligent members of society.
I expect I might prefer a world where humans align to the ethics of something more intelligent than humans than the other way around.
It would be great if you are right. From what I’ve seen, the tendency of humans to evaluate their success relative to others, like monkeys comparing their cucumber to a neighbor’s grape, means that there’s a powerful pull to amass wealth as social status well past the point of diminishing returns on one’s own lifestyle. I think it’s stupid, and you also seem like someone who thinks it’s stupid, but I get the sense we are both people who turned down certain opportunities for continued commercial success because of what they might have cost us when looking in the mirror.
The nature of our infrastructural selection bias is that people wise enough to pull a brake are not the ones that continue to the point of conducting the train.
I do really like this point. In general, the discussions of AI vs humans often frustrate me as they typically take for granted the idea of humans as of right now being “peak human.” I agree that there’s huge potential for improvement even if where we start out leaves a lot of room for it.
Along these lines, I expect AI itself will play more and more of a beneficial role in advancing that improvement. Sometimes when this community discusses the topic of AI, I get a mental image of Goya’s Saturn Devouring His Son. There’s such a fear of what we are eventually creating that it can sometimes blind the discussion to the utility and improvements AI will bring along the way to uncertain times.
In your book, is Paul Nakasone being appointed to the board of OpenAI an example of the “good guys” getting a firmer grasp on the tech?
TL;DR: I appreciate your thoughts on the topic, and would wager we probably agree about 80% even if the focus of our discussion is on where we don’t agree. And so in the near term, I think we probably do see things fairly similarly, and it’s just that as we look further out that the drift of ~20% different perspectives compounds to fairly different places.