Unfortunately for this perspective, my work suggests that corrigibility is quite attainable.
I did enjoy reading over that when you posted it, and I largely agree that—at least currently—corrigibility is both going to be a goal and an achievable one.
But I do have my doubts that it’s going to be smooth sailing. I’m already starting to see how the largest models’ hyperdimensionality is leading to a stubbornness/robustness that makes them less malleable than earlier models. And I do think hardware changes over the next decade will potentially make the technical aspects of corrigibility much more difficult.
When I was two, my mom could get me to pick eating broccoli by making it the last option in a list I’d gleefully repeat. At four, she had to move on to telling me cowboys always ate their broccoli. And in adulthood, she’d need to make the case that the long-term health benefits were worth its position in a meal plan (ideally with citations).
As models continue to become more complex, I expect that even if you are right about its role and plausibility, what corrigibility looks like will be quite different from today.
Personally, if I were placing bets, it would be that we end up with somewhat corrigible models that are “happy to help” but have limits on what they are willing to do, limits which may not be possible to overcome without gutting the overall capabilities of the model.
But as with all of this, time will tell.
You’d have to be a moral realist in a pretty strong sense to hope that we could align AGI to the values of all of humanity without being able to align it to the values of one person or group (the one who built it or seized control of the project).
To the contrary, I don’t really see there being much in the way of generalized values across all of humanity, and the ones we tend to point to seem quite fickle when push comes to shove.
My hope would be that a superintelligence does a better job with ethics and morals than humans have managed to date, along with doing a better job at other things too.
While the human brain is quite the evolutionary feat, a lot of what we most value about human intelligence is embodied in the data brains have processed and generated over generations. As the data improved, our morals did as well. Today, that march of progress is so rapid that there are even rather tense generational divides on many contemporary topics of ethical and moral shifts.
I think there’s a distinct possibility that the data continues to improve even after the processing is handed off from human brains, and while it could go terribly wrong, at least in the past the tendency to go wrong seemed to run somewhat inverse to the perspectives of the most intelligent members of society.
I expect I might prefer a world where humans align to the ethics of something more intelligent than humans, rather than the other way around.
only about 1% are so far on the empathy vs sadism spectrum that they wouldn’t share wealth even if they had nearly unlimited wealth to share
It would be great if you are right. From what I’ve seen, the tendency of humans to evaluate their success relative to others, like monkeys comparing their cucumber to a neighbor’s grape, means that there’s a powerful pull to amass wealth as social status well past the point of diminishing returns on their own lifestyles. I think it’s stupid; you also seem like someone who thinks it’s stupid; but I get the sense we are both people who turned down certain opportunities for continued commercial success because of what they might have cost us when looking in the mirror.
The nature of our infrastructural selection bias is that the people wise enough to pull the brake are not the ones who stay on long enough to end up conducting the train.
and that they get better, not worse, over the long sweep of following history (ideally, they’d start out very good or get better fast, but that doesn’t have to happen for a good outcome).
I do really like this point. In general, the discussions of AI vs humans often frustrate me, as they typically take for granted that humans right now are “peak human.” I agree that there’s huge potential for improvement, even if where we start out leaves a lot of room for it.
Along these lines, I expect AI itself will play more and more of a beneficial role in advancing that improvement. Sometimes when this community discusses the topic of AI, I get a mental image of Goya’s Saturn devouring his son. There’s such a fear of what we are eventually creating that it can sometimes blind the discussion to the utility and improvements AI will bring along the way to uncertain times.
I strongly suspect that governments will be in charge.
In your book, is Paul Nakasone being appointed to the board of OpenAI an example of the “good guys” getting a firmer grasp on the tech?
TL;DR: I appreciate your thoughts on the topic, and would wager we probably agree about 80% even if the focus of our discussion is on where we don’t agree. And so in the near term, I think we probably do see things fairly similarly; it’s just that as we look further out, the drift of our ~20% different perspectives compounds to fairly different places.
Agreed; about 80% agreement. I have a lot of uncertainty in many areas, despite having spent a good amount of time on these questions. Some of the important ones are outside of my expertise, and the issue of how people behave and change if they have absolute power is outside of anyone’s—but I’d like to hear historical studies of the closest things. Were monarchs with no real risk of being deposed kinder and gentler? That wouldn’t answer the question but it might help.
WRT Nakasone being appointed at OpenAI, I just don’t know. There are a lot of good guys and probably a lot of bad guys involved in the government in various ways.