I think if the problem turns out to be too difficult to solve for humanity right now, the right strategy seems pretty straightforward:
Delay the development of AGI, probably via regulation (and multinational agreements)
Make humans smarter
My current model is that there are roughly two promising ways to make people smarter:
Use pre-AGI technology to make people more competent
Use genetic engineering to make smarter humans
Both of these seem pretty promising and I am in favor of work on both of these.
Wouldn’t way 2 likely create a new species unaligned with humans?
It doesn’t seem particularly likely to me. I don’t notice a strong correlation between intelligence and empathy in my daily life. Perhaps a few more of the unusually kind people I know are intelligent, but that may just reflect whom I like to hang out with, or be a result of more privilege and less abuse growing up, which leads to both better education and higher empathy. Less smart people can certainly be kind or cruel, and I don’t see a pattern in it.
Regardless, I would expect genetically engineered humans to still have the same circuits that handle empathy and caring. I’d expect them to be a lot safer than an AGI, and perhaps even a bit safer than a regular human: because they can build more accurate models of the world, they’re less likely to cause damage through misconceptions or human error.
If you’re worried about more intelligent people considering themselves a new species and then not caring about humans, there’s some evidence against this: more intelligent people are more likely to choose vegetarianism, which suggests they’re more empathetic toward other species.
Re: 2
The most promising way is just raising children better.
See (which I’m sure you’ve already read): https://www.lesswrong.com/posts/CYN7swrefEss4e3Qe/childhoods-of-exceptional-people
Alongside that though, I think the next biggest leverage point would be something like nationalising social media and retargeting development/design toward connection and flourishing (as opposed to engagement and profit).
This is one area where, if we didn’t have multiple catastrophic time pressures, I’d be pretty optimistic about the future. These are incredibly high impact and tractable levers for changing the world for the better; part of the whole bucket of ‘just stop doing the most stupid thing’ stuff.
Raising children better doesn’t scale well, neither in how much oomph you get out of it per person, nor in how many people you can reach with this special treatment.
I highly doubt this would be very helpful in resolving the particular concerns Habryka has in mind. Namely, a world in which:
very short AI timelines (3-15 years) happen by default unless aggressive regulation is put in place. Even then, the likelihood of full compliance is not 100%, and the development of AGI can realistically be delayed by at most about half a generation before the risk that at least one large-scale defection has appeared becomes too high. So you don’t have time for slow cultural change that takes many decades to take effect
the AI alignment problem turns out to be very hard and basically unsolvable by unenhanced humans, no matter how smart they may be. So you need enhancements that quickly generate a bunch of ultra-geniuses far smarter than their “parents” could ever be
I believe that we could raise children much better; however, even the article you linked acknowledges:

“An important factor to acknowledge is that these children did not only receive an exceptional education; they were also exceptionally gifted.”
Unfortunately, in the current political climate, discussing intelligence is taboo. I believe that the optimal education for gifted children would differ from the optimal education for average children (though both could, and should, be greatly improved over what we have now). This unfortunately means that debates about improving education in general are somewhat irrelevant to improving the education of the brightest (who presumably could solve AI alignment one day).
just stop doing the most stupid thing

Sometimes this is a chicken-and-egg problem: the stupid things happen because people are stupid (both the ones who do the things and the ones who decide how the things should be done), but as long as the stupid things keep happening, people will remain stupid.
For example, we have a lot of superstition, homeopathy, conspiracy theories, and the like. If these could somehow magically disappear overnight, people probably wouldn’t reinvent them, or at least not quickly. These memes persist because they spread from one generation to another. Here, the reason we do the stupid thing is that many people sincerely and passionately believe that the stupid thing is actually the smart and right thing.
Another source of problems is that you can’t expect extraordinary results from average people. For example, most math teachers suck at math and at teaching, so we get another generation that sucks at math. The problem is that we need so many math teachers (at elementary and high schools) that you can’t simply decide to hire only the competent ones; there wouldn’t be enough teachers to keep the schools running.
Then we have all kinds of political mindkilling and corruption, where stupid things happen because they provide a political advantage for someone, or because the person who is supposed to keep things running is actually more interested in extracting as much rent as possible.
Yeah, I wish we could stop doing the stupid things… but that turns out to be quite difficult. Merely explaining why something is stupid would not work: you would get a lot of people yelling at you, some because they believe the stupid thing, some because they derive a benefit from it, and some because they are simply not competent enough to do better.
Curious about the ‘delay the development’ via regulation bit.
What is your sense of which near-term passable regulations would actually be enforceable? It has been difficult for large stakeholder groups facing threatening situations to enforce even established international treaties, such as the Geneva Conventions or the Berne three-step test.
Here are dimensions I’ve been thinking need to be constrained over time:
Input bandwidth to models (i.e. available training and run-time data, including from sensors).
Multi-domain work by/through models (i.e. preventing an automation race to the bottom).
Output bandwidth (incl. by requiring premarket approval for allowable safety-tested uses, as happens in other industries).
Compute bandwidth (through caps/embargoes on already resource-intensive supply chains).
(I’ll skip the ‘make humans smarter’ part, which I worry would worsen the problems we’ve seen around techno-solutionist initiatives.)