I haven’t really been following the reading group, but there’s something that’s been on my mind, and this seems like a relevant section to bring it up in. I thought about writing a discussion post about it in the past, but I wasn’t sure it warranted one.
By the principle of differential technological development, would it be valuable to make an effort to advance low-risk fields that the public associates with popular preconceptions of AI risk? I imagine the poster child for this would be robotics. Progress there has been slower than most people would intuitively expect, even more so than in narrow AI, and I think that visible progress in robotics would make the public more inclined to take AI risk seriously, even though robotics is probably fairly tangential to the actual risk. Yes, it’s Not Technically Lying, but I can’t see how any of the negative consequences of Not Technically Lying would apply in this context.
I see problems with this, of course. My argument would suggest that social awareness of AGI is unconditionally good, but I wonder if it is; there may be a question of what the optimal amount of awareness is. More awareness seems to increase the probability of multipolar scenarios and of smaller, less safety-conscious AGI projects. There’s less uncertainty in working on robotics, but conceivably less reward as well. For this reason, the utility of working on fields that indirectly spread awareness would seem to depend on how far off AGI is, which is very uncertain. It also might not make much of a difference: awareness of AI risk seems to have made a huge leap since the beginning of this year, if not earlier, with the Open Letter, Elon Musk’s donation to the Future of Life Institute and his general efforts to spread awareness, and the recent series of articles on Wait But Why, among other things.
There might be other examples besides robotics, probably low-risk subfields of narrow AI, which has also been making superficially scary leaps recently.
Somehow I doubt that there will all of a sudden be huge donations to the field of robotics based on this comment, but there’s little cost to writing it, so I wrote it.
I want to point to elephants. Not only because it is easy, since they are well endowed with volume, but because they are very intelligent animals that do not hunt, do not make war, and are obligate herbivores.
When we think about the failure modes of creating emulations based on the human brain, we worry that humans are sometimes evil. Much of our evil, and the particular kind of evil we have, would hardly be exhibited by elephants.
My general point is that part of the fragility of emulating us seems to come from our carnivorous, hunting, warring lifestyle, and strategies to ameliorate that might take into consideration intelligent animals that don’t hunt, such as elephants and some whales.
“Very intelligent” is a relative term. Elephants aren’t very intelligent compared to humans, even though they’re very intelligent compared to the whole animal kingdom.
I somewhat disagree. In terms of neuron counts, the elephant brain is larger, although admittedly most of that is in the elephant’s much larger cerebellum—presumably as a brute force solution to the unique control complexity of the trunk appendage.
The elephant’s cortex (which seems key to human-style general intelligence) has roughly 1/3 of our neuron count (I’m guessing from memory), comparable to that of the chimp. There is some recent evidence that elephants may have sophisticated low-frequency communication. They even have weird death ‘burial’ rituals. They can solve complex puzzles in captivity. There are even a few cases of elephants learning to speak some simple human words with their trunks.
So in short, elephants seem to have just about as much general intelligence as you’d expect given their cortical neuron count. The upper range of the elephant’s raw brain capability probably comes close to the lower range of human capability.
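For what it’s worth, the rough published cortical neuron estimates I’m aware of (from Herculano-Houzel and colleagues’ neuron-count studies) are consistent with that 1/3 guess. The figures below are approximate, and the snippet is only a back-of-the-envelope comparison, not a careful analysis:

```python
# Approximate cortical neuron counts in billions (rough published estimates;
# treat these as ballpark figures rather than precise values).
cortical_neurons_billions = {
    "human": 16.0,            # cerebral cortex, roughly 16 billion neurons
    "chimpanzee": 6.0,        # roughly a third of the human count
    "african_elephant": 5.6,  # similar to the chimp, despite a far larger brain
}

human = cortical_neurons_billions["human"]
for species, count in cortical_neurons_billions.items():
    print(f"{species}: ~{count:.1f}B cortical neurons "
          f"({count / human:.0%} of the human count)")
```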
The large apparent gap in actual adult intelligence is due to the enormous nonlinear amplification effects of human culture and education.
Elephants kill hundreds, if not thousands, of human beings per year. Considering there are only a few hundred thousand elephants alive, that’s an amazing feat of evilness. I believe the average elephant kills at least an order of magnitude more human beings per year than the average human does, and probably kills more violently as well.
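To put a rough number on that, here is a back-of-the-envelope per-capita comparison. All of the figures are loose, commonly cited estimates chosen for illustration, not careful statistics:

```python
# Loose, commonly cited estimates; all figures are for illustration only.
elephant_caused_deaths_per_year = 500   # human deaths caused by elephants, often quoted as ~100-500+
elephant_population = 500_000           # wild elephants, very roughly

human_homicides_per_year = 440_000      # global homicides per year, roughly
human_population = 7_300_000_000        # circa 2015

per_elephant = elephant_caused_deaths_per_year / elephant_population
per_human = human_homicides_per_year / human_population

print(f"humans killed per elephant per year: {per_elephant:.1e}")
print(f"humans killed per human per year:    {per_human:.1e}")
print(f"ratio: roughly {per_elephant / per_human:.0f}x")
```

With these particular numbers the gap comes out to somewhere around an order of magnitude or more; the exact ratio depends heavily on which population and death-toll estimates you use, but the qualitative point stands either way.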
Do you have further interesting pointers to material relating to this week’s reading?