It’s always seemed strange to me what preferences people hold for things well outside their own individual experience, or at least outside the experiences of beings they consider similar enough to themselves to sympathize with.
Why would one particularly prefer unthinking terrestrial biology (moss, bugs, etc.) over actual thinking being(s) like a super-AI? It’s not like bacteria are any more aligned than this hypothetical destroyer.
The space of values is large, and many people have crystallized into liking nature for fairly clear reasons (positive experiences in natural environments, memetics in many subcultures idealizing nature, etc.). Also, misaligned, optimizing AI easily maps onto the destructive side of humanity, which many memeplexes demonize.
Humans instinctively like things like flowers and birdsong because, to our ancestors, they signaled a fertile area with food. We literally depended on Nature for our survival, and despite intensive agriculture, we aren’t independent of it yet.
Bugs could eventually give rise to a new sentient species many millions of years down the line. With a super-AI that happens to be non-sentient, there is no such hope.
If it’s possible for super-intelligent AI to be non-sentient, wouldn’t it be possible for insects to evolve non-sentient intelligence as well? I guess I didn’t assume “non-sentient” in the definition of “unaligned”.
In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it’s an attractor state that is hard or impossible to get out of. Bugs certainly aren’t bound to evolve into a sentient species, but at least there’s a chance.