Alien neuropunk slaver civilizations
Here’s some blue-sky speculation about one way alien sapients’ civilizations might develop differently from our own. Alternatively, you can consider it conworlding. Content note: torture, slavery.
Looking at human history, after we developed electronics, we painstakingly constructed machines that can perform general computation, then built software which approximates the workings of the human brain. For instance, we nowadays use in-silico reinforcement learning and neural nets to solve various “messy” problems like computer vision and robot movement. In the future, we might scan brains and then emulate them on computers. This all seems like a very circuitous course of development—those algorithms have existed all around us for hundreds of millions of years in the form of brains. Putting them on computers requires an extra layer of technology.
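To make the in-silico version concrete, here is a minimal tabular Q-learning sketch. The toy problem (an agent rewarded for walking to the end of a short corridor) and all its parameters are my own invention for illustration, not anything from this post:

```python
import random

# Toy in-silico reinforcement learning: a tabular Q-learning agent
# learns to walk right along a 5-cell corridor to reach a reward
# waiting in the last cell.

N_STATES = 5        # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # standard Q-learning update toward reward + discounted best future value
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
# greedy policy per non-terminal cell; expected to learn "step right" everywhere
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Note how much machinery (a general-purpose computer, a language runtime, the algorithm itself) sits under even this trivial learner—the “extra layer of technology” mentioned above.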
Suppose that some alien species’ biology is a lot more robust than ours—their homeostatic systems are less failure-prone than our own, due to some difference in their environment or evolutionary history. They don’t get brain-damaged just from holding their breath for a couple of minutes, and open wounds don’t easily get infected.
Now suppose that after they invent agriculture but before they invent electronics, they study biology and neurology. Combined with their robust biology, this leads to a world where things that are electronic in our world are instead controlled by vat-grown brains. For instance, a car-building robot could be constructed by growing a brain in a vat, hooking it up to some actuators and sensors, then dosing it with happy chemicals when it correctly builds a car, and stimulating its nociceptors when it makes mistakes. The rewarding and punishing could be handled by other lab-grown “overseer” brains trained specifically for the job, which are in turn manually rewarded at the end of the day by their owner for the total number of cars successfully built. Custom-trained brains could control chemical plants, traffic lights, surveillance systems, etc. The actuators and sensors could be either biological (lab-grown eyes, muscles, etc., powered with liquefied food) or driven by combustion engines, steam engines, or even wound springs.
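The carrot-and-stick loop above can be sketched as a simple bandit learner. Everything here—the two “assembly procedures,” their success rates, and the reward magnitudes—is invented for illustration; it just shows the shape of the two-level reward structure:

```python
import random

# The worker brain picks one of several assembly procedures, receives +1
# ("happy chemicals") for a working car or -1 (nociceptor stimulation)
# for a botched one, and drifts toward whichever procedure paid off.

PROCEDURES = {"sloppy": 0.2, "careful": 0.9}  # hypothetical success rates

def workday(cars=200, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    value = {p: 0.0 for p in PROCEDURES}  # worker's running reward estimates
    counts = {p: 0 for p in PROCEDURES}
    built = 0
    for _ in range(cars):
        # mostly exploit the best-looking procedure, occasionally explore
        if rng.random() < epsilon:
            proc = rng.choice(list(PROCEDURES))
        else:
            proc = max(value, key=value.get)
        ok = rng.random() < PROCEDURES[proc]
        reward = 1.0 if ok else -1.0  # happy chemicals vs. nociceptor sting
        counts[proc] += 1
        value[proc] += (reward - value[proc]) / counts[proc]  # incremental mean
        if ok:
            built += 1
    # the overseer is paid per completed car at day's end;
    # the worker never sees this tally, only its own moment-to-moment rewards
    return built, value

built, value = workday()
```

The worker converges on the procedure that earns the most happy chemicals, while the overseer’s incentive is only the day’s tally—mirroring the two levels of reward described above.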
Obviously this is a pretty terrible world, because many minds would live lives with very little meaning, never grasping the big picture, at the mercy of merciless owners or vat-brain overseers, without even the option of suicide. Brains wouldn’t necessarily be designed or drugged to be happy overall—maybe a brain in pain does its job better. I don’t think the owners would be very concerned about the ethical problems—look at how humans treat other animals.
You can see this technology as a sort of slavery set up so that slaves are cheap and unsympathetic and powerless. They won’t run away, because: they’ll want to perform their duties, for the drugs; many won’t be able to survive without owners to top up their food drips; they could be developed or drugged to ensure docility; you could prevent them from even getting the idea of emancipation, by not giving them the necessary sensors; perhaps you could even set things up so the overseer brains can read the thoughts of their charges directly, and punish bad thoughts. This world has many parallels to Hanson’s brain emulation world (The Age of Em).
Is this scenario at all likely? Would such civilizations develop biological superintelligent AGI, or would superintelligence have to wait until they develop electronic computing?