The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible
I.
The lizard people of Alpha Draconis 1 decided to build an ansible.
The transmitter was a colossal tower of silksteel, doorless and windowless. Inside were a million modular silksteel cubes, each filled with beetles, a different species in every cube. Big beetles, small beetles, red beetles, blue beetles, friendly beetles, venomous beetles. There hadn’t been a million beetle species on Alpha Draconis 1 before the ansible. The lizard people had genetically engineered them, carefully, lovingly, making each one just different enough from all the others. Atop each beetle colony was a heat lamp. When the heat lamp was on, the beetles crawled up to the top of the cage, sunning themselves, basking in the glorious glow. When it turned off, they huddled together for warmth, chittering out their anger in little infrasonic groans only they could hear.
The receiver stood on 11845 Nochtli, eighty-five light years from Alpha Draconis, toward the galactic rim. It was also made of beetles, a million beetle colonies of the same million species that made up the transmitter. In each beetle colony was a pheromone dispenser. When it was on, the beetles would multiply until the whole cage was covered in them. When it was off, they would gradually die out until only a few were left.
Atop each beetle cage was a mouse cage, filled with a mix of white and grey mice. The white mice had been genetically engineered to want all levers in the “up” position, a desire beyond even food or sex in its intensity. The grey mice had been engineered to want levers in the “down” position, with equal ferocity. The lizard people had uplifted both strains to full sapience. In each of a million cages, the grey and white mice would argue whether levers should be up or down – sometimes through philosophical debate, sometimes through outright wars of extermination.
There was one lever in each mouse cage. It controlled the pheromone dispenser in the beetle cage just below.
This was all the lizard people of Alpha Draconis 1 needed to construct their ansible.
They had mastered every field of science. Physics, mathematics, astronomy, cosmology. It had been for nothing. There was no way to communicate faster-than-light. Tachyons didn’t exist. Hyperspace didn’t exist. Wormholes didn’t exist. The light speed barrier was absolute – if you limited yourself to physics, mathematics, astronomy, and cosmology.
The lizard people of Alpha Draconis 1 weren’t going to use any of those things. They were going to build their ansible out of negative average preference utilitarianism.
II.
Utilitarianism is a moral theory claiming that an action is moral if it makes the world a better place. But what do we mean by “a better place”?
Suppose you decide (as Jeremy Bentham did) that it means increasing the total amount of happiness in the universe as much as possible – the greatest good for the greatest number. Then you run into the so-called “repugnant conclusion”. The philosophers quantify happiness into “utils”, some arbitrary small unit of happiness. Suppose your current happiness level is 100 utils. And suppose you could sacrifice one util of happiness to create another person whose total happiness is two utils: they are only 1/50th as happy as you are. This person seems quite unhappy by our standards. But crucially, their total happiness is positive; they would (weakly) prefer living to dying. Maybe we can imagine this as a very poor person in a war-torn Third World country who is (for now) not actively suicidal.
It would seem morally correct to make this sacrifice. After all, you are losing one unit of happiness to create two units, increasing the total happiness in the universe. In fact, it would seem morally correct to keep making the sacrifice as many times as you get the opportunity. The end result is that you end up with a happiness of 1 util – barely above suicidality – and also there are 99 extra barely-not-suicidal people in war-torn Third World countries.
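For the numerically inclined, here is a minimal sketch of that chain of sacrifices in code (the figures are just the illustrative ones above, nothing more):

```python
# Toy model of the repugnant-conclusion trade described above.
# Illustrative numbers only: you start at 100 utils; each sacrifice costs
# you 1 util and creates a new person whose happiness is 2 utils.
happiness = [100]

while happiness[0] > 1:        # keep sacrificing until you reach 1 util
    happiness[0] -= 1          # give up one util of your own...
    happiness.append(2)        # ...to create a barely-happy new person

print(sum(happiness))                   # total happiness: 199, up from 100
print(sum(happiness) / len(happiness))  # average happiness: 1.99, down from 100
```

Total happiness has nearly doubled, so total utilitarianism approves of every step; the average, meanwhile, has collapsed by a factor of about fifty.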
And the same moral principles that lead you to make the sacrifice bind everyone else alike. So the end result is everyone in the world ends up with the lowest possible positive amount of happiness, plus there are billions of extra near-suicidal people in war-torn Third World countries.
This seems abstract, but in some sense it might be the choice on offer if we have to decide whether to control population growth (thus preserving enough resources to give everyone a good standard of living), or continue explosive growth so that there are many more people but not enough resources for any of them to live comfortably.
The so-called “repugnant conclusion” led many philosophers away from “total utilitarianism” to “average utilitarianism”. Here the goal is still to make the world a better place, but it gets operationalized as “increase the average happiness level per person”. Creating hordes of barely-happy people obviously lowers the average, so we avoid that particular trap.
But here we fall into another ambush: wouldn’t it be morally correct to kill unhappy people? This raises average happiness very effectively!
So we make another amendment. We’re not in the business of raising happiness, per se. We’re in the business of satisfying preferences. People strongly prefer not to die, so you can’t just kill them. Killing them actively lowers the average number of satisfied preferences.
Philosopher Roger Chao combines these and other refinements of the utilitarian method into a moral theory he calls negative average preference utilitarianism, which he considers the first system of ethics to avoid all the various traps and pitfalls. It says: an act is good if it decreases the average number of frustrated preferences per person.
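Stated as a sketch rather than a formal theory (the function and its per-person inputs below are hypothetical, not from Chao’s paper), the rule just compares averages before and after an act:

```python
# Hedged sketch of the rule as stated above: an act is good if it decreases
# the average number of frustrated preferences per person. The per-person
# counts are hypothetical illustrative inputs.
def act_is_good(frustrated_before: list[int], frustrated_after: list[int]) -> bool:
    avg_before = sum(frustrated_before) / len(frustrated_before)
    avg_after = sum(frustrated_after) / len(frustrated_after)
    return avg_after < avg_before
```

Only the average per preference-haver matters, a property the lizard people are about to put to unusual use.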
This doesn’t imply we should create miserable people ad nauseam until the whole world is a Third World slum. It doesn’t imply that we should kill everyone who cracks a frown. It doesn’t imply we should murder people for their organs, or never have children again, or replace everybody with identical copies of themselves, or anything like that.
It just implies faster-than-light transmission of moral information.
III.
The ansible worked like this:
Each colony of beetles represented a bit of information. In the transmitter on Alpha Draconis 1, the sender would turn the various colonies’ heat lamps on or off, increasing or decreasing the average utility of the beetles.
In the receiver on 11845 Nochtli, the beetles would be in a constant state of half-light: warmer than the Draconis beetles if their heat lamp was turned off, but colder than them if their heat lamp was turned on. So increasing the population of a certain beetle species on 11845 Nochtli would be morally good if the heat lamp for that species on Alpha Draconis were off, but morally evil otherwise.
The philosophers among the lizard people of Alpha Draconis 1 had realized that this was true regardless of intervening distance; morality was the only force that transcended the speed of light. The question was how to detect it. Yes, a change in the heat lamps on their homeworld would instantly change the moral valence of pulling a lever on a colony eighty-five light years away, but how to detect the morality of an action?
The answer was: the arc of the moral universe is long, but it bends toward justice. Over time, as the great debates of history ebb and flow, evil may not be conquered completely, but it will lessen. Our own generation isn’t perfect, but we have left behind much of the slavery, bigotry, war, and torture of the past; perhaps our descendants will be wiser still. And how could this be, if not for some benevolent general rule, some principle that tomorrow must be brighter than today, and the march of moral progress slow but inevitable?
Thus the white and grey mice. They would debate, they would argue, they would even fight – but in the end, moral progress would have its way. If raising the lever and causing an increase in the beetle population was the right thing to do, then the white mice would eventually triumph; if lowering the lever and causing the beetle population to fall was right, then the victory would eventually go to the grey. All of this would be recorded by a camera watching the mouse colony, and – lo – a bit of information would have been transmitted.
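Put schematically (all names below are illustrative, not from the story), the intended protocol treats each colony as one bit: the transmitter’s heat lamp sets the bit, and whichever mouse faction the arc of moral progress eventually favors reads it out:

```python
# Toy sketch of the intended protocol, one beetle colony = one bit.

def moral_sign_of_more_beetles(transmitter_lamp_on: bool) -> int:
    # Receiver beetles sit at "half-light": better off than their Draconis
    # counterparts when the distant lamp is off, worse off when it is on.
    # Breeding more receiver beetles is therefore good only if that lamp is off.
    return -1 if transmitter_lamp_on else +1

def decode_bit(winning_faction: str) -> bool:
    # White mice want the lever up (pheromones on, more beetles); grey mice
    # want it down. A white victory therefore implies the distant lamp is off,
    # a grey victory that it is on.
    return winning_faction == "grey"   # True means the transmitter lamp is on
```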
IV.
The ansible of the lizard people of Alpha Draconis 1 was a flop.
They spent a century working on it: ninety years on near-light-speed starships just transporting the materials, and a decade constructing the receiver according to meticulous plans. With great fanfare, the Lizard Emperor himself sent the first message from Alpha Draconis 1. And it was a total flop.
The arc of the moral universe is long, but it bends toward justice. But nobody had ever thought to ask how long, and why. When everyone alike ought to love the good, why does it take so many years of debate and strife for virtue to triumph over wickedness? Why do war and slavery and torture persist for century after century, so that only endless grinding of the wheels of progress can do them any damage at all?
After eighty-five years of civilizational debate, the grey and white mice in each cage finally overcame their differences and agreed on the right position for the lever, just as the mundane lightspeed version of the message from Alpha Draconis reached 11845 Nochtli’s radio telescopes. And the lizard people of Alpha Draconis 1 realized that one can be more precise than simply defining the arc of moral progress as “long”. It’s exactly as long as it needs to be to prevent faster-than-light transmission of moral information.
Fundamental physical limits are a harsh master.