But as far as I know, none of them have made it a focus of theirs to fight egregores and defeat hypercreatures.
An egregore is an occult concept: a non-physical entity that arises from the collective thoughts of a distinct group of people.
I do know one writer who talks a lot about demons and entities from beyond the void. It’s you, and it happens in some of, IMHO, the most valuable pieces you’ve written.
I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear their skin.
That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world.
And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.
But the current rulers of the universe – call them what you want, Moloch, Gnon, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.
It seems pretty obvious to me:
1.) We humans aren’t conscious of all the consequences of our actions, both because the subconscious plays an important role in our choices, and because our world is so enormously complex that all consequences are practically unknowable.
2.) In a society of billions, these unforeseeable forces combine into something larger than humans can explicitly plan and guide: “the economy”, “culture”, “the market”, “democracy”, “memes”.
3.) These larger-than-human systems pursue goals that are often antithetical to human preferences. You describe it perfectly in Seeing Like A State: the state has a desire for legibility and ‘rationally planned’ designs that are at odds with the human desire for organic design. And thus the ‘supersystem’ isn’t merely an aggregate of human desires; it has some qualities of an actual separate agent with its own preferences. It could be called a hypercreature, an egregore, Moloch, or the devil.
4.) We keep hurting ourselves, again and again and again. We keep falling into multipolar traps, we keep choosing Moloch, which you describe as “the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war”. And thus, we have not accomplished for ourselves what we want to accomplish with AI. Humanity is not aligned with human preferences. This is what failure looks like.
5.) If we fail to align humanity, if we fail to align major governments and corporations, if we don’t even recognize our own misalignment, how big is the chance that we will manage to align AGI with human preferences? Total nuclear war has not been avoided because nuclear technicians kept perfect control over their inventions; it has been avoided because the US government in 1945 was reasonably aligned with human preferences. I dare not imagine a world in which the Nazi government had been the first to get its hands on nuclear weapons.
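The multipolar trap in point 4 has a crisp game-theoretic core, and a tiny simulation can make it concrete. This is a generic sketch with made-up payoff numbers (a private gain of 2 for “racing”, a shared cost of 3 per racer), not anything from the original piece:

```python
# A minimal sketch of a multipolar trap as an n-player game.
# The payoff numbers are illustrative assumptions, chosen only so that
# defection dominates individually while hurting everyone collectively.

def payoff(my_choice, others_racing, n, gain=2.0, shared_cost=3.0):
    """Payoff for one actor: racing yields a private gain, but every
    racer imposes a cost that is spread across all n actors."""
    racers = others_racing + (1 if my_choice == "race" else 0)
    private = gain if my_choice == "race" else 0.0
    return private - racers * shared_cost / n

n = 10

# Whatever the others do, racing beats restraint for the individual...
for others in range(n):
    assert payoff("race", others, n) > payoff("restrain", others, n)

# ...yet if everyone follows that logic, all end up worse off than if
# everyone had restrained.
all_race = payoff("race", others_racing=n - 1, n=n)      # -1.0
all_restrain = payoff("restrain", others_racing=0, n=n)  #  0.0
print(all_race, all_restrain)
```

With these numbers, racing strictly dominates restraint for each actor, yet universal racing (payoff −1) is strictly worse than universal restraint (payoff 0): every individual chooses “rationally”, and Moloch collects the difference.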
And thus, I think it would be very, very valuable to put a lot more effort into ‘aligning humanity’. How do we keep our institutions and our grassroots movements “free from Moloch”? How do we select and support reliable, non-corrupt authorities and politicians? How do we stop falling into multipolar traps, and how do we stop suffering unnecessarily?
Best case scenario: this effort turns out to be vital to AGI alignment.
Worst case scenario: this effort turns out to be irrelevant to AGI alignment, but in the meantime we have made the world a much better place.