If we’re in a world where EY is right, we’re already dead. Most of the expected value will be in the worlds where alignment is neither guaranteed nor extremely difficult.
By observation, entities with present access to centralized power, such as governments, corporations, and humans selected for prominent roles in them, seem relatively poorly aligned. The theory that we’re in a civilizational epoch dominated by Molochian dynamics seems like a good fit for observed evidence: the incentive landscapes are such that most transferable resources have landed in Moloch-aligned hands.
First impression: distributing the AI among Moloch-unaligned actors seems like the best actionable plan for escaping the Molochian attractor. We'll spin up the parts of our personal collaborative networks that we trust and can rouse on short notice, and spend a few precious hours trying to come up with a safer plan before proceeding.
***
ETA: That's what I would say on the initial phone call, before I'd had time to think deeply and consider contextual information not included in the prompt. For example, once the leak became public, it could trigger potentially destabilizing reactions from various actors. The scenario could diverge quickly as more minds got on the problem and more context became available.