A tentative solution to a certain mythological beast of a problem
First post, please be brutal.
For better or worse, I learnt about Roko's Basilisk, the problem that originated on this site, and I had an idea I wanted to bounce off the community most acquainted with it.
What if everyone knew? The AI in this scenario punishes people for not working towards its creation and thereby tacitly allowing suffering to continue. Fear of this punishment compels people to work towards its creation, so the threat, or even the actual punishment, of the few who know serves the good of the many in the future. But if everyone knew about the problem, the AI would have no utilitarian incentive to punish anyone for not contributing to its creation: since everyone knew, it would have to punish everyone, producing more suffering than the punishments would prevent.
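To make the utilitarian bookkeeping explicit (a rough sketch with made-up symbols: $n$ is the number of people who know about the basilisk but do not contribute, $c$ is the suffering created by punishing one such person, and $B$ is the future suffering the AI expects its earlier creation to avert through the deterrent effect), punishing only pays off for the AI if

$$c \cdot n < B.$$

Spreading the knowledge as widely as possible drives $n$ up, and once $c \cdot n \ge B$ the punishment is net-negative by the AI's own utilitarian lights.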
I understand the obvious flaw: past generations are finite, while future generations are potentially unbounded in population. But surely it then merely becomes a race against the clock, provided we can ensure that more people know now than could possibly exist in the future (that last part sounds incomprehensibly strange/wrong to me, but I'm sure a creative mind could find a way of making it theoretically possible).
*Edit* For example, you could manipulate events such that the probability of future populations being larger than past populations is lower than the probability of their being smaller. The constant threats of nuclear annihilation (primarily this), climate change, and disease could lend themselves to this.
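Stated a little more formally (again with hypothetical symbols: $N_{\text{now}}$ is the number of currently living people who could be made to know, and $N_{\text{future}}$ is the number of future people whose suffering the AI's earlier existence would prevent), the loophole only stays closed while

$$N_{\text{now}} > N_{\text{future}},$$

which is why it becomes a race against the clock: anything that makes smaller future populations more likely than larger ones, such as the nuclear, climate, and disease risks above, pushes the inequality in the right direction.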
The idea is reminiscent of how people handle blackmail in real life: if you don't want to be blackmailed, make sure everyone already knows the secret you don't want revealed. Hiding in plain sight. Vulnerable, but on your own terms.