Let me introduce Orphan’s Basilisk:
Anybody who knows about “Orphan’s Basilisk” will, in the unlikely event that some hypothetical entity (which hates people who know about Orphan’s Basilisk, and which we’ll call Steve) achieves unlimited knowledge and power, be tortured in perpetuity.
It’s a much simpler basilisk, which helps illuminate what, exactly, is silly about Roko’s Basilisk and related issues such as Pascal’s Mugging: it puts infinite weight on one side of the equation (eternal torture) to overcome the absurdly low probability on the other side (Steve existing in the first place). Some people who are already worried about evil AI find Roko’s Basilisk troubling because they can imagine it actually happening; they inappropriately inflate the probability of that AI coming into existence because it belongs to the class of things they’re frightened of. Nobody is reasonably frightened of Steve.
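To make the mugging’s structure explicit, here is the expected-value arithmetic in rough form; p and X here are placeholder symbols of my own, not estimates from anywhere:

$$
\mathbb{E}[\text{knowing}] = p \cdot (-X) + (1 - p) \cdot 0 = -pX,
$$

where p is the (absurdly small) probability that Steve exists and X is the disutility of eternal torture. Let X grow without bound and −pX goes to −∞ for any p > 0: the infinite side of the equation swamps any finite amount of skepticism about Steve.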
There’s a bigger problem with this kind of basilisk: Anti-Steve is equally probable, and Anti-Steve will -reward- you for eternity for knowing about Orphan’s Basilisk. The absurdities cancel out. However, Orphan’s Basilisk doesn’t mention Anti-Steve, inappropriately elevating the Steve hypothesis in your brain.
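In the same placeholder notation: if Anti-Steve is exactly as probable as Steve (probability p) and pays out a reward of equal magnitude X, the two terms cancel:

$$
\mathbb{E}[\text{knowing}] = p \cdot (-X) + p \cdot (+X) + (1 - 2p) \cdot 0 = 0.
$$

The asymmetry exists only because the basilisk’s framing names Steve and stays silent about Anti-Steve; privileging the one hypothesis you happened to hear about is the whole trick.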
So, if I understand what is being said correctly: while it’s unlikely that Roko’s Basilisk will be the AI that gets created (I’ve read it’s roughly a 1⁄500 chance), if it were to be, or were to become, the (let’s say dominant) AI in existence, the mere concept of Roko’s Basilisk would be very dangerous. Even more so if you endorse the whole ‘simulation of everybody’s life’ idea, as just knowing or thinking about the concept of the basilisk would show up in said simulation and be evidence the basilisk would use to justify torturing you. Would you say that’s the gist of it?
I’m not sure who gave you 1⁄500 odds, but those are high, and probably based on anthropomorphizing an AI that doesn’t even exist yet into a vindictive human enemy, rather than an intelligence that operates on entirely different channels than humans do.
But that’s roughly the gist, yes.