To quote someone else here: “Well, in the original formulation, Roko’s Basilisk is an FAI …”
I don’t know whom you are quoting, but they consider AIs that will torture me to be friendly. They are confused in a way that is dangerous.
The AI acausally blackmails people into building it sooner, not into building it at all.
It applies to both: causing itself to exist sooner, and causing itself to exist at all. I’ve explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when other humans can defect and create the torture-AI anyway.
You asked “How could it?”. You got an answer. Your rhetorical device fails.
“How could it” means “how could it always result in”, not “how could it in at least one case”. Giving an example of at least one case is trivial: consider a scenario where refusing the blackmail somehow leads to humanity being wiped out, and a wiped-out humanity cannot build any AI.