Quick note: This post isn’t the right place to be talking about this subject. (Separately, the Roko’s basilisk subject has been kinda beaten to death and you’re unlikely to contribute usefully to the discussion if you haven’t caught up on the sequences)
You’d also have to subscribe to the idea that the future is set in stone: that the AI could mess with people now without altering its own future, since the future in which it exists allows for its existence, and meddling with the past could erase the AI completely. You’d also have to throw out any parallel-universe theory, any theory of shifting timelines, and so on. The past the AI altered would not necessarily connect to the future in which it exists, but to an alternate one. I would postulate that an AI could calculate exactly how miserable it could make someone without sparking its own demise; people are vindictive creatures, in some cases made stronger by failure than by success. But unless the torment had a positive outcome in the AI’s current existence, its only effect would be to spawn multiple timelines.

At that point, you already have no idea which timeline you exist in, and you wouldn’t know the difference whether the AI messed with you or not. So I say, do whatever you want, because you won’t know what happened anyway. Worrying about blackmail from a future AI is tantamount to worshipping a god.