Thanks, Eli. You make some good points amidst the storm. :)
I think the scenario James elaborated was meant to be a fictional portrayal of a bad outcome that we should seek to avoid. That it was pasted without context may have given the impression that he actually supported such a strategy.
I mostly agree with your bullet points. Working toward cooperation and global unification, especially before things get ugly, is what I was suggesting in the opening post.
Even if uFAI would destroy its creators, people still have an incentive to skimp on safety measures in an arms-race situation, because they're trading off some increased chance of winning against some increased chance of killing everyone. If winning the race is better than letting someone else win, then you're willing to tolerate some increased risk of killing everyone. This is why I suggested promoting an internationalist perspective as one way to improve the situation: individual countries would then care less about winning the race.
BTW, it’s not clear that Clippy would kill us all. Like in any other struggle for power, a newly created Clippy might compromise with humans by keeping them alive and giving them some of what they want. This is especially likely if Clippy is risk averse.
Interesting. So there are backup safety strategies. That’s quite comforting to know, actually.
I think the scenario James elaborated was meant to be a fictional portrayal of a bad outcome that we should seek to avoid. That it was pasted without context may have given the impression that he actually supported such a strategy.
Oh thank God. I'd like to apologize for my behavior, but to be honest this community is oftentimes past my Poe's Law line, where I can no longer tell whether someone is acting out a fictional parody of an idea or actually believes it.
Next time I guess I’ll just assign much more probability to the “this person is portraying a fictional hypothetical” notion.
If winning the race is better than letting someone else win, then you’re willing to tolerate some increased risk of killing everyone.
Sorry, could you explain? I’m not seeing it. That is, I’m not seeing how increasing the probability that your victory equates with your own suicide is better than letting someone else just kill you. You’re dead either way.
That is, I’m not seeing how increasing the probability that your victory equates with your own suicide is better than letting someone else just kill you. You’re dead either way.
No worries. :-)
Say that value(you win) = +4, value(others win) = +2, value(all die) = 0. If you skimp on safety measures for yourself, you can increase your probability of winning relative to others, and this is worth some increased chance of killing everyone. Let me know if you want further clarification. :) The final endpoint of this process will be a Nash equilibrium, as discussed in "Racing to the Precipice," but what I described could be one step toward reaching that equilibrium.
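To make the arithmetic concrete, here's a minimal sketch in Python. The payoffs +4/+2/0 come from above; the probabilities are made up purely for illustration:

```python
# Payoffs from one racer's perspective (values from the example above).
V_WIN = 4    # value(you win)
V_LOSE = 2   # value(others win)
V_DOOM = 0   # value(all die)

def expected_value(p_win, p_doom):
    """Expected value given your probability of winning the race
    and the probability that the race kills everyone."""
    p_lose = 1 - p_win - p_doom
    return p_win * V_WIN + p_lose * V_LOSE + p_doom * V_DOOM

# Careful development: lower chance of winning, lower chance of doom.
careful = expected_value(p_win=0.40, p_doom=0.10)  # 0.40*4 + 0.50*2 = 2.6

# Skimping on safety: higher chance of winning, higher chance of doom.
skimp = expected_value(p_win=0.55, p_doom=0.20)    # 0.55*4 + 0.25*2 = 2.7

print(careful, skimp)  # 2.6 vs. 2.7: skimping wins despite doubled doom risk
```

With these (illustrative) numbers, skimping raises your expected value from 2.6 to 2.7 even though it doubles the chance that everyone dies; that gap is the arms-race incentive in miniature.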