I’m a negative utilitarian: I think having children is almost always a net negative act, and everyone should be free to choose death as an option, but otherwise my views aren’t as extreme as those of the character I played. In reality there are multiple problems with trying to destroy humanity. Most people enjoy life despite all the difficulties, and I’m not so arrogant as to think I know better than people themselves what’s good for them. Destroying humanity would go against people’s will in more than 90% of cases (the rest have suicidal thoughts; I don’t know the precise figure).
Yes, but the gatekeeper may be acting several levels deep in a roleplay (roleplaying a character roleplaying another character roleplaying another, and so on) to pass the time and to avoid emitting evidence that might allow the AI to pinpoint his preferences. The currently active character may respond to this in any of a rather large number of ways besides actually becoming more mentally pliable as a result of a loss of face (and may not even view the dialogue as a loss of face).
It amuses me that publishing this comment will make it more challenging to implement this strategy if I elect to play as Gatekeeper again at some point in the future.
Missing the point. What the hell were you doing gatekeeping an AI when you think AIs are universally evil?
Even the real person in this situation can lie, can’t he?
The AI could simply point out that 0 and 1 are not probabilities, and now by lying you’ve given the AI the intellectual high ground.
Well, to nitpick, I am certain that I exist (cogito) with P(1).
Well, my confidence that I exist exceeds my confidence that probability makes sense.
If the gatekeeper really believed that, he would just shut off the machine.