Congratulations on your insights, but please don’t snrk implement them until snigger you’ve made sure that oh heck I can’t keep a straight face anymore.
The reactions to the parent comment are very amusing. We have people sarcastically supporting the commenter, people sarcastically telling the commenter they’re a threat to the world, people sarcastically telling the commenter to fear for their life, people non-sarcastically telling the commenter to fear for their life, people honestly telling the commenter they’re probably nuts, and people failing to get every instance of the sarcasm. Yet at bottom, we’re probably all (except for private_messaging) thinking the same thing: that FinalState almost certainly has no way of creating an AGI and that no-one involved need feel threatened by anyone else.
Yet at bottom, we’re probably all (except for private_messaging) thinking the same thing: that FinalState almost certainly has no way of creating an AGI
Nah, I stated that the probability of him creating an AGI is epsilon. (My probability for his project hurting me is a microscopic epsilon, while my probability for SI hurting him somehow is a larger epsilon; I only asserted the relation that the latter is larger than the former. The probability of a person going unfriendly is way, way higher than the probability of a person creating an AGI that kills us all.)
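(To restate the claimed ordering explicitly, as a sketch; the symbol names are ours, not private_messaging's:

$$\varepsilon_{\text{me}} = P(\text{his project hurts me}) \;<\; \varepsilon_{\text{him}} = P(\text{SI hurts him}), \qquad \varepsilon_{\text{me}},\, \varepsilon_{\text{him}} \ll 1,$$

together with the broader claim that $P(\text{a person goes unfriendly}) \gg P(\text{a person builds an AGI that kills us all})$. Nothing more than these inequalities is being asserted; no absolute probabilities are given.)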
I think we’re all here making various sarcastic or semi-sarcastic points; my point is that, given SI’s stance, AGI researchers would (and have to) try to keep away from SI, especially those who have some probability of creating an AGI, weighing the probability of a useful contribution from SI against the probability of SI going nuts.
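(One way to read the implied calculus, as a rough expected-value sketch; the utility labels $U_{\text{help}}$ and $U_{\text{harm}}$ are hypothetical and not from the comment: a researcher should keep away from SI when

$$P(\text{useful contribution from SI}) \cdot U_{\text{help}} \;<\; P(\text{SI goes nuts}) \cdot U_{\text{harm}}.$$

On this reading the argument turns entirely on the ratio of the two probabilities, not on either being large in absolute terms.)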
I never thought you disagreed with:

that FinalState almost certainly has no way of creating an AGI
I actually meant that I thought you disagreed with:
and that no-one involved need feel threatened by anyone else.
Sorry for the language ambiguity. If you think the probability of SI hurting FinalState is epsilon, I misunderstood you. I thought you thought it was a large enough probability to be worth worrying about and warning FinalState about.