The obvious next question would be to ask whether you’re OK with your family being tortured under the various circumstances in which this suggests you would be.
I’ve lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it’s unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
The usual version of this I hear is from people who’ve read Minsky and/or Moravec, and feel we should treat any entity that can pass some reasonable Turing test as legally and morally human. I disagree because I believe a self-aware entity can be simulated (maybe not perfectly, but well enough that disproving its self-awareness becomes arbitrarily difficult) by a program that is not self-aware. And if such a standard were enacted, interest groups would use it to manufacture a large supply of these fakes and have them vote and/or fight for their side of political questions.
Oh. That’s an important distinction, yeah, but standard Singularity arguments suggest that by the time that would come up humans would no longer be making that decision anyway.
Um, if something is smart enough to solve every problem a human can, how relevant is the distinction? I mean, sure, it might (say) be lying about its preferences, but … surely it’ll have exactly the same impact on society, regardless?
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know whether they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
On the other hand, this sounds like a tribal battle-cry rather than a rational, non-mindkilled discussion.
It is. At some point I have trouble justifying the one without invoking the other. Some things are just so obvious to me, and so senselessly not-believed by many, that I see no peaceful way out other than dismissing those people. How do you argue with someone who isn’t open to reason?
ahem … I’m … actually from the other tribe. Pretty heavily in favor of a Nanny Welfare State, though I’m not sure I’d go quite so far as to say it’s “obvious” and anyone who disagrees must be “senseless … not open to reason”.
Care to trade chains of logic? A welfare state, in particular, seems kind of really important from here.
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I think the trouble with this sort of battle-cry is that it leads to, well, assuming the other side must be evil strawmen. It’s a problem. (That’s why political discussion is unofficially banned here, unless you make an effort to be super neutral and rational about it.)
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
For the same reason, I never expect judges, journalists, or historians to be “unbiased” because I don’t believe true “unbiasedness” is possible even in principle.
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.
But I see we agree on this.
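(A quick sketch of why this holds, assuming standard ΛCDM cosmology: because dark energy makes the scale factor $a(t)$ grow roughly exponentially, the cosmic event horizon

$$d_{EH}(t) = a(t) \int_t^{\infty} \frac{c\,dt'}{a(t')}$$

is finite, roughly 16 billion light-years today. Anything that recedes beyond it is permanently outside your future light-cone: no signal it emits can ever reach you, and vice versa.)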
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know whether they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
But is it possible to impersonate intelligence? Isn’t anything that can “fake” problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient, though potentially not sentient, which could be a problem)?
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.
What makes you think that “individual rights” are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they’re the correct moral theory, what evidence would you point to? You might change my mind.
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
Oh, everyone is misguided. (Hence the name of the site.) But they generally aren’t actual evil strawmen.