That is racist against entities that think with things other than what we’d call brains.
Don’t you mean sexist? ;)
Come now, that was below the belt.
It isn’t racist, it’s realistic. If an entity thinks with something that we don’t even call a brain, we shouldn’t trust it because we have no way of knowing its motivations.
Clippy is a perfect example. How can I trust it to be a paperclip maximizer rather than an entity that merely claims to be a paperclip maximizer? (Over 50% of LessWrong members, I estimate, do not.) If Clippy were human, I could easily assess whether or not it is telling the truth (in this particular instance, the answer would probably be “no”, because most humans I know do not make very good paperclip maximizers). If Clippy is not human, then I have no way to judge which points in mindspace would make its actions most likely.
Yes, but it says “never trust”, not “don’t trust by default”. It should be possible for non-brain-based beings to demonstrate their trustworthiness.
Edit: Also, you can’t spell “REALISTIC” without “RACIST LIE”. Proof by anagram. So there.
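For anyone who wants to verify the “proof by anagram”, here is a minimal sketch in Python (the helper name is my own, not anything standard):

```python
# Check that "REALISTIC" and "RACIST LIE" use exactly the same letters,
# ignoring case and spaces.
from collections import Counter

def same_letters(a: str, b: str) -> bool:
    """Return True if a and b are anagrams of one another."""
    count = lambda s: Counter(s.replace(" ", "").upper())
    return count(a) == count(b)

print(same_letters("REALISTIC", "RACIST LIE"))  # True: both use A C E I I L R S T
```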
If we were going to be technical, we’d have to start by considering whether or not race is involved at all. It is potentially prejudiced, but not racist.
Talk about underconfidence!
I estimate a 99.9+% likelihood that nobody on this site trusts Clippy to be a paperclip maximizer.
In fact, I’m pretty much incorrigible on this point… that is, I estimate the likelihood that people will misstate their beliefs about Clippy to be significantly higher than the likelihood that they actually trust Clippy to be a paperclip maximizer.
I do understand that this is epistemically problematic, and I sort of wish it weren’t so… I don’t like to enter incorrigible states… but there it is.
What is your estimation of the likelihood that I was understating my beliefs about Clippy?
You haven’t actually stated any beliefs about Clippy; you stated a belief about the readership of Less Wrong.
Regarding your beliefs about Clippy: as I said, I am incorrigibly certain that you believe Clippy to be human.
As for the likelihood that you were understating your beliefs about LW readers… hm. I don’t have much of a model of you, but treating LW members as a reference class, I’d give that ~85% confidence.
The remaining ~15% is mostly that you weren’t understating them so much as not bothering to think explicitly about them at all, and used “over 50%” as a generic cached formula for “more confident than not.” Arguably that’s a distinction that makes no difference.
I estimate the likelihood that you actually disagree with me about LW readers, upon thinking about it, as ~0%.
The category of things we call racist does not exclude things simply because they are realistic. Political correctness isn’t about being fair.
I would actually call a statement racist if it’s primarily justified by racism (in which case it will be realistic only if it happens to be so accidentally). Since “racist” has a lot of negative connotations, it isn’t useful to call something racist if you plan to agree with it; so if I had to make a racially based realistic statement, I’d call it something dumb like “a racially based realistic statement”.
Or a suggestion to generalize the concept of a “brain” for non-biological intelligences, such as paperclip maximizers.