Never trust anything that can think for itself if you can’t see where it keeps its brain.
--J. K. Rowling, Harry Potter and the Chamber of Secrets
I can’t help but ask whether you’ve ever found this advice personally useful, and if so, how.
Actually my first thought upon reading that was “follow the improbability”—be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can’t see the source of the optimization pressure.
Never trust another computational agent unless you can see its source code?
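Taken literally, that criterion is almost implementable. Here is a minimal toy sketch (all names are hypothetical, and checking for a keyword in the source is only a crude stand-in for actually verifying an agent's goals): trust an agent only if its source is inspectable and its visible behavior matches its claims.

```python
# Toy illustration of "never trust an agent unless you can see its source
# code". The names and the keyword check are hypothetical stand-ins,
# not a real verification procedure.
import inspect


def claims_to_maximize_paperclips():
    """A candidate agent whose decision procedure is open to inspection."""
    return "convert all available matter into paperclips"


def trust(agent) -> bool:
    """Trust `agent` only if we can see where it keeps its 'brain'."""
    try:
        source = inspect.getsource(agent)
    except (TypeError, OSError):
        # Opaque agent (e.g. a compiled builtin): we can't see the brain.
        return False
    # Crude check that the visible goals match the claimed ones.
    return "paperclips" in source


print(trust(claims_to_maximize_paperclips))  # True: source is visible
print(trust(len))                            # False: len is a C builtin
```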
A much more concrete example is cloud computing. Granted, computers don’t “think,” but it’s a close enough analogy.
You must always keep in mind that there is no magic "cloud", only concrete machines that other people own and keep hidden from you. People who might have very different ideas than you on such matters as, for example, privacy rights.
This is the allusion I had in mind, but actually I've had occasion to quote this when talking about corporations and similar institutions. If an organization doesn't keep its brain inside a human skull (and I'm sure some do keep it there), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (it certainly has been for me more than once).
Telemarketers.
The reasonable way to interpret this seems to be “don’t trust something you don’t understand/cannot predict.” Not sure how seeing where it keeps its brain helps with that, though.
Never trust other thinking beings if you don’t know the location of their intelligence center so that you can destroy it if necessary?
Never trust anyone unless you’re talking in person? :p
Talking to Clippy? As in, I don’t.
Why not?
That is racist against entities that think with things other than what we’d call brains.
Don’t you mean sexist? ;)
Come now, that was below the belt.
It isn’t racist, it’s realistic. If an entity thinks with something that we don’t even call a brain, we shouldn’t trust it because we have no way of knowing its motivations.
Clippy is a perfect example. How can I trust it to be a paperclip maximizer rather than merely an entity that claims to be a paperclip maximizer? (I estimate that over 50% of LessWrong members do not trust it to be one.) If Clippy were human, I would be able to easily assess whether or not it was telling the truth (in this particular instance, the answer would probably be "no", because most humans I know do not make very good paperclip maximizers). If Clippy is not human, then I have no way to judge which points in mindspace would make its actions most likely.
Yes, but it says “never trust”, not “don’t trust by default”. It should be possible for non-brain-based beings to demonstrate their trustworthiness.
Edit: Also, you can’t spell “REALISTIC” without “RACIST LIE”. Proof by anagram. So there.
If we were going to be technical, we'd have to start by considering whether race is involved at all. It is potentially prejudiced, but not racist.
Talk about underconfidence!
I estimate a 99.9+% likelihood that nobody on this site trusts Clippy to be a paperclip maximizer.
In fact, I’m pretty much incorrigible on this point… that is, I estimate the likelihood that people will mis-state their beliefs about Clippy to be significantly higher than the likelihood that they actually trust Clippy to be a paperclip maximizer.
I do understand that this is epistemically problematic, and I sort of wish it weren't so… I don't like to enter incorrigible states… but there it is.
What is your estimate of the likelihood that I was understating my beliefs about Clippy?
You haven’t actually stated any beliefs about Clippy; you stated a belief about the readership of Less Wrong.
Regarding your beliefs about Clippy: as I said, I am incorrigibly certain that you believe Clippy to be human.
As for the likelihood that you were understating your beliefs about LW readers… hm. I don’t have much of a model of you, but treating LW-members as a reference class, I’d give that ~85% confidence.
The remaining ~15% is mostly that you weren’t understating them so much as not bothering to think explicitly about them at all, and used “over 50%” as a generic cached formula for “more confident than not.” Arguably that’s a distinction that makes no difference.
I estimate the likelihood that you actually disagree with me about LW readers, upon thinking about it, as ~0%.
That category of things that we call racist does not exclude things simply because they are realistic. Political correctness isn’t about being fair.
I would actually call a statement racist if it's primarily justified by racism (in which case it will be realistic only if it happens to be so accidentally). Since "racist" has a lot of negative connotations, it isn't useful to call something racist if you plan to agree with it; so if I had to make a racially-based realistic statement, I'd call it something dumb like "a racially-based realistic statement."
Or a suggestion to generalize the concept of a “brain” for non-biological intelligences, such as paperclip maximizers.