Actually, my first thought upon reading that was “follow the improbability”: be suspicious of elements of your world-model that seem particularly well optimized in some direction if you can’t see the source of the optimization pressure.
A much more concrete example is cloud computing. Granted, computers don’t “think,” but it’s a close enough analogy.
You must always keep in mind that there is no magic “cloud”, only concrete machines that other people own and keep hidden from you. Those people might have very different ideas than you do about matters such as privacy rights.
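To make the “no magic cloud” point concrete, here is a minimal sketch (Python standard library only; the hostname is purely illustrative): resolving a cloud service name just yields the addresses of specific machines that someone else owns and operates.

```python
import socket

def list_concrete_machines(hostname: str) -> list[str]:
    """Return the IP addresses a 'cloud' hostname actually resolves to."""
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the address of a real machine somebody else controls.
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for ip in list_concrete_machines("storage.googleapis.com"):
        print(ip)  # each address is hardware you neither see nor control
```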
This is the allusion I had in mind, but actually I’ve had occasion to quote this when talking about corporations and similar institutions. If an organization doesn’t keep its brain inside a human skull (and I’m sure some do), it seems guaranteed to make bizarre decisions. Anthropomorphizing corporations can be a dangerous mistake (it certainly has been for me more than once).
The reasonable way to interpret this seems to be “don’t trust something you don’t understand or cannot predict.” I’m not sure how seeing where it keeps its brain helps with that, though.
I can’t help but ask whether you’ve ever found this advice personally useful, and if so, how.
Never trust another computational agent unless you can see its source code?
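Taken literally, that criterion can even be mechanized in a language with runtime introspection. A tongue-in-cheek sketch (the helper call_if_source_visible is my own hypothetical name; run it as a script, since inspect cannot see functions typed into a bare REPL):

```python
import inspect

def call_if_source_visible(fn, *args, **kwargs):
    """Run fn only if its source code can actually be inspected."""
    try:
        source = inspect.getsource(fn)  # fails for C builtins and hidden code
    except (OSError, TypeError):
        raise RuntimeError(f"refusing to trust {fn!r}: source not visible")
    print(source)  # "seeing where it keeps its brain"
    return fn(*args, **kwargs)

def double(x):
    return 2 * x

print(call_if_source_visible(double, 21))  # source visible, prints 42

try:
    call_if_source_visible(max, [3, 1, 2])
except RuntimeError as err:
    print(err)  # max is a C builtin, so its "brain" is not visible from here
```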
Telemarketers.
Never trust other thinking beings if you don’t know the location of their intelligence center so that you can destroy it if necessary?
Never trust anyone unless you’re talking in person? :p
Talking to Clippy? As in, I don’t.
Why not?