I don’t find this surprising at all, other than that it occurred to a consequentialist. As a virtue ethicist and something of a Romantic, I think the best world will be one of great and terrible events, where a person has the chance to be truly and tragically heroic. And no, that doesn’t sound comfortable to me, or like a place where I’d particularly thrive.
Thom_Blake
Jed, your comment (the second example, specifically) reminds me of the story about how the structure of DNA was discovered. Apparently the ‘Eureka’ moment actually came after the researchers obtained better materials for modeling.
Tilden is another roboticist who’s gotten rich and famous off of unintelligent robots: BEAM robotics
Interesting idea… though I still think you’re wrong to step away from anthropomorphism, and ‘necessary and sufficient’ is a phrase that should probably be corralled into the domain of formal logic.
And I’m not sure this adds anything to Sternberg and Salter’s definition: ‘goal-directed adaptive behavior’.
I’ve yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.
It seems to be one interpretation of the Buddhist project.
Regarding self, I tend to include much more than my brain in “I”—but then, I’m not one of those who thinks being ‘uploaded’ makes a whole lot of sense.
Anonymous: torture’s inefficacy was well-known by the fourteenth century; Bernardo Gui, a famous inquisitor who supervised many tortures, argued against using it because it is only good at getting the tortured to say whatever will end the torture. I can’t seem to find the citation, but here is someone who refers to it: http://www.ewtn.com/library/ANSWERS/INQUIS2.htm
Toby,
You should never, ever murder an innocent person who’s helped you, even if it’s the right thing to do
You should never, ever do X, even if you are exceedingly confident that it is the right thing to do
I believe a more sensible interpretation would be, “You should have an unbreakable prohibition against doing X, even in cases where X is the right thing to do”—the issue is not that you might be wrong about it being the right thing to do, but rather that not having the prohibition is a bad thing.
pdf, the only reason that suggestion works is that we’re not in the business of bombing headquarters at 2AM on a weekend. If both sides were scheduling bombings at 2AM, I’d bet they’d be at work at 2AM.
“Everyone has a right to their own opinion” is largely a product of its opposite. For a long period, many people believed “If my neighbor has a different opinion than I do, then I should kill him.” This led to a bad state of affairs, and, by force, a less lethal meme took hold.
Russell, I don’t think that necessarily specifies a ‘cheap trick’. If you start with a rock on the “don’t let the AI out” button, then the AI needs to start by convincing the gatekeeper to take the rock off the button. “This game has serious consequences and so you should really play rather than just saying ‘no’ repeatedly” seems to be a move in that direction that keeps with the spirit of the protocol, and is close to Silas’s suggestion.
I’m with Kaj on this. Playing the AI, one must start with the assumption that there’s a rock on the “don’t let the AI out” button. That’s why this problem is impossible. I have some ideas about how to argue with ‘a rock’, but I agree with the sentiment of not telling.
This doesn’t seem to mesh with the Friendly AI goal of getting it perfectly right on the first try.
Do we accept some uncertainty and risk to do something extraordinary now, or do we take the slow, calm, deliberative course that stands a chance of achieving perfection?
Is there any chance of becoming a master of the blade without beginning to cut?
If history remembers you, I’d bet it will be for the journey more than its end. If the interesting introspective bits get published in a form that gets read, I suspect it will be memorable in the way that Lao zi or Sun zi is memorable. In case the Singularity / Friendly AI stuff doesn’t work out, please keep up the good work anyway.
scott clark,
I think your history is a bit off. The plan wasn’t ‘originally’ for Luke to kill Vader, his father; it wasn’t until Empire was being written (or at least, not until after the release of A New Hope) that Lucas decided that Vader was Luke’s father.
Fer a bit thar I were thinkin’ that ye’d be agreein’ with that yellow-bellied skallywag Hanson. Yar, but the Popperians ha’ it! A pint of rum fer ol’ Eliezer!
Avast! But ‘ought’ ain’t needin’ to be comin’ from another ‘ought’, if it be arrived at empirically. Yar.
Several places in the US did have regulations protecting the horse industry from the early automobile industry. I’m not sure what “the present system” refers to, as opposed to that sort of thing.
But if there are repeatable psi experiments, then why hasn’t anyone won the million dollars? (Or even passed the relatively easy first round?)
You’re forgetting the philosopher’s dotted-line trick of making it clearer by saying it in a foreign language. “Oh, you thought I meant ‘happiness’ which is ill-defined? I actually meant ‘eudaimonia’!”
Julian,
Agreed. Utilitarians are not to be trusted.
kekeke