BBC Radio : Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.
Lapsed_Lurker
Drat. I just came here to post that. Still, at least this time I only missed by hours.
You need a different definition for ‘blackmail’ then. Action X might be beneficial to the blackmailer rather than negative in value and still be blackmail.
Why not taboo ‘blackmail’? That word already has a bunch of different meanings in law and common usage.
Omega gives you a choice of either $1 or $X, where X is either 2 or 100?
It seems like you must have meant something else, but I can’t figure it out.
Isn’t that steel-man, rather than strong-man?
Reading that, I thought: “I bet people asking questions like that is why ‘Original Sin’ got invented”.
Of course, the next step is to ask: “Why doesn’t the priest drown the baby in the baptismal font, now that its Original Sin is forgiven?”
…
I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn’t investing in
Are there lists like this about? I think I’d like to read about that sort of stuff.
I remember seeing a few AI debates (and sometimes debates on other topics, mostly on YouTube) where they’d just be getting to the point of clarifying what each side actually believes, and then you get: ‘agree to disagree’. The end.
Just when the really interesting part seemed to be approaching! :(
For text-based discussions that fail to go anywhere, that brings to mind the ‘talking past each other’ you mention, or ‘appears to be deliberately misinterpreting the other person’.
Has there been any evolution in either of their positions since 2008, or is that the latest we have?
edit Credit to XiXiDu for sending me this OB link, which contains in the comments this YouTube video of a Hanson-Yudkowsky AI debate from 2011. Boiling it down to one sentence, I’d say it amounts to Hanson thinking that a singleton Foom is a lot less likely than Yudkowsky thinks.
Is that more or less what it was in 2008?
I find it is the downsides of those things that I generally blame for not doing them, though I do own a Bon Jovi CD.
…powers such as precognition (knowledge of the future), telepathy or psychokinesis…
Sounds like a description of magic to me. They could have written it differently if they’d wanted to evoke the impression of super-advanced technologies.
I hope that happens quickly. There are systems in my body that need some re-engineering, lest I die even sooner than the average Englishman.
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for making cheesecake.
Several comments on the original thread seem to be making a comparison between “I found a complicated machine-thing, something must have made it” and the classic anti-evolution “This looks complicated, therefore God”.
I can’t quite see how they can leap from one to the other.
So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there’s no Omega and people are tempted to signal about how attached to the truth they are, or something.
I am worried about “a belief/fact in its class”: the class chosen could have an extreme effect on the outcome.
OpenOffice file, I think. edit OpenDocument Presentation. You ought to be able to view it with more recent versions of MS Office, it seems.
[pollid:49]
Surely if you provably know what the ideal FAI would do in many situations, a giant step forward has been made in FAI theory?