No, not “proven” but highly likely.
The likelihood hasn’t been proven either. This, this and this
How does the Orthogonality Thesis help your point?
Those aren’t terribly helpful or persuasive arguments. And the second (broken link) when repaired is, I believe, supposed to link to http://lesswrong.com/lw/bfj/evidence_for_the_orthogonality_thesis/68np yes? That’s not really that helpful, since it just amounts to saying that an AI won’t be a random point in mind space and that some possible methods (particularly uploads) might not be awful. That’s not exactly very strong as arguments go.
It’s not strong in the sense of reducing the likelihood of uFAI to 0, but it’s strong enough to disprove a confident “will be unfriendly”. Note that the combination of low likelihood and high impact (and asking for money to solve the problem) is a Pascal’s mugging.
So how low a likelihood do you need before it is a Pascal’s Mugging? 70%? 50%? 10%? 1%? Something lower?
That’s not my problem. It’s MIRI’s problem to argue that the likelihood is above their threshold.
… nnnot if your goal is “find out whether or not AI existential risk is a problem,” and not “win an argument with MIRI”.
Do the contradictions in the Bible matter? Are atheists trying to save their souls, or win an argument with believers?
You’ve argued that this is a Pascal’s mugging. So where do you set that threshold?
I argue that a sufficiently low likelihood is a Pascal’s Mugging, by MIRI’s own definition, so MIRI needs to show the likelihood is above that threshold.
I fail to follow that logic. There’s not some magic opinion associated with MIRI that’s relevant to this claim. MIRI’s existence and its opinions about how to approach this don’t alter at all whether or not this is an existential threat that needs to be taken seriously, whether the orthogonality thesis is plausible, or any of the other issues. That’s an example of the genetic fallacy.
Whether or not anyone should believe that this is an existential threat that needs to be taken seriously depends on whether or not the claim can be justified, and only MIRI is making this specific version of the claim. You are trying to argue “never mind the justification, look at the truth”, but truth is not knowable except by justifying claims. If MIRI/LW is making a kind of claim, a Pascal’s Mugging (as defined by MIRI/LW), that MIRI/LW separately maintains is not a kind of claim that should be believed, then MIRI/LW is making incoherent claims (like “Don’t believe holy books, but believe the Bible”).
Your first link is broken.