You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion ‘uploading doesn’t actually work’?
For any particular proposal for mind-uploading, there’s probably a significant risk that it doesn’t work, but I understand that to mean: there’s a risk that what it produces isn’t functionally equivalent to the person uploaded. Not “there’s a risk that when God/Ripley is watching everyone’s viewscreens from the control room, she sees that the uploaded person’s thoughts are on a different screen from the original’s.”
Of course there is such a risk. We can’t even do formal mathematics without significant and ineradicable risk in the final proof; what on earth makes you think any anti-zombie or anti-Ripley proof is going to do any better? And in formal math, you don’t usually have tons of experts disagreeing with the proof and final conclusion either. If you think uploading is so certain that the risk of it being fundamentally incorrect is zero or epsilon, you have drunk the Kool-Aid.
I’d rate the chance that early upload techniques miss some necessary components of sapience as reasonably high, but that’s a technical problem rather than a philosophical one. My confidence in uploading in principle, on the other hand, is roughly equivalent to my confidence in reductionism: which is to say pretty damn high, although not quite one or one minus epsilon. Specifically: for all possible upload techniques to generate a discontinuity in a way that, say, sleep doesn’t, it seems to me that not only do minds need to involve some kind of irreducible secret sauce, but also that that needs to be bound to substrate in a non-transferable way, which would be rather surprising. Some kind of delicate QM nonsense might fit the bill, but that veers dangerously close to woo.
The most parsimonious explanation seems to be that, yes, it involves a discontinuity in consciousness, but so do all sorts of phenomena that we don’t bother to note or even notice. Which is a somewhat disquieting thought, but one I’ll have to live with.
Actually, http://lesswrong.com/lw/7ve/paper_draft_coalescing_minds_brain/ seems to discuss a way for uploading to be a non-destructive transition. We know that the brain can learn to use implanted neurons under some very special conditions now; so maybe you could first learn to use an artificial mind-holder (without a mind yet) as a minor supplement, and then learn to use it more and more until the death of your original brain is just a flesh wound. Maybe not, but it does seem to be a technological problem.
Yeah, I was assuming a destructive upload for simplicity’s sake. Processes similar to the one you outline don’t generate an obvious discontinuity, so I imagine they’d seem less intuitively scary; still, a strong Searlean viewpoint probably wouldn’t accept them.
This double-negative “if you really believe not-X then you’re wrong” framing is a bit confusing, so I’ll just ask.
Consider the set P of all processes that take a person X1 as input and produce X2 as output, where there’s no known test that can distinguish X1 from X2. Consider three such processes:
P1 - A digital upload of X1 is created.
P2 - X1 is cryogenically suspended and subsequently restored.
P3 - X1 lives for a decade of normal life.
Call F(P) the probability that X2 is, in any sense that matters, not the same person as X1, or perhaps not a person at all.
Do you think F(P1) is more than epsilon different from F(P2)? Than F(P3)?
Do you think F(P2) is more than epsilon different from F(P3)?
For my part, I consider all three within epsilon of one another, given the premises.
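Or, restating those three questions in symbols (just a rephrasing of the above, with epsilon standing for the usual very small quantity): my claim is that |F(P1) - F(P2)| < epsilon, |F(P1) - F(P3)| < epsilon, and |F(P2) - F(P3)| < epsilon. That is, given the premise that no known test distinguishes X1 from X2, whatever residual probability of “X2 isn’t really X1” you assign, there’s no basis for assigning it very differently across the three processes.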
Erm, yes, to all three. The two transitions (uploading and cryonic suspension) both involve failure modes which are initially plausible and which have not been driven down to epsilon (a very small quantity) by subsequent research.
For example, we still don’t have great evidence that brain activity isn’t dynamically dependent on electrical activity (among other things!) which is destroyed by death/cryonics. All we have are a few scatter-shot examples about hypothermia and stuff, which is a level of proof I would barely deign to look at for supplements, much less claim that it’s such great evidence that it drives down the probability of error to epsilon!
OK, thanks for clarifying.
Indeed, the line in the quote could apply equally well to crossing a street. There is very, very little we can do without some “ineliminable risk” being attached to it.
We have to balance the risks and expected benefits of our actions, which requires knowledge, not philosophical “might-be”s.
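For instance, with purely made-up numbers: if declining a destructive upload means, say, a 95% chance of ordinary death with nothing preserved, while the upload itself carries a 10% chance of failing to preserve whatever matters, then the balance favors uploading (a 0.90 versus a 0.05 chance of coming out the other side). Reverse those two figures and it favors declining. Which regime we’re actually in is a question of knowledge, not of merely noting that a risk exists.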
Yes, I agree, as do the quotes and even Agar: because this is not Pascal’s wager, where the infinities render the probabilities irrelevant, we ultimately need to fill in specific probabilities before we can decide that destructive uploading is a bad idea. This is where Agar goes terribly wrong: he presents poor arguments that the probabilities will be low enough to make it an obviously bad idea. But I don’t think this point is relevant to this conversation thread.
It occurred to me when I was reading the original post, but I was inspired to post it here mostly as a me-too to your line; that is, reinforcing that everything has some “ineradicable risk”.
How would you show that my suggestions are less likely? The thing is, it’s not as though “nobody’s mind has annihilated” is data that we can work from. It’s impossible to have such data except in the first-person case, and even there it’s impossible to know that your mind didn’t annihilate last year and then recreate itself five seconds ago.
We’re predisposed to say that a jarring physical discontinuity (even if afterwards, we have an agent functionally equivalent to the original) is more likely to cause mind-annihilation than no such discontinuity, but this intuition seems to be resting on nothing whatsoever.
Yes. How bizarre of us to be so predisposed.
Nice sarcasm. So it must be really easy for you to answer my question then: “How would you show that my suggestions are less likely?”
Right?
Do you have any argument that all our previous observations, in which jarring physical discontinuities tend to be associated with jarring mental discontinuities (like, oh, I don’t know, death), are wrong? Or are you just trying to put the burden of proof on me and smugly use an argument from ignorance?
Of course, we haven’t had any instances of jarring physical discontinuities not being accompanied by ‘functional discontinuities’ (hopefully it’s clear what I mean).
But the deeper point is that the whole presumption that we have ‘mental continuity’ (in a way that transcends functional organization) is an intuition founded on nothing.
(To be fair, even if we accept that these intuitions are indefensible, it remains to be explained where they come from. I don’t think it’s all that “bizarre”.)