How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
How much do you want to bet on the conjunction of yours?
Just as an exercise, let's estimate the probability of the conjunction of my claims.
claim A: I think the idea of a single 'self' in the brain is provably untrue according to currently understood neuroscience. I do honestly think so, therefore P(A) is as close to 1.0 as makes no difference. Whether I'm right is another matter.
claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.
claim C: The thing I claim to think in claim B is in fact "usually" true. P(C) maybe 0.97, because I haven't really thought it through, but I reckon a random sample of 20 such instances would be unlikely to reveal 10 exceptions, defeating the "usually".
claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.
claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?
claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I’m pretty sure I’d have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I’ve been busy or something.
P(A ∧ B ∧ C ∧ D ∧ E ∧ F) is approx 0.96.
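For anyone who wants to check the arithmetic, here's a minimal sketch in Python, under the simplifying assumption that the six claims are independent, with the "close to 1" estimates rounded up to exactly 1.0:

```python
# Minimal sketch: multiply the six probability estimates above,
# assuming (simplistically) that the claims are independent.
estimates = {
    "A": 1.0,    # "as close to 1.0 as makes no difference"
    "B": 0.998,  # "approx 0.998"
    "C": 0.97,   # "maybe 0.97"
    "D": 1.0,    # "very close to 1"
    "E": 1.0,    # "also close to 1"
    "F": 0.99,   # "0.99"
}

conjunction = 1.0
for claim, p in estimates.items():
    conjunction *= p

print(f"P(A..F) ~ {conjunction:.3f}")  # ~0.958, i.e. approx 0.96
```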
The amount of money I’d bet would depend on the odds on offer.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
No, you’re right. You did technically answer my question, it wasn’t rude, I should have made my intended point clearer. But your answer is really a restatement of your refutation of Mitchell Porter’s position, not an affirmative defense of your own.
First of all, have I fairly characterized your position in my own post (near the bottom, starting with “For patternists to be right, both the following would have to be true...”)?
If I have not, please let me know which of the conditions are not necessary and why.
If I have captured the minimum set of things that have to be true for you to be right, do you see how they (at least the first two) are also conjunctive and at least one of them is provably untrue?
Oh, OK. I get you. I don’t describe myself as a patternist, and I might not be what you mean by it. In any case I am not making the first of those claims.
However, it seems possible to me that a sufficiently close copy of me would think it was me, experience being me, and would maybe even be more similar to me as a person than biological me of five years ago or five years hence.
I do claim that it is theoretically possible to construct such a copy, but I don’t think it is at all probable that signing up for cryonics will result in such a copy ever being made.
If I had to give a reason for thinking it's possible in principle, I'd have to say: I am deeply sceptical that there is any need for a "self" to be made of anything other than classical physical processes. I don't think our brains, however complex, require anything more mysterious in their physical construction than room-temperature chemistry.
The amazing mystery of the informational complexity of our brains is undiminished by believing it to be physically prosaic when you reduce it to its individual components, so it’s not like I’m trying to disappear a problem I don’t understand by pretending that just saying “chemistry” explains it.
I stand by my scepticism of the self as a single indivisible entity with special properties that are posited only to make it agreeable to someone’s intuition, rather than because it best fits the results of experiment. That’s really all my post was about: impatience with argument from intuition and argument by hand-waving.
I’ll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
...and this is a weakly continualist concern that patternists should also agree with even if they disagree with the strong form (“a copy forked off from me is no longer me from that point forward and destroying the original doesn’t solve this problem”).
But this weak continualism is enough to throw some cold water on declaring premature victory in cryonic revival: the lives of humans have worth not only to others but to themselves, and just how close is "close enough", and how to tell the difference, are central to whether lives are being saved or taken away.