This is the argument of the beard. You can pluck one hair from a bearded man and he still has a beard, therefore by induction you can pluck all the hairs and he still has a beard.
Or if you stipulate that replacing N neurons not merely causes no “significant” change, but absolutely no change at all, even according to observations that we don’t yet know we would need to make, then you’ve baked the conclusion into the premises.
If I continually pluck hairs from my beard then I have noticeably less of a beard. Eventually I will have no beard. Replacing some neurons with the given procedure does not change behavior so the subject cannot notice a change. If the subject noticed a change then there would be a change in behavior. If you assert that a change in consciousness occurred, then you assert that a change in consciousness does not produce a change in consciousness to notice it.
We can fall asleep without noticing, but there is always a way to notice the changes. One can decide to be vigilant and use self awareness to prevent oneself from falling asleep, for example. After the procedure of replacing any arbitrary number of neurons, one cannot notice an internal change at all regardless of any self evaluation of consciousness one decides to do. What standard of deciding claims of consciousness can possibly supersede consciousness evaluating itself? If I had a million neurons replaced and could not possibly notice a difference, how could you honestly justify a claim that my identity was degraded?
Almost all gradual-brain-to-device replacement arguments are indeed sorites arguments. You assume:
Plucking 2 hairs from a beard that has 10000 hairs is too small an action to change the beard visibly (true)
Plucking 2 hairs from a beard with 9998 hairs is too small a change to see (true)
Plucking 2 hairs from a beard with 9996 hairs is too small a change to see (true)
...
Plucking 2 hairs 4000 times from a beard is too small a change to see (false)
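A toy calculation may make the failure mode concrete. Everything here is an illustrative assumption (a 10000-hair beard, 2 hairs per pluck, a made-up 5% visibility threshold), not anything claimed in the exchange itself:

```python
# Toy sorites model: every individual pluck falls below the visibility
# threshold, yet the accumulated change is far above it.
START_HAIRS = 10000   # assumed starting beard
PLUCK_SIZE = 2        # hairs removed per step
THRESHOLD = 0.05      # assume a 5% relative change is the smallest visible one

def visibly_different(before: int, after: int) -> bool:
    """True if the relative change between two beard states is visible."""
    return abs(before - after) / before >= THRESHOLD

hairs = START_HAIRS
for step in range(4000):
    previous = hairs
    hairs -= PLUCK_SIZE
    # The per-step premise holds at every single step:
    assert not visibly_different(previous, hairs)

# Yet the endpoints are clearly distinguishable: 10000 vs 2000 hairs.
print(visibly_different(START_HAIRS, hairs))  # True
```

Each "too small to see" premise is individually true, but "too small to see" does not compose, which is the whole point of the beard.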
If plucking hairs changes my beard then there will be a point at which it is noticeable before it is completely gone. My beard does not go from existing to not existing in a single pluck.
My consciousness does not go from existing to not existing in a single neuron pluck. My identity does not radically change in a single pluck. There is a continuum of small changes that lead to large changes. There will come a point at which the changes accumulate that can be noticed.
Note that I’m not referring to gradual changes through time, but to a single procedure, occurring once, that replaces N neurons in one go.
Assume that the procedure does produce a significant change to consciousness at some number of replacements U, where “significant” means noticeable but not crippling. Then there is a number of replacements N, with 0 < N ≤ U, such that N replacements is noticeable by the subject while N-1 replacements is not. Noticing is a binary, yes-or-no matter: the subject can be asked to say yes or no to whether a change is noticed.
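Put formally, this is just the least number principle. A sketch, where noticed(n) is the assumed binary report after n replacements (the notation is mine, not part of the original argument):

```latex
% If no change is reported at 0 replacements and one is reported at U,
% there is a least N at which the report flips from "no" to "yes".
\[
\mathrm{noticed}(0)=0 \;\wedge\; \mathrm{noticed}(U)=1
\;\Longrightarrow\;
N := \min\{\,n \le U : \mathrm{noticed}(n)=1\,\}
\text{ exists, with } 0 < N \le U \text{ and } \mathrm{noticed}(N-1)=0.
\]
```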
The crucial part of the argument is that one cannot in any way notice any difference regardless of how many neurons are altered during the procedure because the specified procedure preserves behavior. Conscious awareness corresponds with behavior. If behavior cannot change when the procedure alters a third of the brain, then consciousness cannot noticeably change. If consciousness is noticeably changed from an internal perspective then a difference in behavior can be produced.
One advantage to a thought experiment is that it can be scaled without cost. Instead of your sorites series, let us posit a huge number of conscious humans. We alter each human to correspond to a single step in your gradual change over time, so that we wind up performing in parallel what you posit as a series of steps. Line our subjects up in “stage of alteration” order.
Now the conclusion of your series of steps corresponds to the state of the last subject in our lineup. Is this subject’s consciousness the same as at the start? If we assume yes, then we have assumed our conclusion, and the argument assumes its conclusion.
If we assume for the sake of argument that the subject’s consciousness at the end of our lineup differs from that at the start, then we can walk along the line and locate where we first begin to notice a change. This point might vary between groups of subjects, but we can certainly then find a mean for where the change may start. This is possible even if, moving along the series, we cannot perceive a difference between the subject at one step and the next.
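A toy simulation of the lineup, under assumed numbers (a one-dimensional observable trait, a fixed per-stage drift, Gaussian observation noise, and a made-up noticeability threshold; only the structure comes from the thought experiment):

```python
import random

STEP_EFFECT = 0.001  # assumed true drift in the trait per alteration stage
NOISE = 0.02         # assumed observation noise per measurement
STAGES = 1000        # one subject per stage of alteration

def observed(stage: int) -> float:
    """Noisy measurement of the subject standing at a given stage."""
    return stage * STEP_EFFECT + random.gauss(0, NOISE)

def first_noticeable(threshold: float = 0.15) -> int:
    """Walk the lineup, comparing each subject against the unaltered
    subject at stage 0, and return the first stage that differs noticeably."""
    baseline = observed(0)
    for stage in range(1, STAGES + 1):
        if abs(observed(stage) - baseline) >= threshold:
            return stage
    return STAGES

# Adjacent subjects differ by ~0.001, well under the threshold, so no
# step-to-step comparison notices anything; comparison against the start
# of the line eventually does. Averaging many lineups estimates where
# the change "may start".
trials = [first_noticeable() for _ in range(200)]
print(sum(trials) / len(trials))
```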
You refer to doing this k times. There is your gradual process, your argument by the beard.
If A is indistinguishable from B, and B is indistinguishable from C, it does not follow that A is indistinguishable from C.
Where did I say “times”? I meant that kN neurons are effectively replaced at once. I said in the argument that the neurons are replaced with a negligible time difference.
Doing them all at once doesn’t help. You are still arguing that if kN neurons make no observable difference, then neither do (k+1)N, for any k. This is not true, and the underlying binary concept that it either does, or does not, make an observable difference does not fit the situation.
Let P(n) designate the proposition that the procedure does not alter current or future consciousness if n neurons are replaced at once.
1. P(0) is true.
2. Suppose P(k) is true for some number k. Then replacing k neurons does not change consciousness for the present or the future. Replace a single extra neuron within a negligible amount of time of the former replacement, such as the reaction time of a single neuron divided by the total number of neurons in the brain. # Replacing a single neuron in an unaltered consciousness with a functional replacement produces no change in current or future consciousness. # Therefore P(k+1) is true.
By mathematical induction, P(n) is true for all n ≥ 0.
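For reference, the schema being invoked, with the statement between the hashtags isolated as the premise doing all the work (this is a restatement of the steps above, not an addition to them):

```latex
\[
\underbrace{P(0)}_{\text{step 1}}
\;\wedge\;
\underbrace{\forall k \ge 0\;\bigl(P(k) \rightarrow P(k+1)\bigr)}_{\text{step 2, resting on the hashtag statement}}
\;\Longrightarrow\;
\forall n \ge 0\; P(n).
\]
```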
The proof uses mathematical induction. The only way to argue against it is to show that step 1 or step 2 is false. P(0) is obviously true. The supposition is valid because P(k) is true for at least one k, namely k = 0. One must then demonstrate that the statement between the hashtags is false. As I implied in my update, the statement between the hashtags is not necessarily true.
Then that undercuts the whole argument. That is exactly the argument by the beard. It depends on indistinguishability being a transitive property, but it is not. If A and B are, for example, two colours that you cannot tell apart, and also B and C, and also C and D, you may see a clear difference between A and D.
You cannot see grass grow from one minute to the next. But you can see it grow from one day to the next.
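A minimal sketch of that non-transitivity, assuming a simple just-noticeable-difference model (the JND value and the stimulus values A through D are made up for illustration):

```python
JND = 1.0  # assumed just-noticeable difference between two stimuli

def indistinguishable(x: float, y: float) -> bool:
    """Two stimuli cannot be told apart if they differ by less than the JND."""
    return abs(x - y) < JND

A, B, C, D = 0.0, 0.9, 1.8, 2.7

print(indistinguishable(A, B))  # True
print(indistinguishable(B, C))  # True
print(indistinguishable(C, D))  # True
print(indistinguishable(A, D))  # False: sub-threshold steps accumulate
```

Indistinguishability here is reflexive and symmetric but not transitive, which is exactly the property the argument by the beard needs and lacks.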
“Indistinguishability” in my original argument meant the absence of any behavior change that would reflect the subject’s awareness of a change in consciousness. The replacement indistinguishability is not transitive. Regardless of how many are replaced in any order there cannot be a behavior change, even if it goes as A to B, A to C, A to D...
I think we differ in that I assumed that a change in consciousness can be manifested in a behavior change. You may disagree with this and claim that consciousness can change without the behavior being able to change.
The replacement indistinguishability is not transitive.
I assume that’s a typo for “is transitive”.
Regardless of how many are replaced in any order there cannot be a behavior change, even if it goes as A to B, A to C, A to D.
Why not? If you assume absolute identity of behaviour, you’re assuming the conclusion. But absolute identity is unobservable. The best you can get is indistinguishability under whatever observations you’re making, in which case it is not transitive. There is no way to make this argument work without assuming the conclusion.
All proofs at least implicitly contain the conclusion in the assumptions or axioms. That’s because proofs don’t generate information; they just unravel what one has already assumed by definition or axiom.
So yes, I’m implicitly assuming the conclusion in the assumptions. The point of the proof was to convince people who agreed with all the assumptions in the first place but who did not believe the conclusion. There are people who do accept the assumptions yet do not accept the conclusion, which, as you say, is contained in the assumptions.