What if it’s a better simulation, though? What if we do some neuroscience to characterize how the whole system works, and some philosophy to characterize some of what is valuable about this thing built from neurons, and realize “Hey, this whole dependence on blood sugar levels, vitamin levels, oxygen availability, etc. is kind of a raw deal. I don’t want my very nature to be subject to such stupid things as what I did or did not eat this morning.”
Also, the OP did not merely say “I’m not indifferent”, it said “a person running on a different computational substrate might have no value, even though they are indistinguishable for all practical purposes, and an FAI thinks it’s OK.” At that level of conservatism, we might as well not do anything. What if the people we shoot into space have no value once they leave Earth’s gravity well? What if only that one tribe in the Amazon that’s never interacted with technology is actually made up of people? What if the humane, rational, superintelligent best guess of the thing to do is X, but that’s actually wrong and we should do Y?
These problems have very low probability, and we have much more important problems to worry about, like “what if we build what we think is an FAI, but it can’t do philosophy and we programmed in a bunch of stupid philosophical assumptions”, which is the more serious problem you alluded to in the OP. This problem was also discussed by MIRI people in the 2004 CEV document, and by Wei Dai with his “Philosophical AI” ideas. It could use more discussion.
I guess I got confused by you mixing in the FAI stuff with the patternism stuff. Are you more interested in whether patternism is true, or whether an FAI would get the right answer on that question?
What if there’s some guy next door who shares a lot of my personality and background but is smarter, funnier, saner, healthier, and more hardworking than I am? Maybe to you we are interchangeable, and if I die you’ll say “Oh well, we still have bokov-prime, they’re equivalent.” But it turns out that I’m not okay with that arrangement. His existence would not cause me to become any less concerned about protecting my own.
Also, the OP did not merely say “I’m not indifferent”, it said “a person running on a different computational substrate might have no value
I didn’t say no value. I said less value to me than I place on myself.
even though they are indistinguishable for all practical purposes, and an FAI thinks it’s OK
No. More like “even though they are indistinguishable by measurements possible at the time of the upload/copy, unless one could somehow directly experience both mental states”.
I guess I got confused by you mixing in the FAI stuff with the patternism stuff. Are you more interested in whether patternism is true, or whether an FAI would get the right answer on that question?
I’m interested in FAI not ending up with values antagonistic to my own. The value most at risk appears to be continuity. Therefore, I’m engaging FAI people on that issue in the hope that they will convince me, that I will convince them, or that we will discover there are enough unknown unknowns that we should plan for the possibility that either point of view could be wrong, and treat proposals to solve human problems via uploading as dangerous until those unknowns are filled in.
The approach I propose is neither doing nothing nor rejecting uploading. This conversation has helped me figure out what safeguard I do advocate: rejecting destructive uploading and placing a priority on developing brain-machine interfaces, so we aren’t operating blind on whether we have achieved equivalence or not.
I’m interested in FAI not ending up with values antagonistic to my own. The value most at risk appears to be continuity. Therefore, I’m engaging FAI people on that issue in the hope that they will convince me, that I will convince them, or that we will discover there are enough unknown unknowns that we should plan for the possibility that either point of view could be wrong, and treat proposals to solve human problems via uploading as dangerous until those unknowns are filled in.
OK, but the current state of the FAI debate is already that we don’t trust human philosophers, that we need to plan for the possibility that all our assumptions are wrong, and that we need to build the capability to deal with that into the FAI.
What we decide on patternism today has no relevance to what happens post-FAI, because everyone seriously working on it realizes it would be stupid for the FAI not to be able to revise everything to the correct position, or to discover the truth itself if we didn’t bother. So the only purpose of these philosophical discussions is either our own entertainment or making decisions before FAI; the FAI thing doesn’t actually come into it at all.
rejecting destructive uploading and placing a priority on developing brain-machine interfaces, so we aren’t operating blind on whether we have achieved equivalence or not.
This is very sensible, even for a die-hard patternist. In that way, even patternism probably doesn’t come into the point you are making, which is that we should be really, really cautious with irreversible technological change, especially of the transhuman variety, because we can’t recover from it and the stakes are so high.
I, for one, think doing any transhuman stuff, and even a lot of mundane stuff like universal networking and computation, without adult supervision (FAI) is a really bad idea. We need to get FAI right as fast as possible so that we flawed humans don’t even have to make these decisions.
What if we do some neuroscience to characterize how the whole system works, and some philosophy to characterize some of what is valuable about this thing built from neurons
Hidden in the phrases “do some neuroscience” and “some philosophy” are hard problems. What reason do you have for believing that either of them is an easier problem than creating a brain simulation that third parties will find convincing?