I’m interested in FAI not ending up with values antagonistic to my own, and the value that seems most at risk is continuity. So I’m engaging FAI people on that issue, hoping that they convince me, that I convince them, or that we discover there are enough unknown unknowns that we should plan for the possibility that either point of view is wrong, and treat proposals to solve human problems via uploading as dangerous until those unknowns are filled in.
Ok, but the current state of the debate on FAI is already that we don’t trust human philosophers, that we need to plan for the possibility that all our assumptions are wrong, and that we should build the capability to deal with that into the FAI.
What we decide about patternism today has no relevance to what happens post-FAI, because everyone seriously working on it realizes it would be stupid for the FAI not to be able to revise everything to the correct position, or to discover the truth itself if we didn’t bother. So the only purpose of these philosophical discussions is either our own entertainment or making decisions before FAI; the FAI thing doesn’t actually come into it at all.
Rejecting destructive uploading and placing a priority on developing brain-machine interfaces, so that we aren’t operating blind on whether we have achieved equivalence.
This is very sensible even for a die-hard patternist. In that sense, patternism probably doesn’t come into the point you are making, which is that we should be really, really cautious with irreversible technological change, especially of the transhuman variety, because we can’t recover from it and the stakes are so high.
I, for one, think doing any transhuman stuff, and even a lot of mundane stuff like universal networking and computation, without adult supervision (FAI) is a really bad idea. We need to get FAI right as fast as possible so that we flawed humans don’t even have to make these decisions.