I have a rather straightforward argument—well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am—anyway, I have an argument that there is a strong possibility, let’s call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn’t developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly “counterfactual” quantum sibling branches); it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn’t have to be much of any local information around come FOOM.
Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn’t necessitate aid from quantum siblings.
You’ve done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with ‘lots’ of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole ‘taking transhumanism/singularitarianism seriously’ clique?
And that argument is … ?
How foolish of you to ask. You’re supposed to revise your probability simply based on Will’s claim that he has an argument. That is how rational agreement works.
Actually, rational agreement for humans involves betting. I’d like to find a way to bet on this one. AI-box style.
Bwa ha ha. I’ve already dropped way too many hints here and elsewhere, and I think it’s way too awesome for me to reveal, given that I didn’t come up with it and there is a sharper, more interesting, more general, more speculative idea that it would be best to introduce at the same time, because the generalized argument leads to an idea that is even more awesome by like an order of magnitude (but is probably like an order of magnitude less probable (though that’s just from the addition of logical uncertainty, not a true conjunct)). (I’m kind of in an affective death spiral around it because it’s a great example of the kinds of crazy awesome things you can get from a single completely simple and obvious inferential step.)