I’d say it depends on how complete you think modern neuroscience is. If you think neuroscience is fairly complete and there won’t be many gotchas about how things work, then I would adopt your view.
The less complete it is, and the more known unknowns and unknown unknowns stand between us and a full understanding, the greater the chance that one of those unknowns interacts with the vitrification fluid, or with how quickly we currently manage to get people vitrified (we might not be quick enough to preserve some chemical structures).
I half-jokingly compared it to alchemy in the pre-chemistry days: they had so many unknown unknowns that people couldn’t find convincing arguments against it. Should they have expected it to work? That is an extreme example, though; I think we have a better handle on the brain than the alchemists had on the possibility of transmuting lead into gold.
But remember that the alchemists’ conclusion was correct: you can turn lead into gold. It’s harder than they thought, and the tools at our disposal are far more effective than they could have imagined. In the end, the power of the tools won out over the difficulty of the problem.
In a case of uncertainty, you assign broad probability distributions. If the bet has a lopsided cost function, then uncertainty, “unknown unknowns”, etc., are reasons to take the bet. More uncertainty = more expected payoff.
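A minimal sketch of that argument, with made-up numbers (the cost, payoff, and both probability models are purely illustrative assumptions, not figures from anyone’s actual cryonics estimate): with a lopsided payoff, spreading your probability mass over a range of success probabilities, rather than committing to a confident near-zero point estimate, can flip the expected value of the bet.

```python
import numpy as np

rng = np.random.default_rng(0)

cost = 1.0        # hypothetical cost of taking the bet (normalized units)
payoff = 1000.0   # hypothetical payoff if it works -- lopsided vs. the cost

# Confident model: a tight point estimate that success is very unlikely.
p_point = 0.0001
ev_confident = p_point * payoff - cost

# Uncertain model: we admit we don't know p, so we spread probability mass
# over a range of possible values instead of committing to the point estimate.
p_samples = rng.beta(0.5, 50, size=100_000)   # broad distribution over p
ev_uncertain = np.mean(p_samples * payoff - cost)

print(f"EV with confident point estimate:  {ev_confident:+.2f}")
print(f"EV with broad distribution over p: {ev_uncertain:+.2f}")
```

The expected value is linear in p, so what matters is that the broad distribution has a higher mean than the near-zero point estimate; combined with the asymmetric payoff, acknowledging uncertainty pushes the bet from negative to positive expected value in this toy setup.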