If the idea of AGI is supposed to be a major infohazard, so much so that the majority of the population doesn’t know about it, you cannot use prediction markets for cryo as a proxy for solving AGI alignment. Most people who invest in this particular prediction market are blind to the relevant information. Unless the Keepers have the money to outbid everyone? But then it’s not a prediction market anymore, it’s a press statement from the Keepers.
Worse than that, the relationship between “actually solving alignment” and “predicting revival” is very weak. The most obvious flaw is that they might succeed in reviving the Preserved without ever having developed aligned AGI; being careful, they don’t develop unaligned AGI either. This scenario could easily account for almost all of the 97% assessed probability of successful revivals.
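To make that concrete (the numbers below are purely illustrative, not from the story), decompose the market’s probability over whether aligned AGI is ever built:

$$P(\text{revival}) = P(\text{revival}\mid\text{aligned AGI})\,P(\text{aligned AGI}) + P(\text{revival}\mid\text{no aligned AGI})\,P(\text{no aligned AGI})$$

Plugging in, say, $P(\text{aligned AGI}) = 0.005$, $P(\text{revival}\mid\text{aligned AGI}) = 1$, and $P(\text{revival}\mid\text{no aligned AGI}) = 0.97$ gives $0.005 + 0.995 \times 0.97 \approx 0.97$: the market can read 97% while the implied chance of ever solving alignment stays at half a percent.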
There are other flaws as well, any of which would be consistent with a <1% chance of ever solving alignment.
Given the sheer complexity of the human brain, it seems very unlikely that anyone could assess a 97% probability of revival without conditioning on AI in some strong form, if not full AGI.
It also seems very unlikely that anyone could credibly assess a 97% probability of revival for the majority at all if they haven’t already carried it out successfully at least a few times. Even a fully aligned, strongly superhuman AGI may well say “nope, they’re gone” and provide a preservation process that works, instead of whatever they actually did.
I think we’re supposed to assume, as part of the premise, that dath ilan’s materials science and neuroscience are good enough that they got preservation right; that is, they are justly confident that the neural information is preserved at the biochemical level. Since even we are reasonably certain that that’s the necessary data, the big question is just whether it can ever be successfully mined and modelled.
I expect dath ilan has some mechanism that makes this work. Maybe it’s just that ordinary people won’t trade because the Keepers have better information than they do. In any case, I messaged Eliezer and he seems to agree with my interpretation.
If the Keepers use their secret information to bet well, then that is a way for the secrets to leak out, so flaunting their ability and keeping confidentiality are in conflict there. I wouldn’t put it past the Keepers to be able to model “what would I think if I did not possess my secret information?” to a very fine level of detail, but that judgement is not the full-blown judgement the same person could make using everything they know.
But this prediction market is exactly the one case where, if the Keepers are concerned about AGI existential risk, signalling to the market not to do this thing is much, much more important than preserving the secret. Preventing this thing is what you’re preserving the secret for; if Civilization starts advancing computing too quickly, the Keepers have already lost.
To deceive in a prediction market is to change the outcome, in this case in the opposite of the way the Keepers want. The whole point of the Keepers’ utterly trustworthy reputation is that they can make unexplained bids that strongly signal “you shouldn’t do this, and also you shouldn’t ask too hard why not” and have people believe them.
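As an aside on mechanism (my assumption, not canon: the story never specifies how the market is implemented), under a standard automated market maker such as Hanson’s logarithmic market scoring rule, one large unexplained bid mechanically drags the quoted probability, which is exactly the kind of legible signal described above. A minimal sketch:

```python
import math

# Illustrative only: LMSR is assumed here, not taken from the story.

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """LMSR cost function C(q); a trade from q to q' costs C(q') - C(q)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous price (quoted probability) of the YES outcome."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

q_yes, q_no = 0.0, 0.0                        # market opens at 50/50
print(f"before: P(yes) = {lmsr_price_yes(q_yes, q_no):.2f}")   # 0.50

keeper_buy = 300.0                            # one big NO purchase (size is illustrative)
cost = lmsr_cost(q_yes, q_no + keeper_buy) - lmsr_cost(q_yes, q_no)
q_no += keeper_buy
print(f"after:  P(yes) = {lmsr_price_yes(q_yes, q_no):.2f}, trade cost = {cost:.0f}")  # ~0.05
```

The price moves whether or not anyone else knows why, and the Keepers’ reputation is what keeps other traders from simply betting it back up.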
I think the idea is that the Keepers have their own internal prediction market.
In that case, though, the prediction market is weaker, since it aggregates only the Keepers’ information rather than everyone’s.