[Question] Is there a convenient way to make “sealed” predictions?
Many people in our community claim to have ideas, for how to build AGI or other things, that they deem infohazardous and so don't want to publish. It would be great if they could publicly register these ideas in encrypted form, so that later, when their predictions come true, they can reveal the key and everyone can see that they called it and give them epistemic credit accordingly.
I know this is possible in principle, e.g. by encrypting the prediction with PGP, posting the ciphertext on your LW shortform, and later revealing the key.
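For instance, here is a minimal sketch of a commit-reveal scheme using only Python's standard library. A salted hash commitment sidesteps key storage entirely: there is no key, just an opening string you keep private, and the random salt prevents anyone from brute-forcing short or guessable predictions from the public hash. The function names are illustrative, not an existing LW feature:

```python
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    """Return (commitment, opening). Publish the commitment now;
    keep the opening private until reveal time."""
    salt = secrets.token_hex(32)  # blocks brute-forcing guessable predictions
    opening = f"{salt}:{prediction}"
    commitment = hashlib.sha256(opening.encode("utf-8")).hexdigest()
    return commitment, opening

def verify(commitment: str, opening: str) -> bool:
    """Anyone can later check that the revealed opening matches."""
    return hashlib.sha256(opening.encode("utf-8")).hexdigest() == commitment

# Post `c` publicly today; reveal `o` when the prediction comes true.
c, o = commit("By 2030, system X will be able to do Y.")
assert verify(c, o)
```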
But it would be nice if this were a convenient, hassle-free feature built into LW, for example.
Also: Is this a bad idea for some reason? Is the privacy not as secure as I think, such that people would be hesitant to make even these encrypted predictions? (I guess there is the matter of how to securely store the key...) Is there a way to make a prediction that will automatically be decrypted after N years?
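On that last question: as far as I know, there is no way to make a ciphertext decrypt itself on a schedule without trusting someone (a third party, or a threshold of parties) to release a key at the right time. The closest purely cryptographic tool is a Rivest-Shamir-Wagner time-lock puzzle, where opening the prediction requires a long, inherently sequential computation instead of a clock. Below is a toy sketch; all concrete numbers are deliberately tiny illustrative assumptions, and a real deployment would need a large (e.g. 2048-bit) RSA modulus and a squaring count calibrated to actual hardware speeds:

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle. Parameters are deliberately
# tiny and illustrative; real use needs a large RSA modulus and a squaring
# count t calibrated to real hardware.

p, q = 10007, 10009          # creator's secret primes (toy-sized)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # trapdoor: only the creator knows phi(n)
t = 10**6                    # sequential squarings required of the solver

def create_puzzle(secret: int) -> int:
    """Creator computes 2^(2^t) mod n instantly via the trapdoor
    (gcd(2, n) = 1, so the exponent reduces mod phi(n)), then uses
    the result to mask the secret."""
    mask = pow(2, pow(2, t, phi), n)
    return (secret + mask) % n

def solve_puzzle(puzzle: int) -> int:
    """Without phi(n), recovering the mask takes t sequential squarings;
    the work cannot be parallelized, which ties opening the prediction
    to elapsed compute time rather than to anyone's honesty."""
    x = 2
    for _ in range(t):
        x = x * x % n
    return (puzzle - x) % n

secret = 42                  # in practice: a symmetric key encrypting the prediction
puzzle = create_puzzle(secret)
assert solve_puzzle(puzzle) == secret
```

The catch is calibration: "t squarings" maps to wall-clock time only as well as you can predict future hardware, so "exactly N years" is more of an estimate than a guarantee.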