These two lines seem contradictory to me. It is not clear to me whether I should upload you or preserve your brain.
I don’t understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven’t solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.
I understand that all choices contain risk. However, I believe that the “information” theory of identity is a more useful guide than theories of identity that tie selfhood to a physical brain. I also suspect that there will be certain advantages, and certain disadvantages, to being one of the first minds turned into software. In order to try to gain those advantages and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:
The intended meaning, which it seems I will need to rephrase to clarify: “If you are experimenting with uploading, and can meet these minimal common-sense standards, then I’m willing to volunteer ahead of time to be your guinea pig. If you can’t meet them, then I’d rather stay frozen a little longer. Just FYI.”
This is potentially quite important.
MIRI, OpenAI, FHI, etc. are focusing largely on artificial paths to superintelligence, since that is the path that raises the value-loading problem. While this is likely the biggest concern in terms of expected utility, neuron-level simulations of minds may provide another route. This might actually be where the bulk of the probability of superintelligence resides, even if the bulk of the expected utility lies in preventing things like paperclip maximizers.
Robin Hanson has some persuasive arguments that uploading may actually arrive years before artificial intelligence becomes possible. (See The Age of Em.) If this is the case, then it may be highly valuable to have the first uploads be very familiar with the risks of the alignment problem. This could prevent two paths to misaligned AI:
Uploads running at faster subjective speeds could greatly accelerate the advent of true AI by developing it themselves. Imagine a thousand copies of the smartest AI researcher, each running at 1000x human speed, collaborating on the first AI (a rough back-of-envelope sketch of the numbers follows after this list).
The uploads themselves are likely to be significantly modifiable. Since an upload could always be reset to a backup, it becomes much easier to experiment on someone’s mind. Even if we start out knowing only how neurons are connected, but not much about how they function, we may quickly develop the ability to massively modify our own minds. If we mess with our utility functions, whether intentionally or unintentionally, this raises concerns much like AI alignment and value drift.
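To make the first path concrete, here is a back-of-envelope sketch in Python. It is purely illustrative: the copy count, speedup factor, and timescale are assumptions taken from the thought experiment above, not predictions.

```python
# Illustrative arithmetic only: how much subjective research time a small
# "upload lab" could accumulate per calendar year. All numbers are assumptions.

copies = 1_000           # parallel copies of one uploaded researcher
speedup = 1_000          # subjective speed relative to biological real time
calendar_years = 1       # wall-clock time elapsed

subjective_researcher_years = copies * speedup * calendar_years
print(f"{subjective_researcher_years:,} subjective researcher-years per calendar year")
# -> 1,000,000 subjective researcher-years per calendar year
```

Even with far more conservative numbers, the subjective research time compounds quickly, which is why this path could meaningfully accelerate the advent of true AI.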
The obvious solution is to hand Bostrom’s Superintelligence out like candy to cryonicists. Maybe even get Alcor to try to revive FAI researchers first. However, given a first-in, last-out revival policy, this may not be as important for us as for future generations. We obviously have a lot of time to sort this out, so it is likely a low priority this decade/century.