This is potentially quite important.
MIRI, OpenAI, FHI, and similar organizations focus largely on de novo artificial paths to superintelligence, since those paths lead directly to the value loading problem. While that is likely the biggest concern in terms of expected utility, neuron-level simulations of human minds may provide another route. In fact, the bulk of the probability of superintelligence may reside there, even if the bulk of the expected utility lies in preventing things like paperclip maximizers.
Robin Hanson has made some persuasive arguments that uploading may actually occur years before artificial intelligence becomes possible (see The Age of Em). If that is the case, it could be highly valuable for the first uploads to be deeply familiar with the risks of the alignment problem. That familiarity could help head off two paths to misaligned AI:
1. Uploads running at faster subjective speeds could greatly accelerate the advent of true AI by developing it themselves. Imagine a thousand copies of the smartest AI researcher, each running at 1000x human speed, collaborating with one another on the first AI (a rough calculation follows this list).
2. The uploads themselves are likely to be significantly modifiable. Since an upload could always be reset from backup, experimenting on a mind becomes far easier. Even if we start out knowing only how neurons are connected, and little about how they function, we may quickly develop the ability to massively modify our own minds. If we alter our own utility functions, whether intentionally or unintentionally, we run into the same kinds of concerns raised by AI alignment and value drift.
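To make the scale of the first path concrete, here is a back-of-the-envelope sketch. The copy count, speedup, and time horizon are illustrative assumptions for the thought experiment above, not figures from Hanson or anyone else:

```python
# Back-of-the-envelope: subjective research effort from fast uploads.
# All parameters below are illustrative assumptions, not predictions.

copies = 1_000          # emulated copies of one top AI researcher
speedup = 1_000         # subjective speed relative to a biological human
calendar_years = 1      # wall-clock time the copies spend working

subjective_researcher_years = copies * speedup * calendar_years
print(f"{subjective_researcher_years:,} subjective researcher-years "
      f"per calendar year")
# -> 1,000,000 subjective researcher-years per calendar year
```

The point is simply that copying and serial speedup multiply: on these assumptions, a million researcher-years of effort could be compressed into a single calendar year.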
The obvious solution is to hand out Bostrom's Superintelligence like candy to cryonicists, and perhaps even to get Alcor to try to revive FAI researchers first. However, given a first-in-last-out revival policy, this may matter less for us than for future generations. We obviously have a lot of time to sort this out, so it is likely a low priority for this decade or century.