Corruption-by-power (and related issues) seem like problems worth thinking about here. Though they also strike me as problems that humans tend to be very vigilant about / concerned with by default, and problems that become a lot less serious if you’ve got a lot of emulated copies of different individuals, rather than just copies of a single individual.
> that’s probably impossible to avoid without trading off generality/capability
You need to trade off some generality/capability anyway for the sake of alignment. One hope (though not the only one) might be that there’s overlap between the capabilities we want to remove for the sake of alignment, and the ones we want to remove for the sake of reducing-the-risk-that-the-AGI-is-conscious.
E.g., if you want your AGI to build nanotech for you and do nothing else, then you might want to limit its ability to think about itself, or its operators, or the larger world, or indeed anything other than different small-scale physical structures. Limiting its generality and self-awareness in this way might also be helpful for reducing the risk that it’s conscious.
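To make the kind of restriction I have in mind slightly more concrete, here’s a toy sketch (my own illustration, not a concrete proposal; all the names are hypothetical, and it says nothing about how the inner model is trained). The idea is just that the system’s only input and output channel is a structure description, so there’s no channel carrying information about the system itself, its operators, or the wider world:

```python
# Toy sketch only: "limiting what the system can think about" as an
# interface restriction. All names (StructureSpec, DomainRestrictedDesigner,
# propose_structure) are hypothetical.

from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass(frozen=True)
class StructureSpec:
    """A description of a small-scale physical structure, and nothing else."""
    atoms: Tuple[str, ...]              # e.g. ("C", "C", "O")
    bonds: Tuple[Tuple[int, int], ...]  # e.g. ((0, 1), (1, 2))


class DomainRestrictedDesigner:
    """Wraps a structure-proposal model behind a narrow, typed interface.

    Nothing about the operators, the system itself, or the wider world is
    ever passed in or out -- only StructureSpec objects -- so whole classes
    of reasoning have no input channel to latch onto.
    """

    def __init__(self, propose_structure: Callable[[StructureSpec], StructureSpec]):
        self._propose_structure = propose_structure  # hypothetical inner model

    def design(self, target: StructureSpec) -> StructureSpec:
        if not isinstance(target, StructureSpec):
            raise TypeError("only structure specifications are accepted")
        candidate = self._propose_structure(target)
        if not isinstance(candidate, StructureSpec):
            raise TypeError("only structure specifications may be returned")
        return candidate
```

The hard part, of course, is whether an inner model can be capable enough to do the job while confined to a channel like this, which is exactly the generality/capability trade-off above.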
> EY seems to have just decided earlier on that since conscious AGI is problematic, it shan’t be so.
Where has EY said that he’s confident the first AGI systems won’t be conscious?
> E.g., if you want your AGI to build nanotech for you and do nothing else, then you might want to limit its ability to think about itself, or its operators, or the larger world, or indeed anything other than different small-scale physical structures. Limiting its generality and self-awareness in this way might also be helpful for reducing the risk that it’s conscious.
I don’t quite get this example.
How could such a system build nanotech efficiently without having those properties? Wouldn’t it need a human operator the moment it encountered unexpected phenomena?
If so, it just seems like a really fancy hammer and not an ‘AGI’.