But surely some human uploads would be a good solution for safety, right? As a lower bound, if we had high-quality uploads of the alignment team, they could just do whatever they were going to do in the real world, but in the emulation.
Coming back to this, I'm realizing I didn't answer: no, I don't think merely uploading the alignment team would help that much. The problem is that universalizing coprotection between arbitrary blocks of matter, in a way that doesn't have adversarial examples, is incredibly hard, and being on a digital computer doesn't make you faster at figuring it out. You could try to self-modify, but if you don't have some solution to verifiable inter-matter safety, you need to stay worried that you might be about to diverge. And I would expect almost any approach to uploads to introduce issues that aren't detectable without a lot of work. If we're being serious about uploads as a proposal in the next two years, it would mean suddenly doing a lot of very advanced neuroscience to accurately model physical neurons. That's not obviously off the table to me, but it doesn't seem like an approach worth pushing.
My argument is that faithful, exact brain uploads are guaranteed not to help unless you had already solved AI safety anyhow. I do think we can simply solve AI extinction risk anyhow, but it requires us not only to prevent AI that doesn't follow orders, but also to prevent AI from "just following orders" to do things that some humans value but which abuse others. If we fall too far into the latter attractor—which we're at immediate risk of doing, well before stably self-reflective AGI ever happens—we become guaranteed to go extinct shortly thereafter, as corporations increasingly become just an AI and a human driver. Eventually the strongest corporations are abusing larger and larger portions of humanity with one human at the helm. Then one day AI can drive the entire economy...
It's pretty much just the slower version of Yudkowsky's concerns. I think he's wrong that self-distillation will be a quick snap-down onto the manifold of high-quality hypotheses, but other than that I think he's on point. And because of that, I think the incremental behavior of the market is likely to pull us into a defection-only-game-theory hole as society's capabilities melt in the face of increased heat and chaos at various scales of the world.
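The "defection-only-game-theory hole" can be made concrete with the standard one-shot prisoner's dilemma. This is my own toy sketch, not anything from the exchange above; the payoff numbers are the conventional illustrative ones. The point it shows: when each actor optimizes its own payoff incrementally, defection is a dominant strategy, so the system slides into mutual defection even though mutual cooperation pays everyone more.

```python
# Toy prisoner's dilemma (illustrative payoffs; C = cooperate, D = defect).
# Payoffs are (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # exploited cooperator
    ("D", "C"): (5, 0),  # successful defector
    ("D", "D"): (1, 1),  # mutual defection -- the "hole"
}

def best_response(opponent_move: str) -> str:
    """Row player's payoff-maximizing reply to a fixed opponent move."""
    return max(("C", "D"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the best response to EITHER opponent move (dominant strategy),
# so myopic incremental optimization by both players lands on (D, D),
# which pays (1, 1) -- strictly worse for both than (C, C)'s (3, 3).
assert best_response("C") == "D"
assert best_response("D") == "D"
print(best_response("C"), best_response("D"))  # D D
```

The analogy to the argument: without enforceable coprotection, each corporation's locally best move is the defecting one, and no single actor can unilaterally escape the (D, D) equilibrium.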