I totally agree with that notion; however, I believe the current levers of progress massively incentivize and motivate AGI development over WBE. Currently, regulations are based on FLOPs, which will restrict progress towards WBE long before it restricts anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE was possible and maximized its apparent value to those with the means both to develop it and to push the levers, steering away from any risky AGI analogue before it becomes possible, then yes; but that seems very unlikely to me.
I also worry: humans are not aligned. Humans having WBE at our fingertips could mean infinite tortured simulations of the digital brains before they bear any more bountiful fruit for humans on Earth. It seems ominous: a fully replicated human consciousness, so exact that a bit off here or there could destroy it.
I’m not sure what you’re talking about. Maybe you meant to say: “there are ideas for possible future AI regulations that have been under discussion recently, and these ideas involve FLOP-based thresholds”? If so, yeah, that’s kinda true, albeit oversimplified.
which will restrict progress towards WBE long before it restricts anything with AGI-like capabilities
I think that’s very true in the “WBE without reverse engineering” route, but it’s at least not obvious in the “WBE with reverse engineering” route that I think we should be mainly talking about (as argued in OP). For the latter, we would have legible learning algorithms that we understand, and we would re-implement them in the most compute-efficient way we can on our GPUs/CPUs. And it’s at least plausible that the result would be close to the best learning algorithm there is. More discussion in Section 2.1 of this post. Certainly there would be room to squeeze some more intelligence into the same FLOP/s—e.g. tweaking motivations, saving compute by dropping the sense of smell, various other architectural tweaks, etc. But it’s at least plausible IMO that this adds up to <1 OOM. (Of course, non-WBE AGIs could still be radically superhuman, but it would be by using radically superhuman FLOP (e.g. model size, training time, speed, etc.))

Hmm. I should mention that I don’t expect that LLMs will scale to AGI. That might be a difference between our perspectives. Anyway, you’re welcome to believe that “WBE before non-WBE-AGI” is hopeless even if we put moonshot-level effort into accelerating WBE. That’s not a crazy thing to believe. I wouldn’t go as far as “hopeless”, but I’m pretty pessimistic too. That’s why, when I go around advocating for work on human connectomics to help with AGI x-risk, I prefer to emphasize a non-WBE-related path to AI x-risk reduction that seems (to me) likelier to actualize.
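(To make that “<1 OOM” compute-efficiency point concrete, here’s a minimal back-of-envelope sketch. Every number in it is a placeholder assumption for illustration, not a figure from this discussion or the OP; published estimates of brain-equivalent compute span several orders of magnitude.)

```python
# Purely illustrative arithmetic; all numbers below are placeholder assumptions.
BRAIN_EQUIV_FLOPS = 1e15     # assumed runtime compute for a reverse-engineered WBE (estimates vary ~1e13-1e17 FLOP/s)
MAX_EFFICIENCY_GAIN = 10     # "<1 OOM" of architectural tweaks (dropping smell, motivation tweaks, etc.)

# Cheapest human-level non-WBE AGI permitted under these assumptions:
min_agi_flops = BRAIN_EQUIV_FLOPS / MAX_EFFICIENCY_GAIN

# "Radically superhuman" then has to come from radically more FLOP/s (speed, size, copies),
# not from a dramatically better algorithm:
for speedup in (1, 10, 100, 1000):
    print(f"{speedup:>4}x human-speed equivalent -> ~{min_agi_flops * speedup:.0e} FLOP/s")
```

The only point of the sketch is that, if the efficiency gap really is under an order of magnitude, outperforming a compute-optimized em is mostly a matter of spending more FLOP rather than finding a much better algorithm.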
Humans having WBE at our fingertips could mean infinite tortured simulations of the digital brains
I grant that a sadistic human could do that, and that’s bad, although it’s pretty low on my list of “likely causes of s-risk”. (Presumably Ems, like humans, would be more economically productive when they’re feeling pretty good, in a flow state, etc., and presumably most Ems would be doing economically productive things most of the time for various reasons.)
Anyway, you can say: “To avoid that type of problem, let’s never ever create sentient digital minds”, but that doesn’t strike me as a realistic thing to aim for. In particular, in my (controversial) opinion, that basically amounts to “let’s never ever create AGI” (the way I define “AGI”, e.g. AI that can do groundbreaking new scientific research, invent new gadgets, etc.). If “never ever create AGI” is your aim, then I don’t want to discourage you. Hmm, or maybe I do; I haven’t really thought about it, because in my opinion you’d be so unlikely to succeed that it’s a moot point. Forever is a long time.