2.5 sounds like we’re just way off and shouldn’t expect to get WBE until we have superhuman neurology AI. But given that, the remaining difficulties seem like just another constant factor. The question becomes: do you expect “If you change random bits and try to run it, it mostly just breaks” to hold up?
For Section 2.5 (C. elegans), @davidad is simultaneously hopeful about a human upload moonshot (cf. the post from yesterday) and intimately familiar with C. elegans uploading work (having been personally involved). And he’s a pretty reasonable guy IMO. So the inference “C. elegans stuff, therefore human uploads are way off” is evidently less of a slam dunk than you seem to think it is. (As I mentioned in the post, I don’t know the details, and I hope I didn’t give a misleading impression there.)
I’m confused by your last sentence; how does that connect to the rest of your comment? (What I personally actually expect is that, if there are uploads at all, it would be via the reverse-engineering route, where we would not have to “change random bits”.)
My second sentence meant: “If neurology AI can do WBE, a slightly (on a grand scale) more superhuman AI could do it without reverse engineering.” But actually, we could just have the AI reverse-engineer the brain, then obfuscate the upload, then delete the AI.
Suppose the company gets bought and they try to improve the upload’s performance without understanding it. My third sentence meant: would they find that the Algernon argument applies to uploads?

Oh, okay.
The question becomes, do you expect “If you change random bits and try to run it, it mostly just breaks” to hold up?
My suspicion is that the answer is likely no, and this is actually a partial crux on why I’m less doomy than others on AI risk, especially from misalignment.
My general expectation is that most of the difficulty is hardware + ethics. In particular, the hardware for running a human brain just does not exist right now, primarily because of the memory / von Neumann bottleneck on GPUs; at the current state of affairs, it would require deleting a lot of memory from a human brain.
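To make the memory-bottleneck claim concrete, here is a rough back-of-envelope sketch in Python. The synapse count, bytes of state per synapse, and per-GPU memory are all assumed figures on my part (roughly 10^14 synapses, a few bytes each, ~80 GB of HBM on a current high-end GPU), not numbers taken from the comment above, so treat the result as order-of-magnitude only.

```python
# Rough back-of-envelope: can the synaptic state of one human brain fit in GPU memory?
# All constants below are assumptions for illustration, not measured requirements.

SYNAPSES = 1e14            # assumed synapse count (commonly quoted as ~1e14 to 1e15)
BYTES_PER_SYNAPSE = 4      # assumed bytes of state per synapse; the real figure is unknown
GPU_MEMORY_BYTES = 80e9    # assumed ~80 GB of HBM on a current high-end GPU

total_state_bytes = SYNAPSES * BYTES_PER_SYNAPSE
gpus_to_hold_it = total_state_bytes / GPU_MEMORY_BYTES

print(f"Synaptic state: ~{total_state_bytes / 1e12:.0f} TB")
print(f"GPUs needed just to hold that state: ~{gpus_to_hold_it:.0f}")
# With these assumptions: ~400 TB of state, i.e. ~5,000 GPUs just for capacity,
# before considering bandwidth. That is the sense in which running an upload on
# today's hardware would mean either a huge cluster or discarding most of the state.
```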
I disagree about the hardware difficulty of uploading-with-reverse-engineering—the short version of one aspect of my perspective is here, the longer version with some flaws is here, the fixed version of the latter exists as a half-complete draft that maybe I’ll finish sooner or later. :)