WBE is basically porting spaghetti code to a very different architecture… it may seem easy at first, but…
What comes to mind are some C64 emulators that included logic for a simulated electron beam scanning the display, because some games changed video memory while it was being read in order to… display more colors, I think? I’m not sure of the details, but C64 computers were at least designed by humans, while evolution had millions of years to mess our brains up with complicated molecular-level machinery.
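A toy sketch of why that beam simulation matters (hypothetical Python, not real C64 hardware): a “program” rewrites a color register halfway through the frame, and only an emulator that interleaves code with per-scanline drawing catches the trick.

```python
# Toy illustration: a raster trick that a frame-at-once emulator misses.
LINES = 200  # scanlines per frame (illustrative)

def program(line, state):
    """The emulated program: switch the color register on scanline 100."""
    if line == 100:
        state["color"] = "BLUE"

def render_frame_at_once(state):
    # Naive emulator: run the whole frame's code first, then draw once.
    for line in range(LINES):
        program(line, state)
    return [state["color"]] * LINES

def render_with_beam(state):
    # Beam-accurate emulator: interleave code and drawing per scanline.
    out = []
    for line in range(LINES):
        program(line, state)
        out.append(state["color"])
    return out

naive = render_frame_at_once({"color": "RED"})
accurate = render_with_beam({"color": "RED"})
print(set(naive))     # only the final color survives
print(set(accurate))  # both colors appear on screen
```

The naive renderer paints the whole frame with whatever the register holds at the end, so the mid-frame change is lost; the beam-accurate one reproduces the two-color display the original programmer intended.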
As I see it, WBE wouldn’t be a one-step achievement but rather a succession of smaller steps: building various kinds of implants, interfaces, etc., “rewriting” parts of the brain to have the same functionality, until we end up with a brain made entirely of new code, making the old, biology-based part irrelevant.
That said… I don’t think WBE would solve FAI so easily. The current concept is along the lines of “if we can build a working brain without either us or it knowing how it works, that’s safe”. That is indeed true, but only if we can treat it as a black box all along. Unfortunately, we can’t avoid learning things about how minds work in the process, so by the time we get the first functional WBE instance, maybe every grad student will be able to hack together a working synthetic AGI just by reading a few papers...
Following up on this, I wondered what it’d take to emulate a relatively simple processor with as many ordinary transistors as your brain has neurons, and when we should get there assuming Moore’s Law holds, and assuming that the number of transistors needed to emulate something is a simple linear function of the number of transistors in the thing being emulated. This should give a relatively conservative lower bound, but it’s obviously still just a napkin calculation. The result is about 48 years, and the math is:
Where all the numbers are taken from Wikipedia, and the 2 in the second equation is Moore’s law’s years-per-doubling constant.
I’m not sure what to make of this number, but it’s an interesting anchor for other estimates. That said, this whole class of problem is probably much easier on an FPGA or similar, which would give completely different estimates.
I wonder if people would sign up to be simulated with 95% accuracy. That would raise some questions about consciousness and identity. I guess you can’t really emulate anything with 100% accuracy anyway. How accurate the simulation would have to be before people consider it safe enough to be uploaded sounds like an interesting topic for debate.
Upvoted for the nice example at the beginning (although it would be even better if I didn’t have to look up C64; for anyone else reading this, C64 stands for Commodore 64, which was an old 8-bit home computer).
Emulation… I know I had a good link on that in Simulation inferences… Ah, here we go! This was a pretty neat Ars Technica article: “Accuracy takes power: one man’s 3GHz quest to build a perfect SNES emulator”
Whether you regard the examples and trade-offs as optimistic or pessimistic lessons for WBE reveals your own take on the matter.