Would an AI that simulates a physical human brain be less prone to FOOM than a human-level AI that doesn’t bother simulating neurons?
It sounds like it might be harder for such an AI to foom, since it would have to understand the physical brain well enough before it could improve on its simulated version. If such an AI exists at all, that knowledge would probably be available somewhere, so a foom could still happen if you simulated someone smart enough to learn it (or simulated one of the people who helped build the AI in the first place). At least the AI should be boxable if it doesn't know much about neurology or programming.
Maybe the catch is that a boxed human simulation that can't self-modify isn't very useful. It'd be good as assistive technology or as a form of immortality, but you probably can't learn much about any other kind of AI by studying a simulated human. (The things you could learn from it are mostly ones you could learn just as easily by studying a physical human.)