I think we might have different definitions of a boxed AI. An AI that is literally not allowed to interact with the world at all isn’t terribly useful, and building one sounds like a problem at least as hard as every other kind of FAI.
I just mean a normal dangerous AI that physically can’t interact with the outside world. Importantly, its goal is to provably give the best output it can when you give it a problem. So it won’t hide nanotech in your cure for Alzheimer’s, because that would be a less fit and more complicated solution than a simple chemical compound. (You would have to judge solutions partly on their complexity, though, and have a human verify them, or test them in a simulation first, just in case.)
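Here's a minimal sketch of what that complexity-penalized scoring could look like. Everything in it is hypothetical: the function names, the penalty weight, and using encoded length as a crude stand-in for "complexity" are all my inventions, not a real verifier.

```python
# Hypothetical sketch: rank candidate solutions by fitness minus a
# complexity penalty, so a simple compound beats a slightly "fitter"
# solution that smuggles in extra machinery. All names/weights made up.

def description_length(solution: str) -> int:
    """Crude complexity proxy: length of the solution's encoding."""
    return len(solution.encode("utf-8"))

def score(fitness: float, solution: str, lam: float = 0.1) -> float:
    """Higher is better: reward fitness, charge for complexity."""
    return fitness - lam * description_length(solution)

candidates = [
    ("simple compound", 0.95),                              # plain answer
    ("compound plus hidden nanotech blueprint...", 0.96),   # fitter, but far more complex
]
best = max(candidates, key=lambda c: score(c[1], c[0]))
print(best[0])  # -> "simple compound"
```

The point of the penalty is that hiding anything extra in the output strictly costs the AI score, so deception is dominated by honesty under this (assumed) objective.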
I don’t think most computers today have anywhere near enough processing power to simulate a full human brain. A human down to the molecular level is entirely out of the question. An AI on a modern computer, if it’s smarter than a human at all, will get there through faster serial processing or more efficient algorithms, not through massive raw computational power.
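For a sense of scale, here's a back-of-envelope calculation using commonly cited (and very rough) order-of-magnitude figures; all the constants are my assumptions, not anyone's authoritative estimate:

```python
# Back-of-envelope: functional brain emulation vs. molecular simulation.
# All figures are rough, commonly cited orders of magnitude.

neurons = 1e11    # ~10^11 neurons in a human brain
synapses = 1e4    # ~10^4 synapses per neuron
rate_hz = 1e2     # ~100 Hz peak firing rate
functional_ops = neurons * synapses * rate_hz
print(f"functional emulation: ~{functional_ops:.0e} ops/sec")  # ~1e17

atoms = 1e26            # order-of-magnitude atom count of a brain
steps_per_sec = 1e15    # femtosecond timesteps per simulated second
flops_per_step = 1e2    # rough cost of one MD update per atom
molecular_ops = atoms * steps_per_sec * flops_per_step
print(f"molecular level: ~{molecular_ops:.0e} ops per simulated second")  # ~1e43
```

Even granting big error bars, the molecular-level figure is tens of orders of magnitude beyond any hardware a boxed AI would plausibly be running on, which is the point.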
And you can always scale down the hardware, or charge the AI utility for using more computing power than it needs, forcing it to be efficient or limiting its intelligence further. You don’t need to invoke the full power of a superintelligence for every problem, and for your safety you probably shouldn’t.
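One hedged sketch of "charging utility for compute": subtract a price per unit of computation from the raw task utility, so the agent only spends extra cycles when the improvement pays for them. Again, every name and constant here is assumed for illustration:

```python
# Hypothetical sketch: net utility = task utility minus a price on compute.
# The agent is only rewarded for extra thinking that pays for itself.

def net_utility(task_utility: float, compute_units: float,
                price_per_unit: float = 0.01) -> float:
    return task_utility - price_per_unit * compute_units

# Spending 10x the compute for a 1% better answer is a net loss:
print(net_utility(0.90, compute_units=100))   # 0.90 - 1.0  = -0.10
print(net_utility(0.91, compute_units=1000))  # 0.91 - 10.0 = -9.09
```

Tuning the price knob is doing a lot of work here: set it high and you cap the AI's effective intelligence; set it low and you're back to trusting the box.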