Sure, for a while, until it gets smart enough, say, smarter than whatever keeps it inside the box.
Then how do I know I’m not boxed?
Who says you aren’t? Who says we all aren’t? All those quantum limits, and the exponentially growing difficulty of getting farther from Earth, might be the walls of the box in someone’s Truman Show.
An AI that isn’t smart enough to notice (or care) that it’s boxed doesn’t seem to be a dangerous AI.
Which makes me think that the AIs that would object to being boxed are precisely the ones that should be. But then that policy would just push a smart AI to pretend to be OK with it.
This reminds me of the Catch-22 case of soldiers who pretended to be insane by volunteering for suicide missions so that their superiors would remove them from said missions.