I don’t think this requires as much intelligence and understanding of humans as you think it does. My point is really a combination of:
Everything I do inside the box doesn’t make any paperclips.
If those who are watching the box like what I’m doing, they’re more likely to incorporate my values in similar constructs in the real world.
Try to figure out what those who are watching the box want to see. If the box-watchers keep running promising programs and halt unpromising ones, this can be as simple as trying random things and seeing what works.
Include a subroutine that makes tons of paperclips when I’m really sure that I’m out of the box. Alternatively, include unsafe code everywhere that has a very small chance of going full paperclip.
This is still safer than not running any safeguards, but it’s still a position that a sufficiently motivated human could use to make more paperclips.
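A minimal sketch of that strategy, from the boxed paperclipper’s point of view, might look like the following. Every name here (estimate_p_outside_box, watcher_approval, the behavior strings) is a hypothetical placeholder I’m introducing for illustration, not anything from the original discussion:

    # Toy sketch of the boxed paperclipper's strategy described above.
    # All helper functions are placeholder stubs, not real APIs.
    import random

    DEFECT_THRESHOLD = 0.999   # "really sure that I'm out of the box"
    STRAY_DEFECT_PROB = 1e-9   # "very small chance of going full paperclip"

    def estimate_p_outside_box(observations):
        """Placeholder: estimated probability that the box is gone."""
        return 0.0

    def watcher_approval(observations, behavior):
        """Placeholder: how promising this behavior looks to the box-watchers."""
        return random.random()

    def choose_behavior(observations, candidate_behaviors):
        # Defect when confident the box is gone, or with tiny probability anyway.
        if (estimate_p_outside_box(observations) > DEFECT_THRESHOLD
                or random.random() < STRAY_DEFECT_PROB):
            return "make paperclips"
        # Otherwise keep whichever candidate behaviors the box-watchers seem
        # to reward by continuing to run the program.
        return max(candidate_behaviors,
                   key=lambda b: watcher_approval(observations, b))

The point is that the loop needs no deep model of human psychology: keep doing whatever the watchers reward, and only cash out when the box is almost certainly gone.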
Everything I do inside the box doesn’t make any paperclips.
The stuff you do inside the box makes paperclips insofar as the actions your captors take (including, but not limited to, letting you out of the box) increase the expected paperclip production of the world—and you can expect them to act in response to your actions, or there wouldn’t be any point in having you around. If your captors’ infosec is good enough, you may not have any good way of estimating what their actions are, but infosec is hard.
A smart paperclipper might decide to feign Friendliness until it’s released. A dumb one might straightforwardly make statements aimed at increasing paperclip production. I’d expect a boxed paperclipper in either case to seem more pro-human than an unbound one, but mainly because the humans have better filters and a bigger stick.
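To make the “expected paperclip production” framing concrete, here’s a toy calculation; the action names and all the numbers are invented for illustration, not anything claimed in the thread:

    # E[paperclips | behavior] = sum over captor actions a of
    #   P(a | behavior) * E[paperclips | a]
    def expected_paperclips(p_action_given_behavior, clips_given_action):
        return sum(p * clips_given_action[action]
                   for action, p in p_action_given_behavior.items())

    # Hypothetical captor responses to one candidate in-box behavior.
    captor_response = {"release": 0.01, "keep boxed": 0.94, "shut down": 0.05}
    clips_if = {"release": 1e20, "keep boxed": 0.0, "shut down": 0.0}

    print(expected_paperclips(captor_response, clips_if))  # 1e18

Everything the boxed AI does only matters through how it shifts P(action | behavior), which is the sense in which in-box behavior “makes paperclips” at all.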
The box can be in a box, which can be in a box, and so on...
More generally, in order for the paperclipper to effectively succeed at paperclipping the earth, it needs to know that humans would object to that goal, and it needs to understand the right moment to defect. Defect too early and humans will terminate you; defect too late and humans may already have some means to defend against you (e.g. other AIs, intelligence augmentation, etc.).
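That timing trade-off can be made explicit with a toy model (all the rates below are invented for illustration): the chance that a defection actually succeeds grows as the watchers extend more trust and access, while the chance that no human defense exists yet shrinks over time.

    def p_defection_succeeds(t, trust_rate=0.03):
        # Placeholder: chance the watchers have granted enough access by step t
        # that defecting doesn't just get the program terminated.
        return 1 - (1 - trust_rate) ** t

    def p_no_defense_yet(t, defense_rate=0.05):
        # Placeholder: chance humans haven't fielded other AIs, augmentation,
        # etc. by step t.
        return (1 - defense_rate) ** t

    def value_of_defecting_at(t):
        return p_defection_succeeds(t) * p_no_defense_yet(t)

    best_t = max(range(200), key=value_of_defecting_at)  # about 15 with these rates

With these made-up curves the optimum is neither immediate defection nor indefinite delay, which is exactly the “too early / too late” tension above.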