No reply. Just so you know, the collective position on testing here is bizarre.
How you can think that superintelligent agents are often dangerous AND that a good way of dealing with this is to release an untested one into the world is really quite puzzling.
Hardly anyone ever addresses the issue. When they do, it is by pointing to AI box experiments, which purport to show that a superintelligence can defeat a lesser intelligence, even if well strapped down.
That seems irrelevant to me. To build a jail for the smartest agent in the world, you do not use vastly less powerful agents as guards; you use slightly less powerful ones. If necessary, you can dope the restrained agent up a bit. There are in fact all manner of approaches to this problem; I recommend thinking about them some more before discarding the whole idea of testing superintelligences.
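For what it's worth, here is a toy sketch of that "graded guard" arrangement in Python. Everything in it is hypothetical, the Agent class, the abstract capability scores, and the safety margin are all made up for illustration, but it shows the arithmetic of the argument: a guard that is only slightly weaker than the subject at full strength can still comfortably dominate a throttled ("doped") subject during testing.

    from dataclasses import dataclass


    @dataclass
    class Agent:
        name: str
        capability: float  # abstract capability score; higher is stronger

        def throttle(self, factor: float) -> "Agent":
            """Return a 'doped' copy running at reduced capability."""
            return Agent(self.name, self.capability * factor)


    def safe_to_test(subject: Agent, guard: Agent, margin: float = 1.2) -> bool:
        """Only test if the guard outmatches the *throttled* subject by a
        margin, even though the guard may be weaker than the subject at
        full strength."""
        return guard.capability >= subject.capability * margin


    # A guard only slightly weaker than the subject at full power...
    subject = Agent("candidate-SI", capability=100.0)
    guard = Agent("guard", capability=90.0)

    # ...is clearly stronger once the subject is doped for testing.
    doped = subject.throttle(0.5)
    assert safe_to_test(doped, guard)  # 90 >= 50 * 1.2, so the test can proceed

The numbers are arbitrary; the point is only that the relevant comparison is guard versus throttled subject, not guard versus subject at full strength.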
Other approaches seem likely to get there first.
...and what have you got against testing?
No one's against testing; that precaution should be taken, but it's not the most pressing concern at this stage.
See “All of the above working first time, without testing the entire superintelligence”, upthread. This is not the first time the issue has been addressed.