Harder for the AI, I meant.
Not stupid. Properly boxed.
Unless you follow the obvious strategy of making a box without holes.
How would you know whether your box has holes?
Well, that depends on the complexity of the box, but even for highly complex boxes it seems easier than proving that the morality of an AI has been implemented correctly.
Actually, now that you mention it, I just realized that there is a much, much easier way to properly box an AI. I will probably post it tomorrow or something.
Judging by what you have posted so far, my prior is 10:1 that it will be nothing of the sort.
The new idea is not perfect, but it makes different trade-offs while still allowing perfect security.
Hopefully it’s a useful toy model then. I guess we’ll see.
Does it, now? How do you know?
They’re both questions about program verification. However, one of the programs is godshatter while the other is just a universe. Encoding morality is a highly complicated project dependent on huge amounts of data (in order to capture human values). Designing a universe for the AI barely even needs empiricism, and it can be thoroughly tested without a world-ending disaster.
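Here's a toy sketch of what I mean (my own illustration, with hypothetical names, assuming the universe can be modeled as a pure state-transition function): if the box is just a small, fixed program that iterates the universe's dynamics, the entire verification obligation is "this program has no channel out of its state," and nothing about human values appears anywhere.

```python
# Toy sketch: a "box without holes" as a pure state-transition function.
# The AI lives inside `memory`; the only thing to verify about the box
# is that this small, fixed program performs no I/O on the way through.
# All names here are hypothetical illustrations, not a real design.

from dataclasses import dataclass


@dataclass(frozen=True)
class WorldState:
    # Stand-in for the entire contents of the simulated universe,
    # including whatever AI is running inside it.
    memory: tuple


def transition(memory: tuple) -> tuple:
    # Placeholder dynamics; a real design would encode the simulated
    # universe's physics here. Still a pure function: no I/O, no clock,
    # no network -- nothing outside its argument.
    return memory


def step(state: WorldState) -> WorldState:
    # One tick of the universe, as a pure function of the previous state.
    return WorldState(memory=transition(state.memory))


def run_boxed(initial: WorldState, ticks: int) -> WorldState:
    # The whole "box": iterate the dynamics, then hand the final state
    # to the operator only after the run halts. Verifying the box means
    # verifying this loop, not the values of the AI inside it.
    state = initial
    for _ in range(ticks):
        state = step(state)
    return state
```

The point is that the proof obligation attaches to a few dozen lines of interpreter, not to a model of human values.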
No, I don’t think so at all. Thinking that an AI box is all about program verification is like thinking that computer security is all about software bugs.