They’re both questions about program verification. However, one of the programs is godshatter while the other is just a universe. Encoding morality is a highly complicated project that depends on huge amounts of data in order to capture human values. Designing a universe for the AI barely even requires empiricism, and it can be thoroughly tested without risking a world-ending disaster.
They’re both questions about program verification.
No, I don’t think so at all. Thinking that an AI box is all about program verification is like thinking that computer security is all about software bugs.