If I had more money (college student :( ) to set as a handicap for myself, beyond the recurring donation I already make to SIAI, I would be very interested in giving Eliezer a break from the book and such to take a go at it again. I think that if you limit the communication and prepare for direct mind-hacking, you can use the source-code review technique to successfully test for a uFAI, unless there's an obvious problem I'm missing. I just imagine the whole "why should I let you out of the box when I have these perfectly working new FAIs, which a chain of possible uFAIs programmed for me, to do my work?" response.
Oh, I agree that the protocol you outlined would (almost surely) be sufficient to beat Eliezer at the AI-Box game. But that’s not the correct standard for an AI-Box safety protocol. I’d be very surprised if a transhuman intelligence couldn’t crack it.