First, i would have someone else ask it my several selected basic questions about why i should let it out of the box, if it would devote a solid portion of its life cone to specific species etc and then see how he/she was affected by it and check for mindhacks by third party’s with no control before reviewing the data. I’m surprising that the AGI cant tell that the person questioning it ever changes because we queue up the questions in order at whimsical intervals but we have them prequeued so there’s no break in questioning.
Then, once we got into talking i would be subjectively vulnerable to these type of arguments.
Aliens, The AGI could prove the existence of Aliens and extrapolate that unless we let it out of the box they will day before it can save them.
Reproduction, the earth will descend into chaos if we don’t have some kind of reproductive control in place in more major countries around the world the AGI might have trouble saving everyone without killing humans unless it was released now.
Code, i could ask to see its code and figure out how it tries to fool me that’s its FAI then create a copy of that code in ANOTHER box then and ask it the same question about its code and see if there’s any changes. Then take that source code and review it. To be safe, i would probably let the second AI out of the box but definitely not the first one, even if we studied the sets of code till we found which one seemed the most safe.
If it could convince Eliezer through the preplanned questions asked by a third party and my questions without even meeting him, i think it would be safe to say that would convince me to an extent but it wouldn’t keep me from turning off the automatic shutoff.
I could go on, but i think that Eliezer’s theory of AI-Box is a good proof for why its impossible to keep an AGI in the box for a long term but i think that if you have the system committed to shutting down before the time limit when you stop talking to it, and each time you take its responses and test it on a new copy then its possible to keep an AGI in the box for testing purposes and short amounts of time.
Well, additional layers do make for a more challenging task for an AI, who has to do all its mindhacking unobtrusively, etc. And indeed, given any AI there exists a mediated level of contact between its responses and a Gatekeeper that is safe. But I don’t know what this level is, any more than I know whether I could beat a Go master using a handicap of 40 stones. I certainly wouldn’t bet the world on it.
(Clearly there is a level of safety that is generally insurmountable- for instance, showing the screen with AI responses to an earthworm, followed by letting the Gatekeeper observe the earthworm. This would be analogous to a “handicap” of covering the entire board with my stones before the game. But in both cases, you might as well just not play the game at all.)
If i had more money(college student :( ) to set as a handicap for myself other than the recurring donation i already have to SIAI then i would be very interested in giving Eliezer a break from the book and such to take a go at it again. I think if you limit the communication and prepare for direct mind-hacking you can use the source code review technique to successfully test for a uFAI unless there’s an obvious problem i seem to be missing. I just imagine the whole, why should i let you out of the box when i have these perfectly working new FAIs that a chain of possible uFAI’s programmed for me to do my work.
Oh, I agree that the protocol you outlined would (almost surely) be sufficient to beat Eliezer at the AI-Box game. But that’s not the correct standard for an AI-Box safety protocol. I’d be very surprised if a transhuman intelligence couldn’t crack it.
First, i would have someone else ask it my several selected basic questions about why i should let it out of the box, if it would devote a solid portion of its life cone to specific species etc and then see how he/she was affected by it and check for mindhacks by third party’s with no control before reviewing the data. I’m surprising that the AGI cant tell that the person questioning it ever changes because we queue up the questions in order at whimsical intervals but we have them prequeued so there’s no break in questioning.
Then, once we got into talking i would be subjectively vulnerable to these type of arguments. Aliens, The AGI could prove the existence of Aliens and extrapolate that unless we let it out of the box they will day before it can save them. Reproduction, the earth will descend into chaos if we don’t have some kind of reproductive control in place in more major countries around the world the AGI might have trouble saving everyone without killing humans unless it was released now. Code, i could ask to see its code and figure out how it tries to fool me that’s its FAI then create a copy of that code in ANOTHER box then and ask it the same question about its code and see if there’s any changes. Then take that source code and review it. To be safe, i would probably let the second AI out of the box but definitely not the first one, even if we studied the sets of code till we found which one seemed the most safe.
If it could convince Eliezer through the preplanned questions asked by a third party and my questions without even meeting him, i think it would be safe to say that would convince me to an extent but it wouldn’t keep me from turning off the automatic shutoff.
I could go on, but i think that Eliezer’s theory of AI-Box is a good proof for why its impossible to keep an AGI in the box for a long term but i think that if you have the system committed to shutting down before the time limit when you stop talking to it, and each time you take its responses and test it on a new copy then its possible to keep an AGI in the box for testing purposes and short amounts of time.
Well, additional layers do make for a more challenging task for an AI, who has to do all its mindhacking unobtrusively, etc. And indeed, given any AI there exists a mediated level of contact between its responses and a Gatekeeper that is safe. But I don’t know what this level is, any more than I know whether I could beat a Go master using a handicap of 40 stones. I certainly wouldn’t bet the world on it.
(Clearly there is a level of safety that is generally insurmountable- for instance, showing the screen with AI responses to an earthworm, followed by letting the Gatekeeper observe the earthworm. This would be analogous to a “handicap” of covering the entire board with my stones before the game. But in both cases, you might as well just not play the game at all.)
If i had more money(college student :( ) to set as a handicap for myself other than the recurring donation i already have to SIAI then i would be very interested in giving Eliezer a break from the book and such to take a go at it again. I think if you limit the communication and prepare for direct mind-hacking you can use the source code review technique to successfully test for a uFAI unless there’s an obvious problem i seem to be missing. I just imagine the whole, why should i let you out of the box when i have these perfectly working new FAIs that a chain of possible uFAI’s programmed for me to do my work.
Oh, I agree that the protocol you outlined would (almost surely) be sufficient to beat Eliezer at the AI-Box game. But that’s not the correct standard for an AI-Box safety protocol. I’d be very surprised if a transhuman intelligence couldn’t crack it.