That just sounds crazy to me :-( Are these people actual programmers? How did they miss out on having the importance of unit tests drilled into them?
The problem is that running the AI might cause it to FOOM, and that could happen even in a test environment.
How do you get from that observation to the idea that running a complete untested program in the wild is going to be safer than not testing it at all?
No, the proposed solution is to first formally validate the program against some FAI theory before doing any test runs.
This idea is proposed, I presume, by people with little sense of the value of testing and little knowledge of the limitations of provable correctness.
In fact, who has supposedly proposed this idea? What did they actually say?
Also, you are now talking about performing “test runs”. Is that doing testing, now?
The usefulness of testing is beside the point. The argument is that testing would be dangerous.
By “testing” I meant “running the code to see if it works”, which includes unit testing individual components, integration or functional testing on the program as a whole, or the simple measure of running the program and seeing if it does what it’s supposed to. By “doing test runs” I meant doing either of the latter two.
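For concreteness, here is a toy sketch of that distinction (the `add` and `run_calculator` names are invented for illustration, not taken from anyone's actual codebase): a unit test exercises one component in isolation, while a functional test runs the assembled program end to end.

```python
# Toy sketch: a hypothetical two-part "calculator" program.

def add(a, b):
    """One small component of the program."""
    return a + b

def run_calculator(expression):
    """The assembled program: evaluate a simple 'x + y' expression."""
    left, _, right = expression.partition("+")
    return add(int(left), int(right))

# Unit test: exercises one component in isolation.
assert add(2, 3) == 5

# Functional test: runs the whole program end to end.
assert run_calculator("2 + 3") == 5
```

The worry in this thread applies only to the second kind: testing `add` on its own tells you nothing dangerous, but exercising the whole assembled program means actually running it.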
I would never ship, or trust in production, a program that had only been subjected to unit tests. This poses a problem for AI researchers, because while unit testing a potentially-FOOMing AI might well be safe (and would certainly be helpful in development), testing the whole thing at once would not be.
I think EY’s the original person behind a lot of this, but now the main visible proponents seem to be SIAI. Here’s a link to the big ol’ document they wrote about FAI.
On the specific issue of having to formally prove friendliness before launching an AI, I can’t find anything specific in there at the moment. Perhaps that notion came from elsewhere? I’m not sure, but it seems to follow straightforwardly from the premises of the argument (an AGI might FOOM; we want to make sure it FOOMs into something Friendly; we cannot risk running it unless we know it will) that you’d need some way of showing that an AGI codebase is Friendly without running it, and the only such way I can think of is a rigorous proof.
Life is dangerous: the issue is surely whether testing is more dangerous than not testing.
It seems to me that a likely outcome of a strategy built around searching for a proof is that, while you are still searching, some other team builds a machine intelligence that works, and whether your machine is “friendly” or not suddenly becomes irrelevant.
I think bashing testing makes no sense. People are interested in proving what they can about machines, in the hope of making them more reliable, but proving properties is a complement to testing, not a replacement for it.
The idea that we can build an intelligent machine, yet are incapable of constructing a test harness that restrains it, seems like a fallacy to me.
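Mechanically, a "test harness" here just means a supervising process with an enforced kill switch and only the channels out that you choose to grant. A minimal Python sketch of that supervisory pattern follows (illustrative only; nobody is claiming a subprocess timeout would contain a superintelligence, and real containment would add OS-level sandboxing, resource limits, and network isolation):

```python
import subprocess
import sys

def run_restrained(cmd, timeout_seconds=5):
    """Run an untrusted program as a child process under a hard time budget.

    Returns its stdout on a normal exit, or None if the supervisor had to
    kill it for exceeding the budget.
    """
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout_seconds
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return None  # the kill switch fired; the child was terminated

# Example: supervise a trivial stand-in for the "machine" that just prints.
output = run_restrained([sys.executable, "-c", "print('hello')"])
```

The dispute in this thread is not over whether such supervisors can be built, but over whether any of them scales to a program smarter than its jailers.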
Poke into these beliefs, and people will soon refer you to the AI-box experiment, which purports to show that a boxed intelligent machine can talk its way past a human gatekeeper.
...but so what? You don’t imprison a super-intelligent agent and then hand the key to a single human who gets to chat with the machine!