If you don’t want to face that problem, surely you can just constrain the machine to write readable code.
A complicated refactoring that cannot readily be shown to do the same thing as the code it replaces could simply be discarded. What is needed most is a path forward, not the ability to traverse every possible path that leads forward.
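To make that concrete, here is a minimal sketch in C++ of gating a proposed refactoring behind a differential equivalence check. The names `original` and `refactored` are hypothetical stand-ins for the before-and-after code; agreement on random inputs is evidence of equivalence rather than a proof of it, and a refactoring that resists even this cheap check gets thrown away.

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Hypothetical stand-ins: the code as we wrote it, and the
// machine's proposed refactoring of it.
std::uint64_t original(std::uint64_t n) {
    std::uint64_t sum = 0;
    for (std::uint64_t i = 1; i <= n; ++i) sum += i;
    return sum;
}

std::uint64_t refactored(std::uint64_t n) {
    return n * (n + 1) / 2;  // closed form the machine proposes
}

// Differential test: compare the two on many random inputs.
// Any mismatch means the refactoring is discarded outright;
// agreement is evidence of equivalence, not a proof.
bool plausibly_equivalent(int trials = 100000) {
    std::mt19937_64 rng(42);
    std::uniform_int_distribution<std::uint64_t> dist(0, 1u << 20);
    for (int i = 0; i < trials; ++i) {
        std::uint64_t n = dist(rng);
        if (original(n) != refactored(n)) return false;
    }
    return true;
}

int main() {
    std::cout << (plausibly_equivalent() ? "keep" : "discard")
              << " the refactoring\n";
}
```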
It seems likely that there will be a tradeoff between progress speed and safety, with “looking for a proof” being the slowest approach. Such a technique seems relatively unlikely to be effective if applied during a race.
What’s the use of an AI that writes code I could have written myself? If that’s the case, cut out the middleman and just write the damn stuff! I specifically want an AI that’s smarter than me; otherwise I have no use for it.
It seems likely that there will be a tradeoff between progress speed and safety, with “looking for a proof” being the slowest approach. Such a technique seems relatively unlikely to be effective if applied during a race.
Yes, this is true. That’s a problem. We don’t want the first AI that FOOMs effectively to win. We want a provably Friendly AI to win. If we demonstrate that proving Friendliness imposes constraints that don’t apply generally, we still cannot abandon those constraints! That would defeat the entire purpose!
What’s the use of an AI that writes code I could have written myself?
I’m confused. Isn’t this sort of thing what most machines are for? You might just as well ask, “What’s the use of a clothes-washing machine, when I could’ve just washed all of the clothes by hand?”
This is a good point, but there are two objections. First, the washing machine lets you substitute one kind of effort for another and thus use comparative advantage. I can earn enough money for a washing machine in much less time than the total time saved over the lifetime of the washing machine. With an AI, code is code; there’s no comparative advantage in writing code in order to avoid writing code.
Second, in referring to “code I could have written myself”, I was referring to qualitative advantages rather than time saved. To make the washing-machine analogy work with this, postulate a washing machine that doesn’t actually save you any time—maybe you have to sit about cranking a driving shaft for two hours—but produces much cleaner clothes, or a smaller chance of ripping, or some other qualitative measure.
I note that we already have automatic code generators, in some cases built into language features, like templates in C++. They’re occasionally useful but not likely to FOOM on us.
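A C++ function template is a small instance of exactly that: the compiler mechanically stamps out a fresh copy of the code for each type the template is used with. A minimal illustration:

```cpp
#include <iostream>
#include <string>

// From this one pattern the compiler generates a separate function
// for every T it is instantiated with: code writing code, but only
// by rote substitution, with no room for a FOOM.
template <typename T>
T largest(T a, T b) {
    return (a < b) ? b : a;
}

int main() {
    std::cout << largest(3, 7) << "\n";      // instantiates largest<int>
    std::cout << largest(2.5, 1.5) << "\n";  // instantiates largest<double>
    std::cout << largest(std::string("ab"),
                         std::string("cd")) << "\n";  // largest<std::string>
}
```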
First, the washing machine lets you substitute one kind of effort for another and thus use comparative advantage.
‘Comparative advantage’ requires exchanging one kind of effort for another because it’s a law regarding trade amongst humans. What you’re looking for is mechanical advantage, which often involves trading work for more work.
With an AI, code is code; there’s no [mechanical] advantage in writing code in order to avoid writing code.
No. If you spend 1 day writing code for an AI and then it writes all your code from now on, you’ve saved an arbitrarily large amount of time writing code.
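To put purely illustrative numbers on “arbitrarily large”: if the AI costs a fixed c days to write and each future program would have taken t days by hand, then after delegating n programs the net saving is

```latex
% Illustrative amortization of a one-time cost (all symbols assumed):
% c = days spent writing the AI, t = hand-coding days per program,
% n = number of programs delegated to it.
\[
  \text{net days saved}(n) = n\,t - c,
  \qquad
  \lim_{n \to \infty} \left( n\,t - c \right) = \infty
  \quad \text{for any fixed } c \text{ and } t > 0,
\]
```

so any fixed up-front cost is eventually amortized away.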
Second, in referring to “code I could have written myself”, I was referring to qualitative advantages rather than time saved.
Well, your question was “What’s the use of an AI...?”, to which I could legitimately bring up all sorts of advantages you hadn’t been referring to. If you had said, “What’s the use of a chicken when it can’t even dance?” I could respond “I could eat it”, and presumably that would be an answer to your question.
To make the washing-machine analogy work with this, postulate a washing machine that doesn’t actually save you any time—maybe you have to sit about cranking a driving shaft for two hours—but produces much cleaner clothes, or a smaller chance of ripping, or some other qualitative measure.
Your washing machine does still sound preferable to hand-washing, so I’m not sure what the point was.
I’m terribly confused as to what your point was, or why you think a code-writing machine would be useless. I want one!
Ok, yes, an AI that saves me the effort of writing code would be useful, fair enough. I think, however, that in the context of writing a FOOMing Friendly AI, code that I could have written is not going to be sufficient.

We’re in agreement.
We don’t want the first AI that FOOMs effectively to win. We want a provably Friendly AI to win.
This seems to me to be framing the problem incorrectly. Today’s self-improving systems are corporations. They are a mix of human and machine components. Nobody proves anything about their self-improvement trajectories, but that doesn’t necessarily mean that they are destined to go off the rails. The idea that growth will be so explosive that it can’t be dynamically steered neglects the possibility of throttles.
A “provably-Friendly AI” doesn’t look very likely to win, so due attention should be given to all the other possibilities with the potential to produce a positive outcome.