Eli, sometimes I find it hard to understand what your position actually is. It seems to me that your position is:
1) Work out an extremely robust solution to the Friendly AI problem
Only once this has been done do we move on to:
2) Build a powerful AGI
Practically, I think this strategy is risky. In my opinion, if you try to solve Friendliness without having a concrete AGI design, you will probably miss some important things. Secondly, I think that solving Friendliness will take longer than building the first powerful AGI. Thus, if you do 1 before getting into 2, I think it’s unlikely that you’ll be first.
But if, when Eliezer finishes 1), someone else is finishing 2), the two may be combinable to some extent.
If someone (let’s say Eliezer, having been convinced by the above post to change tack) finishes 2), and no one has done 1), then a non-friendly AGI becomes far more likely.
I’m not convinced by the singularity concept, but if it’s true, Friendliness is orders of magnitude more important than just making an AGI. The difference between friendly AI and no AI is big, but the difference between unfriendly AI and friendly AI dwarfs it.
And if it’s false? Well, if it’s false, making an AGI is orders of magnitude less important than that.
This cooperation thing sounds hugely important. What we want is for the AGI community to move in a direction where the best research is FAI-compatible. How can this be accomplished?
I say much the same thing in: The risks of caution.
The race doesn’t usually go to the most cautious.
Any sufficiently robust solution to 1 will essentially have to be proof-based programming; if your code isn’t firmly mapped to a proof that it won’t produce detrimental outcomes, then you can’t say in any real sense that it’s robust. When an overflow error could send the FAI’s utility value of cheesecake from 10^-3 to 10^50, you need some damn strong assurance that there won’t be an overflow.
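To make the overflow worry concrete, here is a minimal sketch in Rust (the names and numbers are made up for illustration, not taken from any actual FAI design) of the difference between arithmetic that wraps around silently and arithmetic that reports failure:

```rust
// Illustrative only: `base_utility` and `weight` are hypothetical values,
// not from any real system. Utilities are pretended to be scaled 32-bit ints.
fn main() {
    let base_utility: i32 = 1_000;   // "utility of cheesecake", scaled
    let weight: i32 = 3_000_000;     // some large importance weight

    // Unchecked multiply: wraps around silently, turning a modest value
    // into a huge or negative one without any warning.
    let naive = base_utility.wrapping_mul(weight);

    // Checked multiply: returns None on overflow, forcing the caller to
    // handle the failure instead of acting on a corrupted utility.
    match base_utility.checked_mul(weight) {
        Some(u) => println!("utility = {}", u),
        None => println!(
            "overflow detected; refusing to act (wrapped value would be {})",
            naive
        ),
    }
}
```

A genuinely proof-based approach would go further still: rather than catching the overflow at runtime, the types or an accompanying machine-checked proof would rule it out before the code ever runs.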
In other words, one characteristic of a complete solution to 1 is a robust implementation that retains all the security of the theoretical solution: in short, an AGI. And since this robustness extends down to the hardware level, it would be an implemented AGI.
TL;DR: 1 entails 2.
But if you do 2 before 1, you have created a powerful potential enemy who will probably work to prevent you from achieving 1 (unless, by accident, you have achieved 1 already).
I think that the key thing is to recognize the significance of that G in AGI. I agree that it is desirable to create powerful logic engines, powerful natural language processors, and powerful hardware design wizards on the way to solving the Friendliness and AGI problems. We probably won’t get there without first creating such tools. But I personally don’t see why we cannot gain the benefits of such tools without loosing the ‘G’enie.