I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...
IMO you should definitely do it. Even if LW karma were a good indicator of good ideas, more information rarely hurts, especially on a topic as important as this.
Ok—although maybe I should stick it in its own thread?
I realize much of this has been said before.
Part 1: AGI will come before FAI, because:
Complexity of algorithm design:
Intuitively, FAI seems orders of magnitude more complex than AGI. If I decided to start trying to program an AGI tomorrow, I would have ideas on how to start, and might even make a minuscule amount of progress. Ben Goertzel even has a (somewhat optimistic) roadmap for AGI in a decade. Meanwhile, AFAIK FAI is still stuck at the stage of Löb's theorem. The fact that EY seems to be focusing on promoting rationality and writing (admittedly awesome) Harry Potter fanfiction suggests that he doesn't currently know how to write an FAI (and nor does anyone else); otherwise he would be focusing on that now, rather than planning for the long term.
Computational complexity:
CEV requires modelling (and extrapolating) every human mind on the planet, while avoiding the creation of sentient entities. While modelling might be cheaper than ~10^17 FLOPS per human thanks to shortcuts, I doubt it's going to come cheap. Randomly sampling a subset of humanity to extrapolate from, at least initially, could make this problem less severe. Alternatively, this could be partially circumvented by having the AI follow a simpler interim utility function while bootstrapping to enough computing power to implement CEV, but then you have the problem of allowing it to bootstrap safely. Having to prove the friendliness of each step of self-improvement strikes me as something that could also be costly.
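To give a sense of scale, here is a back-of-the-envelope version of that estimate. All the numbers are order-of-magnitude assumptions (the ~10^17 FLOPS figure from above, a rough world population, and an arbitrary 1-million-person sample), not established facts:

```python
# Rough cost estimate for CEV-style whole-population modelling.
# Every constant here is an order-of-magnitude assumption.

FLOPS_PER_HUMAN = 1e17   # assumed cost of modelling one human in real time
POPULATION = 7e9         # world population, roughly
SAMPLE = 1e6             # a hypothetical random sample of humanity

full_cost = FLOPS_PER_HUMAN * POPULATION   # modelling everyone
sample_cost = FLOPS_PER_HUMAN * SAMPLE     # modelling only the sample

print(f"full population: {full_cost:.1e} FLOPS")   # ~7e26
print(f"1M-person sample: {sample_cost:.1e} FLOPS")  # ~1e23
```

Even the sampled version lands many orders of magnitude beyond current supercomputers, which is the point: sampling shrinks the problem but doesn't make it cheap.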
Finally, I get the impression that people are considering using Solomonoff induction. It's uncomputable, and while I realize that computable approximations exist, I would imagine these would be extremely expensive for anything non-trivial. Is there any reason for using SI for FAI more than for AGI, e.g. something to do with provability about the program's actions?
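For reference, the standard definition of the Solomonoff prior (this is the textbook formulation, nothing specific to FAI): with $U$ a universal prefix Turing machine,

$$ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} $$

i.e. the sum over all programs $p$ whose output begins with $x$, each weighted by $2^{-\ell(p)}$ where $\ell(p)$ is the program's length. Determining which programs contribute to the sum requires knowing which programs halt, which is why $M$ is uncomputable and why any computable approximation (e.g. time-bounded variants) has to pay a steep price.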
Infeasibility of relinquishment:
If you can't convince Ben Goertzel that FAI is needed, even though he is familiar with the arguments and is an advisor to SIAI, you're not going to get anywhere near a universal consensus on the matter. Furthermore, AI is increasingly being used in financial, and possibly soon military, applications, so there are strong incentives to speed up AI development. While these applications are unlikely to be full AGI, they could provide building blocks: I can imagine a plausible situation where an advanced AI that predicts the stock market could easily be modified into a universal predictor. The most powerful incentive to speed up AI development is the sheer number of people who die every day, plus the amount of negentropy lost in the event that the 2nd law of thermodynamics cannot be circumvented. Even if there were a worldwide ban on non-provably-safe AGI, work would still probably continue in secret by people who thought the benefits of an earlier singularity outweighed the risks, and/or were worried about ideologically opposed groups getting there first.
Financial bootstrapping:
If you are OK with running a non-provably-friendly AGI, then even in the early stages, when your AI can, say, write simple code or make reasonably accurate predictions but can't speak English or make plans, you can use those abilities to earn money and buy more hardware and programmers. This seems to be part of the approach Ben is taking.
Coming in Part II: is there any alternative? (And doing nothing is not an alternative! Even if FAI is unlikely to work, it's better than giving up!)
Welcome!
Definitely worth its own Discussion post, once you have min karma, which should not take long.
They already have it.