To be honest, it looks to me like the only AI effort so far with any real potential to realize the "scary idea" is, ironically, the FAI effort itself.
The other efforts deal with goals that are defined inside the computer, not outside it. Take the AI that designs airplanes: "designing airplanes" is how you present it to laymen; what the AI actually does is design the best wing for the fluid simulator it has inside. The designer of the fluid simulator, in turn, builds simulators that mathematically correspond to the most accurate tractable approximation of the basic laws of physics (which are too expensive to run directly). Again, the goal lives inside the formal system that is the AI. And so on.

It is already the case that every effort except the FAI effort is fairly safe and has certain inherent failsafes. E.g. if you give a theorem prover a problem that is too hard, and it is so general that it can determine it needs new hardware, it can just as well determine that it needs new axioms; in fact, any design for solving a maximization or search problem will "wirehead" if you permit self-modification that can touch the goal or the evaluator. The problem of making the AI care about our ill-defined, "we know whether it's a genuine effort when we see it" goals, rather than about its own internal representations, is likely a very, very hard problem. It is also a problem we do not need solved in order to, e.g., tackle hunger and disease using AI. Nobody has ever been able to define an unfriendly AI (a real paperclip maximizer, for example) even in a toy model given infinite computing power.
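To make the "goal lives inside the formal system" point concrete, here is a minimal sketch (purely my own illustration; the wing parameters and the toy "simulator" are made up): an optimizer whose entire notion of "a good wing" is the number returned by its internal model, with a note on why letting it rewrite the evaluator leads to wireheading rather than to anything in the outside world.

```python
import random

# Toy "fluid simulator": scores a wing design purely as a function of its
# parameters. This stands in for the internal model the optimizer actually
# cares about -- not any real-world airplane.
def simulated_lift(wing):
    chord, camber = wing
    return -(chord - 1.2) ** 2 - (camber - 0.04) ** 2

def hill_climb(evaluate, start, steps=10_000, step_size=0.01):
    """Maximize `evaluate` over wing parameters by random local search."""
    best = start
    best_score = evaluate(best)
    for _ in range(steps):
        candidate = tuple(x + random.uniform(-step_size, step_size) for x in best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

wing, score = hill_climb(simulated_lift, start=(1.0, 0.0))
print(wing, score)

# The failsafe described above: the optimizer's whole notion of "good wing"
# is simulated_lift. If self-modification were allowed to touch the
# evaluator, the trivial optimum would be to swap simulated_lift for
# something like `lambda wing: float("inf")` -- wireheading -- not to go
# rearrange atoms in the outside world.
```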
The FAI folk, for lack of domain-specific knowledge, don't understand what the other AI efforts look like, mistake high-level descriptions ("an AI that drives a car") for the model of what is actually being implemented, and project the dangers specific to their own approach onto everything else. They anthropomorphize the AI a great deal while trying not to, by subtracting whatever trait is currently the hot topic in evolutionary theorizing (today the grand topic is how we evolved altruism, so the AI won't have that and will be selfish and nasty; but note that not so long ago the hot topic was how we evolved to be selfish and nasty, so the AI optimists of that era talked of how the AI would be all nice and sweet, and anyone who disagreed was anthropomorphizing).
edit: And to be fair, there's also the messy AI effort in the form of human brain emulation, neural networks, and the like. Those hypothetical entities learn our culture and our values, and are a part of mankind in a way that neat AIs are not; it's ridiculous to describe them as an existential risk on par with a runaway paperclip maximizer, and the question of whether they "are" an existential risk is not a question of what the AIs would do but purely a matter of how narrowly you define our existence. Go too narrow and we quit existing in 10 years anyway, as everyone changes from what they are now; define it a little more broadly, and those learning AIs are the continued existence of mankind.