Artificial Intelligence dates back to 1960. Fifty years later it has failed in such a humiliating way that it was not enough to move the goal posts; the old, heavy wooden goal posts have been burned and replaced with lightweight, portable aluminium goal posts, suitable for celebrating such achievements as occur from time to time.
Mainstream researchers have taken the history on board and now sit at their keyboards typing in code to hand-craft individual, focused solutions to each sub-challenge. Driving a car uses drive-a-car vision. Picking a nut and bolt from a component bin uses nut-and-bolt vision. There is no generic see-vision. This kind of work cannot go FOOM, for deep structural reasons. All the scary AI knowledge, the kind of knowledge that the pioneers of the 1960s dreamed of, stays in the brains of the human researchers. The humans write the code. Though they use meta-programming, it is always “well-founded” in the sense that level n writes level n-1, all the way down to level 0. There is no level-n code rewriting level-n code. That is why it cannot go FOOM.
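A minimal sketch of that distinction, in Python with hypothetical names (the nut-and-bolt detector and its parameters are illustrative, not from any real system): the level-1 generator emits narrow level-0 code, and the regress bottoms out at the human who wrote the generator, because no level ever rewrites itself.

```python
# "Well-founded" meta-programming: level 1 writes level 0,
# but no code at level n ever rewrites code at level n.
# All names and the task are hypothetical illustrations.

def generate_detector(part_name: str, threshold: float) -> str:
    """Level-1 code: a human-written generator that emits level-0 source."""
    return f'''
def detect_{part_name}(score: float) -> bool:
    """Level-0 code: a narrow, single-purpose detector."""
    return score > {threshold}
'''

# The human decides which detectors exist and with what parameters.
level_0_source = generate_detector("nut_and_bolt", 0.8)

namespace = {}
exec(level_0_source, namespace)                # level 1 instantiates level 0
print(namespace["detect_nut_and_bolt"](0.9))   # True

# Note what is absent: generate_detector never inspects or modifies
# its own source, so the chain of authorship ends at the human programmer.
```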
Importantly, this restraint is enforced by a different kind of self-interest than avoiding existential risk. The researchers have no idea how to write code with level n rewriting level n. Well, maybe they have the old ideas that never came close to working, but they know that if they venture into that toxic quagmire they will have nothing to show before their grant runs out, funders will think they wasted their grant on quixotic work, and their careers will be over.
Obviously past failure can lead to future success. Even a hundred and fifty years of failure can be trumped by eventual success. (Think of steam car work, which finally succeeded with the Stanley Steamer, only to be elbowed aside by internal combustion.) So it is fair enough for the SI to say that past failure does not in itself rule out an AI-FOOM. But you cannot just ditch the history as though it never happened. We have learned a lot, most of it about how badly humans suck at programming computers. Current ideas of AI-risk are too thin to be taken seriously because there is no engagement with the history: researchers are working within a constraining paradigm because the history has dumped them in it, but the SI isn’t worrying about how secure those constraints are; it is oblivious to them.