but brings forward the date by which we must solve it
Does it really? I already explained that if someone makes an automated engineering tool, all users of that tool are at least as powerful as some (U)FAI based upon this engineering tool. Adding an independent will to a tank doesn't suddenly make it win a war against a much larger force of tanks with no independent will.
You are rationalizing the position here. If you actually reason forwards, it is clear that the creation of such tools may instead be the life-saver when someone who thought he solved morality unleashes some horror upon the world. (Or someday hardware gets so good that very simple evolution-simulator-like systems could self-improve to the point of super-intelligence by evolving, although that is very far off in the future.)
Suppose I were to convince you of the butterfly effect, and explain that your sneezing could kill people months later. And suppose you couldn't see that not sneezing has the same probability. You'd be trying very hard not to sneeze, for nothing, avoiding sudden bright lights (if bright light triggers your sneeze reflex), and so on.
The engineering super-intelligences don't share our values, to the profound extent of not even sharing the desire to 'do something' in the real world. That is true even of the engineering intelligence inside my own skull, as far as I can tell. I build designs in real life because I have rent to pay, or because I am not sure enough a design will work and don't trust the internal simulator I use for design (i.e., imagining) [and that's because my hardware is very flawed]. This is also the case with all of my friends who are good engineers.
The issue here is that you conflate several things into 'human-level AI'. There are at least three distinct aspects to AI:
1: Engineering, and other problem-solving. This is the creation of designs in an abstract design space.
2: Will to do something in the real world in real time.
3: Morality.
People here see the first two as inseparable, while seeing the third as unrelated.
I already explained that if someone makes an automated engineering tool, all users of that tool are at least as powerful as some (U)FAI based upon this engineering tool.
Think of the tool and its human user as a single system. As long as that system is limited by the human's intelligence, it will not be as powerful as a system consisting of the same tool driven by a superhuman intelligence. And if the system isn't limited by the human's intelligence, then the tool is making decisions; it is an AI, and we're facing the problem of making it follow the operator's will. (And didn't you mean to say "as powerful as any (U)FAI"?)
In general, it doesn’t make much sense to draw a sharp distinction between tools and wills that use them. How do you draw the line in the case of a self-modifying AI?
Adding an independent will to a tank doesn't suddenly make it win a war against a much larger force of tanks with no independent will.
Reasoning by cooked anecdote? Why speak of tanks and not, for example, automated biochemistry labs? I can imagine such labs existing in the future. And one of them could win a war against all the other biochemistry labs in the world, and against the rest of the biosphere too, if it were driven by a superior intelligence.