The very idea underlying AI is enabling people to get a program to do what they mean without having to explicitly encode all details.
I have never seen AI characterised like that before. Sounds like moonshine to me. Programming languages, libraries, and development environments, yes, that’s what they’re for, but those don’t take away the task of having to explicitly and precisely think about what you mean; they just automate the routine grunt work for you. An AI isn’t going to superintelligently (that is to say, magically) know what you mean if you didn’t actually mean anything.
Non-AI systems uncontroversially require explicit coding. How would you characterise AI systems, then?
XiXiDu’s characterisation seems suitable enough: programs able to perform tasks normally requiring human intelligence. One might add “or superhuman intelligence”, as long as one is not simply wishing for magic there. This is orthogonal to the question of how you tell such a system what you want it to do.
Indeed. But there is a how-to-do-it definition of AI, and it is kind of not about explicit coding. For instance, if a student takes an AI course as part of a degree, they are not taught explicit coding all over again; they are taught about learning algorithms, neural networks, etc.
They definitely require some amount of explicit coding of their values. You can try to reduce the burden of such explicit value-loading through various indirect means, such as value learning, indirect normativity, extrapolated volition, or even reinforcement learning (though that’s the most primitive and dangerous form of value-loading). You cannot, however, dodge the bullet.
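To make that point concrete, here is a minimal toy sketch in Python (all names are made up for illustration; this is not any real system’s API): even when the values are “learned” rather than written down directly, the learning is driven by something the programmer explicitly coded, whether a reward function or a rule for interpreting feedback.

```python
# Toy illustration: "indirect" value-loading still rests on explicit code.
# Every name here is hypothetical; this is a sketch, not a real framework.

def handwritten_reward(state):
    """Direct value-loading: the programmer spells out what counts as good."""
    return 1.0 if state.get("task_done") else 0.0

def learned_reward(human_feedback):
    """'Value learning': the reward is inferred from feedback, but the rule
    for interpreting that feedback (here, a simple average) is itself
    explicitly coded by the programmer."""
    if not human_feedback:
        return 0.0
    return sum(human_feedback) / len(human_feedback)

# Either way, something about what the system should want was written down
# explicitly; the indirection only moves where that code lives.
state = {"task_done": True}
print(handwritten_reward(state))    # 1.0
print(learned_reward([0.8, 1.0]))   # 0.9
```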
Programming languages, libraries, and development environments, yes, that’s what they’re for, but those don’t take away the task of having to explicitly and precisely think about what you mean; they just automate the routine grunt work for you.
What does improvement in the field of AI refer to? I think it isn’t wrong to characterize it as the development of programs able to perform tasks normally requiring human intelligence.
I believe that companies like Apple would like their products, such as Siri, to be able to increasingly understand what their customers expect their gadgets to do, without them having to learn programming.
In this context it seems absurd to imagine that when our products eventually become sophisticated enough to take over the world, they will do so due to objectively stupid misunderstandings.
What does improvement in the field of AI refer to? I think it isn’t wrong to characterize it as the development of programs able to perform tasks normally requiring human intelligence.
That’s a reasonably good description of the stuff that people call AI. Any particular task, however, is just an application area, not the definition of the whole thing. Natural language understanding is one of those tasks.
The dream of being able to tell a robot what to do, and it knowing exactly what you meant, goes beyond natural language understanding, beyond AI, beyond superhuman AI, to magic. In fact, it seems to me a dream of not existing—the magic AI will do everything for us. It will magically know what we want before we ask for it, before we even know it. All we do in such a world is to exist. This is just another broken utopia.
The dream of being able to tell a robot what to do, and it knowing exactly what you meant, goes beyond natural language understanding, beyond AI, beyond superhuman AI, to magic.
I agree. All you need is a robot that does not mistake “earn a college degree” for “kill all other humans and print an official paper confirming that you earned a college degree”.
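(For concreteness, a minimal toy sketch in Python of the kind of mistake being alluded to, with made-up names; whether such a mistake is at all likely in practice is exactly what is in dispute here. If the objective that actually gets encoded only checks for the certificate, it cannot distinguish the intended route from an unintended one.)

```python
# Toy sketch of objective mis-specification (all names hypothetical).
# The encoded goal checks only the proxy ("a degree certificate exists"),
# not the intended outcome ("the agent honestly earned a degree").

def encoded_goal(world):
    return world.get("certificate_printed", False)

intended_plan   = {"studied": True,  "certificate_printed": True}
degenerate_plan = {"studied": False, "certificate_printed": True}  # unintended route

# The proxy objective cannot tell the two plans apart:
print(encoded_goal(intended_plan))    # True
print(encoded_goal(degenerate_plan))  # True
```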
All trends I am aware of indicate that software products will become better at knowing what you meant. But in order for them to constitute an existential risk they would have to become catastrophically worse at understanding what you meant while at the same time becoming vastly more powerful at doing what you did not mean. This doesn’t sound at all likely to me.
What I imagine is that at some point we’ll have a robot that can enter a classroom, sit down, and process what it hears and sees in such a way that it will be able to correctly fill out a multiple choice test at the end of the lesson. Maybe the robot will literally step on someone’s toes. This will then have to be fixed.
What I don’t think is that the first robot entering a classroom, in order to master a test, will take over the world after hacking the school’s WLAN and solving molecular nanotechnology. That’s just ABSURD.
You cannot, however, dodge the bullet.
Because?
I agree. All you need is a robot that does not mistake “earn a college degree” for “kill all other humans and print an official paper confirming that you earned a college degree”.
Um, I think you meant “disagree”.