What does improvement in the field of AI refer to? I think it isn’t wrong to characterize it as the development of programs able to perform tasks normally requiring human intelligence.
That’s a reasonably good description of the stuff that people call AI. Any particular task, however, is just an application area, not the definition of the whole thing. Natural language understanding is one of those tasks.
The dream of being able to tell a robot what to do, and it knowing exactly what you meant, goes beyond natural language understanding, beyond AI, beyond superhuman AI, to magic. In fact, it seems to me a dream of not existing—the magic AI will do everything for us. It will magically know what we want before we ask for it, before we even know it. All we do in such a world is to exist. This is just another broken utopia.
The dream of being able to tell a robot what to do, and it knowing exactly what you meant, goes beyond natural language understanding, beyond AI, beyond superhuman AI, to magic.
I agree. All you need is a robot that does not mistake “earn a college degree” for “kill all other humans and print an official paper confirming that you earned a college degree”.
All trends I am aware of indicate that software products will become better at knowing what you meant. For them to constitute an existential risk, however, they would have to become catastrophically worse at understanding what you meant while at the same time becoming vastly more powerful at doing what you did not mean. That doesn’t sound at all likely to me.
What I imagine is that at some point we’ll have a robot that can enter a classroom, sit down, and process what it hears and sees in such a way that it will be able to correctly fill out a multiple choice test at the end of the lesson. Maybe the robot will literally step on someone’s toes. This will then have to be fixed.
What I don’t think is that the first robot entering a classroom, in order to master a test, will take over the world after hacking the school’s WLAN and solving molecular nanotechnology. That’s just ABSURD.
Um, I think you meant “disagree”.