Eliezer: what I proposed is not a superintelligence, it’s a tool. Intelligence is composed of multiple factors, and what I’m proposing is stripping away the active, dynamic, live factor—the factor that has any motivations at all—and leaving just the computational part; that is, leaving the part which can navigate vast networks of data, help the user make sense of them, and reach conclusions he would not be able to reach on his own. Effectively, what I’m proposing is an intelligence tool that can serve as a supplement to the brains of its users.
How is that different from Google, or data mining? It isn’t. It’s conceptually the same thing, just with better algorithms. Algorithms don’t care how they’re used.
This bit of technology is something that will have to be developed to put together the first iteration of an AI anyway. By definition, this “making sense of things” technology needs to be strong enough that it allows a user to improve the technology itself; that is what an iterative, self-improving AI would be doing. So why let the AI improve itself, when it will more likely than not run amok despite the designers’ efforts and best intentions? Why not use the same technology that the AI would use to improve itself, to improve _your_self? Indeed, it seems ridiculous not to do so.
To build an AI, you need all the same skills that you would need to improve yourself. So why create an external entity, when you can be that entity?