It may be a red herring to focus on the definition of “intelligence” in this context. If you prefer, taboo the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do. The issue is what happens after one has a machine that reaches that point.
But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?
Irrelevant to the question at hand, which is what would happen if a machine had such capabilities. But, if you insist on discussing this issue also, machines with human-like abilities could be very helpful. For example, one might be able to train one of them to do some task and then make multiple copies of it, which would be much more efficient than individually training lots of humans. Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question).
Why is it distinct? Whether doing something is an error determines whether it's beneficial to acquire the ability and willingness to do it.
It’s distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn’t impact the ethical calculation as it would if we were sending a person.
(I think that’s what JoshuaZ was getting at. The “distinct question” would presumably be that of the AI’s potential personhood.)
Um… we already do all that to a pretty high extent and we don’t need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that’s all you need.
There are a large number of tasks where the level of expertise achievable with current technology is woefully insufficient. Anything with a strong natural-language requirement, for example.
Oh fun, we’re talking about my advisers’ favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP.
But here’s the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? It’s making AGI for something like that the equivalent of building a 747 to fly one person across a state? I can see various expert systems coming together as an AGI, but not starting out as such.
It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model.
I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, “It’s” is most likely a typo for “Isn’t.”
Granted, one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.
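To make that concrete with a toy sketch (mine, and obviously nothing like a real system): every token in that sentence is a valid English word, so a purely surface-level checker has nothing to flag. Picking the intended reading means scoring whole sentences in context, which is exactly the job the world-model is doing. The contextual_score function below is a hard-coded stand-in for that knowledge, not a claim about how it would actually be computed.

```python
# Toy illustration: why "It's" -> "Isn't" is invisible to a dictionary check
# and only shows up when whole readings are scored in context.

VOCAB = {"it's", "isn't", "making", "agi", "for", "something", "like", "that",
         "the", "equivalent", "of", "building", "a", "747", "to", "fly",
         "one", "person", "across", "state"}

def dictionary_check(sentence):
    """Naive spell-checker view: flag only tokens that aren't valid words."""
    tokens = sentence.lower().rstrip("?").split()
    return [t for t in tokens if t not in VOCAB]

def contextual_score(sentence):
    """Stand-in for a language/world model that scores whole readings.
    Here it is a hard-coded heuristic: a question opening with "Isn't"
    reads better than one opening with "It's"."""
    first = sentence.split()[0].lower()
    is_question = sentence.strip().endswith("?")
    return 1.0 if (is_question and first == "isn't") else 0.2

original = ("It's making AGI for something like that the equivalent of "
            "building a 747 to fly one person across a state?")
candidate = "Isn't" + original[len("It's"):]

print(dictionary_check(original))   # [] -- nothing flagged, no "typo" detected
print(contextual_score(original))   # 0.2
print(contextual_score(candidate))  # 1.0 -- the repaired reading wins
```

The point is just that the hard part isn't the string manipulation; it's whatever replaces that hard-coded scoring function with actual knowledge of 747s, states, and rhetorical questions.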