Oh, I was unaware this was still an issue within this site. To LW, the question of free will is already solved). I encourage you to look further into it.
However, I think our current issue can become a little clearer if we taboo “programming”.
What specific differences in functionality do you expect between “normal” AI and “powerful” AI?
I was unaware this was still an issue within this site. To LW, the question of free will is already solved.
Let me point out that I am not “within this site” :-) Oh, and your link needs a closing parenthesis.
What specific differences in functionality do you expect between “normal” AI and “powerful” AI?
I am not familiar with your terminology, but are you asking what I would require to recognize some computing system as a “true AI”, or, basically, what intelligence is?
I would phrase it as, ‘Can you explain what on Earth you mean, without using terms that may be disputed?’
I don’t know if it would help to ask about examples of algorithms learning from experience in order to fulfill mechanically specified goals (or produce specified results). But the OP seems mainly concerned with the ‘goal’ part.
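To make “learning from experience in order to fulfill mechanically specified goals” concrete, here is a minimal sketch, assuming a toy two-armed bandit setting; the function names and reward numbers are invented for illustration:

```python
import random

def learn_from_experience(arms, steps=1000, epsilon=0.1):
    """Epsilon-greedy bandit: learns from observed rewards to fulfill
    the mechanically specified goal 'maximize average reward'."""
    estimates = [0.0] * len(arms)  # current value estimate per arm
    counts = [0] * len(arms)       # how often each arm was tried
    for _ in range(steps):
        if random.random() < epsilon:          # occasionally explore
            arm = random.randrange(len(arms))
        else:                                  # otherwise exploit the best estimate
            arm = max(range(len(arms)), key=lambda a: estimates[a])
        reward = arms[arm]()                   # experience: observe an outcome
        counts[arm] += 1
        # incremental average: the estimate drifts toward the true mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

# Two noisy payoff sources; the learner never 'understands' why one is better.
arms = [lambda: random.gauss(1.0, 1.0), lambda: random.gauss(2.0, 1.0)]
print(learn_from_experience(arms))  # the second estimate ends up higher
```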
are you asking what I would require to recognize some computing system as a “true AI”, or, basically, what intelligence is?
Somewhat. I think my question is better phrased as, “Why do you have a distinction between true intelligence and not true intelligence?”
My use of intelligence is defined (roughly) as cross-domain optimization. A more intelligent agent is just better at successfully doing lots of things it wants to do, and conversely, something that’s better at doing a larger variety of tasks than a similarly motivated agent is considered more intelligent. It seems to me to be a (somewhat lumpy and modular) scale, ranging from a rock, up through natural selection, humans, and then a superintelligent AI near the upper bound.
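As a toy sketch only (the agents, domains, and numbers below are invented, not a real measure), that scale can be pictured as scoring each agent’s competence across many distinct task domains, so that both breadth and skill count:

```python
# Toy illustration of 'cross-domain optimization' as a lumpy scale.
# Agents, domains, and scores are invented purely for illustration.
performance = {
    "rock":                {"chess": 0.0, "navigation": 0.0, "persuasion": 0.0},
    "natural selection":   {"chess": 0.0, "navigation": 0.5, "persuasion": 0.1},
    "human":               {"chess": 0.7, "navigation": 0.8, "persuasion": 0.8},
    "superintelligent AI": {"chess": 1.0, "navigation": 1.0, "persuasion": 1.0},
}

def cross_domain_score(agent):
    """Average competence across domains: a crude proxy for being
    'better at doing a larger variety of tasks'."""
    scores = performance[agent].values()
    return sum(scores) / len(scores)

for agent in performance:
    print(f"{agent}: {cross_domain_score(agent):.2f}")
```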
Why do you have a distinction between true intelligence and not true intelligence?
I have a distinction between what I’d be willing to call intelligence and what I’d say may look like intelligence but really isn’t.
For example, IBM’s Watson playing Jeopardy or any of the contemporary chess-playing programs do look like intelligence. But I’m not willing to call them intelligent.
My use of intelligence is defined (roughly) as cross-domain optimization.
Ah. No, in this context I’m talking about intelligence as a threshold phenomenon, notably as something that we generally agree humans have (well, some humans have :-D) and the rest of things around us do not. I realize it’s a very species-ist approach.
I don’t think I can concisely formulate its characteristics (that will probably take a book or two), but the notion of adaptability, specifically the ability to deal with new information and new environments, is very important to it.
Hm. If this idea of intelligence seems valuable to you and worth pursuing, I absolutely implore you to wade through the reductionism sequence (and the very similar Human’s Guide to Words) while, or before, you develop it more fully. I think it’d be an excellent resource for figuring out exactly what you mean to mean.
Hm. I know of this sequence, though I haven’t gone through it yet. We’ll see.
On the other hand, I tend to be pretty content as an agnostic with respect to things “without testable consequences” :-)
Ah, that’s why I think reductionism would be very useful for you. Everything can be broken down and understood in such a way that nothing remains that doesn’t represent testable consequences. Definitely read How an Algorithm Feels, as the following quote represents what you may be thinking when you wonder if something is really intelligent.
[Brackets] are my additions.
Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark. [all the characteristics implied by the label “blegg”]
This answers every query, observes every observable introduced. There’s nothing left for a disguised query to stand for.
So why might someone feel an impulse to go on arguing whether the object is really a blegg [is truly intelligent]?
Oh, sure, but the real question is: what are all the characteristics implied by the label “intelligent”?
The correctness of a definition is decided by the purpose of that definition. Before we can argue about the proper meaning of the word “intelligent”, we need to decide what we need that meaning for.
For example, “We need to decide whether that AI is intelligent enough to just let it loose exploring this planet” implies a different definition of “intelligent” compared to, say, “We need to decide whether that AI is intelligent enough to be trusted with a laser cutter”.
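A minimal sketch of that purpose-relativity (the attribute names and thresholds here are invented): the two decisions call for two different predicates, not one shared “intelligent” bit:

```python
# Toy sketch: 'intelligent enough' is relative to the decision at hand.
# Attribute names and thresholds are invented for illustration.

def smart_enough_to_explore(ai):
    # Letting it loose on a planet stresses open-ended adaptability.
    return ai["novelty_handling"] > 0.9 and ai["self_correction"] > 0.9

def smart_enough_for_laser_cutter(ai):
    # Trusting it with one tool stresses narrow reliability: a lower bar.
    return ai["tool_reliability"] > 0.8

candidate = {"novelty_handling": 0.5, "self_correction": 0.7, "tool_reliability": 0.95}

print(smart_enough_to_explore(candidate))        # False
print(smart_enough_for_laser_cutter(candidate))  # True: same AI, different verdicts
```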
For example, “We need to decide whether that AI is intelligent enough to just let it loose exploring this planet” implies a different definition of “intelligent” compared to, say, “We need to decide whether that AI is intelligent enough to be trusted with a laser cutter”.
Those sound more like safety concerns than inquiries involving intelligence. Being clever and able to get things done doesn’t automatically make something share enough of your values to be friendly and useful.
Better questions would be “We need to decide whether that AI is intelligent enough to effectively research and come to conclusions about the world if we let it explore without restrictions” or “We need to decide whether the AI is intelligent enough to use a laser cutter correctly”.
Although, given large power (e.g. a laser cutter) and low intelligence, it might not achieve even its explicit goal correctly, and may accidentally do something bad (e.g. laser-cutting a person).
One attribute of intelligence is the likelihood of said AI producing bad results non-purposefully. The more often it does, the less intelligent it is.
One attribute of intelligence is the likelihood of said AI producing bad results non-purposefully. The more often it does, the less intelligent it is.
Nah, that’s an attribute of complexity and/or competence.
My calculator has a very, very low likelihood of producing bad results non-purposefully. That is not an argument that my calculator is intelligent.