Ah, that’s why I think reductionism would be very useful for you: everything can be broken down and understood in such a way that nothing remains that doesn’t represent testable consequences. Definitely read How an Algorithm Feels From Inside, as the following quote captures what you may be thinking when you wonder whether something is really intelligent.
Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark. [all the characteristics implied by the label “blegg”]
This answers every query, observes every observable introduced. There’s nothing left for a disguised query to stand for.
So why might someone feel an impulse to go on arguing whether the object is really a blegg [is truly intelligent]?
[brackets] are my additions.
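The “disguised query” point from the quote can be made concrete with a sketch (Python; the feature names are just the observables listed in the quote). Once every observable has been recorded, the label “blegg” is nothing but a cached conjunction of those observations, so there is no further fact left for “but is it really a blegg?” to ask about.

```python
# A hypothetical object with every blegg-observable already recorded.
observed = {
    "blue": True,
    "egg_shaped": True,
    "contains_palladium": True,
    "furred": True,
    "flexible": True,
    "opaque": True,
    "glows_in_dark": True,
}

def is_blegg(obj):
    """The label is just shorthand for the conjunction of observables."""
    return all(obj[feature] for feature in
               ("blue", "egg_shaped", "contains_palladium",
                "furred", "flexible", "opaque", "glows_in_dark"))

# Every query the label could answer has already been answered directly,
# so is_blegg(observed) adds no information beyond the dict itself.
print(is_blegg(observed))  # True
```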
Oh, sure, but the real question is: what are all the characteristics implied by the label “intelligent”?
The correctness of a definition is decided by the purpose of that definition. Before we can argue about the proper meaning of the word “intelligent”, we need to decide what we need that meaning for.
For example, “We need to decide whether that AI is intelligent enough to just let it loose exploring this planet” implies a different definition of “intelligent” compared to, say, “We need to decide whether that AI is intelligent enough to be trusted with a laser cutter”.
Those sound more like safety concerns than inquiries involving intelligence. Being clever and able to get things done doesn’t automatically make something share enough of your values to be friendly and useful.
Better questions would be “We need to decide whether that AI is intelligent enough to effectively research and come to conclusions about the world if we let it explore without restrictions” or “We need to decide whether that AI is intelligent enough to correctly use a laser cutter”.
Although, given great power (e.g. a laser cutter) and low intelligence, it might not achieve even its explicit goal correctly, and may accidentally do something bad (e.g. laser-cut a person).
One attribute of intelligence is the likelihood of said AI producing bad results non-purposefully: the more often it does, the less intelligent it is.
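The purpose-relativity being argued here can be sketched (Python; the capability dimensions, names, and thresholds are all invented purely for illustration, not a real measurement scheme): each decision defines its own “intelligent enough?” predicate over different capabilities, so the same system can answer the two questions differently.

```python
# Hypothetical capability profile for an AI; the dimensions and numbers
# are illustrative only.
ai = {"world_modeling": 0.9, "inference": 0.8, "fine_motor_control": 0.3}

def intelligent_enough_to_research(profile):
    """'Effectively research and come to conclusions about the world.'"""
    return profile["world_modeling"] > 0.7 and profile["inference"] > 0.7

def intelligent_enough_for_laser_cutter(profile):
    """'Correctly use a laser cutter' — mostly a matter of precise control."""
    return profile["fine_motor_control"] > 0.7

# The same AI passes one "intelligent enough?" test and fails the other,
# because each purpose picks out a different predicate.
print(intelligent_enough_to_research(ai))       # True
print(intelligent_enough_for_laser_cutter(ai))  # False
```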
Hm. I know of this sequence, though I haven’t gone through it yet. We’ll see.
On the other hand, I tend to be pretty content as an agnostic with respect to things “without testable consequences” :-)
Nah, the likelihood of producing bad results non-purposefully is an attribute of complexity and/or competence, not of intelligence.
My calculator has a very very low likelihood of producing bad results non-purposefully. That is not an argument that my calculator is intelligent.
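The calculator counterexample can be made explicit (a toy sketch; the agents and the error-rate metric are invented here for illustration): a “non-purposeful bad result” rate is minimized by any simple, reliable system, so it measures reliability rather than intelligence.

```python
import random

def unintended_error_rate(agent, trials=10_000, seed=0):
    """Fraction of trials in which the agent gives a wrong sum — a toy
    stand-in for 'producing bad results non-purposefully'."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    errors = 0
    for _ in range(trials):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        if agent(a, b, rng) != a + b:
            errors += 1
    return errors / trials

def calculator(a, b, rng):
    # Simple and perfectly reliable — but nobody would call it intelligent.
    return a + b

def erratic_agent(a, b, rng):
    # Hypothetically more capable overall, but occasionally slips.
    return a + b + (1 if rng.random() < 0.01 else 0)

print(unintended_error_rate(calculator))     # 0.0
print(unintended_error_rate(erratic_agent))  # ~0.01 (seed-dependent)
```

By this metric the calculator scores best, which is exactly the point: low error likelihood alone cannot distinguish an intelligent system from a trivially reliable one.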