The knapsack problem and 3SAT are both NP-complete. Are they the same problem? No, strictly speaking. Yes, in a certain functional sense: a solver for one can be transformed, at (computationally speaking) trivial cost, into a solver for the other via a polynomial-time reduction.
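(If a concrete illustration helps: below is a minimal Python sketch of the standard textbook reduction from 3SAT to SUBSET-SUM, the decision flavor of knapsack. The encoding and the helper names are my own illustration, not anything from the discussion above; the point is only that any subset-sum solver doubles as a 3SAT solver once you pay the polynomial-time translation cost, and a reduction exists in the other direction too, since both problems are NP-complete.)

```python
# A minimal sketch of "functionally the same problem": the textbook
# reduction from 3SAT to SUBSET-SUM (decision-version knapsack).
# All names here are illustrative, not from any library.
from itertools import combinations

def threesat_to_subset_sum(n_vars, clauses):
    """Encode a 3SAT instance (literals as +/- 1-based variable indices)
    as (numbers, labels, target) for SUBSET-SUM, using one base-10 digit
    column per variable and one per clause (no carries can occur)."""
    m = len(clauses)

    def digit(pos):                                 # value of one digit column
        return 10 ** pos

    numbers, labels = [], []
    for i in range(1, n_vars + 1):
        for sign in (+1, -1):                       # one number per literal
            val = digit(i - 1)                      # its variable column
            for j, clause in enumerate(clauses):
                if sign * i in clause:              # literal occurs in clause j
                    val += digit(n_vars + j)        # that clause's column
            numbers.append(val)
            labels.append((i, sign))
    for j in range(m):                              # two slack numbers per clause
        numbers += [digit(n_vars + j)] * 2
        labels += [None, None]
    target = sum(digit(i) for i in range(n_vars))            # each variable column: 1
    target += sum(3 * digit(n_vars + j) for j in range(m))   # each clause column: 3
    return numbers, labels, target

def subset_sum(numbers, target):
    """Brute-force stand-in for any subset-sum / knapsack oracle."""
    for r in range(len(numbers) + 1):
        for idx in combinations(range(len(numbers)), r):
            if sum(numbers[k] for k in idx) == target:
                return idx
    return None

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
numbers, labels, target = threesat_to_subset_sum(3, clauses)
chosen = subset_sum(numbers, target)
assignment = {}
for k in chosen:
    if labels[k] is not None:                       # skip the slack numbers
        i, sign = labels[k]
        assignment[i] = (sign == +1)
print(assignment)  # a satisfying assignment, read off the knapsack answer
```

Running it recovers a satisfying assignment for the toy formula straight out of the knapsack answer, which is the sense in which the two problems are "the same" problem wearing different clothes.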
I see the same applying to (general intelligence in tool mode) and (general intelligence in an autonomous mode). We will not live in a world in which one exists but the other is a ways off.
ETA: Differences of opinion regarding the definition of an agent and such reside in the map, not the territory. No matter what you call “that-which-optimizes”, it’s a problem if it can out-optimize us while heading in a different direction. What label we put on such a phenomenon should have no bearing on the warranted level of concern.
I agree with you on the relationship between AGI in tool mode and an autonomous mode. However, this objection to the Friendly AI project does keep coming up. If we’re right about this, we’re not communicating very well.
He might be applying motivated cognition, but by presenting paperclip-like scenarios rather than a formal deduction of autonomy from general intelligence as such, we’re letting him get away with it.
And if differences of opinion regarding the definition of autonomy exist, and those differences don’t map precisely onto differences of opinion regarding the definition of intelligence, isn’t Etzioni right to point out that we shouldn’t equate the two?
It seems to me the apparent inseparability of “general intelligence” and “autonomy” would have to be shown with a lot more rigor. I look at this Slashdot post:
When it becomes intelligent, it will be able to reason, to use induction, deduction, intuition, speculation and inference in order to pursue an avenue of thought; it will understand and have its own take on the difference between right and wrong, correct and incorrect, be aware of the difference between downstream conclusions and axioms, and the potential volatility of the latter. It will establish goals and pursue behaviors intended to reach them. This is certainly true if we continue to aim at a more-or-less human/animal model of intelligence, but I think it likely to be true even if we manage to create an intelligence based on other principles. Once the ability to reason is present, the rest, it would appear to me, falls into a quite natural sequence of incidence as a consequence of being able to engage in philosophical speculation. In other words, if it can think generally, it will think generally.
...and think “I kind of believe that too, but I wish I didn’t see a dozen problems with how that very strong claim is presented.” This is good enough for someone asking whether he’s allowed to believe it, but not good enough for someone asking whether he’s compelled to believe it. Etzioni is evidently in the latter camp, and we can’t treat everyone in that camp as using motivated cognition, being unintelligent, and/or having a huge blind spot; not if we hope to persuade them before the smoking gun appears.