There’s another flaw in the model I presented: I was only thinking about goals that conflict with other agents’ goals. “Solve problem x for $5”-type tasks may not fall into that category, but may still require a lot of “intelligence” to solve (although narrow intelligence may be enough).
Good point—I’d missed that particular subtlety.