The thing that is most like an agent in the Tool AI scenario is not the computer and the software it is running. The agent is the combination of the human (who is, of course, very much like an agent) and the computer-and-software that constitutes the tool. Holden’s argument is that this combination agent is somehow safer. (Perhaps it is more familiar; we can judge the intentions of the human component from facial expressions, for example.)
The claim that Tool AI is an obvious answer to the Friendly AI problem is a paper tiger that Eliezer demolished. However, there’s a weaker claim, that SIAI is not thinking about Tool AI much, if at all, and that it would be worthwhile to think about (e.g. because it already routinely exists), which Eliezer didn’t really answer.
Answering that was the point of section 3. Summary: lots of other people also have their own favored solutions they think are obvious, none of which happen to be Tool AI. You shouldn’t really expect that SIAI would have addressed your particular idea before you or anyone else even talked about it.
If nobody’s considered it as an option before, isn’t that more reason to take it seriously? Low-hanging fruit is seldom found near well-traveled paths.
It’s been discussed in conversation as one among many topics at places like SIAI and FHI, but not singled out as something to write a large chunk of a 50,000-word piece about, ahead of other things.
The argument is not that SIAI should not have to address the idea at all, but that they should not have to have addressed the idea before anyone ever proposed it. The bulk of the article did address the idea; this one section explained why that particular idea wasn’t addressed earlier.
I don’t think that makes sense in this context. AI is still a largely unsolved, mysterious business. Any low-hanging fruit that was ever there is still there, because we haven’t even been able to pick a single apple yet.
It seems that way because AI keeps getting redefined as whatever we haven’t figured out yet. If you told some ancient Arabic scholar that, in the modern day, we can build things out of mostly metal and oil and sand that have enough knowledge of medicine or astronomy or chess or even just math to compete with the greatest human experts, machines that can plot a route across a convoluted city, stumble but remain standing when kicked, or recognize different people by their faces or the way they walk, he’d think we had that “homunculus” business pretty much under control.