Then it does seem like your AI arguments are playing reference class tennis with a reference class of “conscious beings”. For me, the force of the Tool AI argument is that there’s no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden’s Tool AI: you’d feed it data, it’d make predictions, you could choose to use the predictions.
(This is, naturally, the view of that school of AI implementers. Scott Brown: “People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.”)