My hope is that scaling up deep learning will result in an “animal-like”/irrational AGI long before it produces a perfect utility maximizer. By “animal-like AGI” I mean an intelligence that has some generalizable capabilities but is mostly cobbled together from domain-specific heuristics, which cause various biases and illusions. (I’m saying “animal-like” instead of “human-like” because it could still have a very non-human-like psychology.) This AGI might be very intelligent in various ways, but its weaknesses mean that its plans can still fail.