> which animals cannot do at all, they can’t write computer code or a mathematical paper
This is not obvious to me (at least not for some senses of the word “could”). Animals cannot be motivated to attempt these tasks, and they cannot study maths or programming. If they could do those things, then it is not at all clear to me that they wouldn’t be able to write code or maths papers. To make this more specific: insofar as humans rely on a capacity for general problem-solving in order to do maths and programming, it would not surprise me if many animals also have this capacity to a sufficient extent, but simply cannot direct it in the right way. Note that animals even outperform humans at some general cognitive tasks; chimpanzees, for example, have much better short-term memory than humans.
> Moreover, we know a lot about human performance at those tasks, and it’s abysmal, even for top humans, and for AI research as a field.
Abysmal, compared to what? Yes, we can see that it is abysmal compared to what would in principle be information-theoretically possible. However, this doesn’t tell us very much about whether or not it is abysmal compared to what is computationally possible.
The problem of finding the minimal complexity hypothesis for a given set of data is not computationally tractable. For Kolmogorov complexity, it is uncomputable, but even for Boolean complexity, it is at least exponentially difficult (depending a bit on how exactly the problem is formalised). This means that in order to reason effectively about large amounts of data, it is (presumably) necessary to model most of it using low-fidelity methods, and then (potentially) use various heuristics in order to determine what pieces of information deserve more attention. I would therefore expect a “saturated” AI system to also frequently miss things that look obvious in hindsight.
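To make the intractability claim concrete, here is a toy illustration (my own sketch, not something from the discussion above): a brute-force search for the smallest AND/OR/NOT formula consistent with a given truth table. Even for two variables, the number of candidate expressions grows exponentially with formula size, and the number of distinct Boolean functions on n inputs is 2^(2^n), so exhaustive hypothesis search blows up almost immediately as the data grows.

```python
from itertools import product

VARS = ["x", "y"]

def exprs(size):
    """Yield every AND/OR/NOT expression over x, y with exactly `size` nodes."""
    if size == 1:
        yield from VARS
        return
    # Unary NOT over a subexpression one node smaller.
    for sub in exprs(size - 1):
        yield f"(not {sub})"
    # Binary AND/OR, splitting the remaining nodes between two subexpressions.
    for left in range(1, size - 1):
        for a in exprs(left):
            for b in exprs(size - 1 - left):
                yield f"({a} and {b})"
                yield f"({a} or {b})"

def min_expr(truth_table, max_size=10):
    """Return a smallest expression agreeing with truth_table on all inputs."""
    for size in range(1, max_size + 1):
        for e in exprs(size):
            if all(eval(e, {"x": x, "y": y}) == out
                   for (x, y), out in truth_table.items()):
                return e
    return None

# XOR has no short expression in the AND/OR/NOT basis, so the search must
# enumerate thousands of smaller formulas before finding a match.
xor = {(x, y): x != y for x, y in product([False, True], repeat=2)}
print(min_expr(xor))
```

This is only a two-variable toy; the point is that the search space already runs to thousands of expressions here, and scaling it up is hopeless, which is why a bounded reasoner must fall back on low-fidelity models and heuristics rather than exact minimal-complexity inference.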
> So it seems that, at least, there is quite a bit of room for a large initial boost over the current human-equivalent capacity.
I agree that AI systems have many clear and obvious advantages, and that e.g. simply running them at a higher clock speed will give you a clear boost regardless of what assumptions we make about the “quality” of their cognition compared to that of humans. The question I’m concerned with is whether a takeoff scenario is better modelled as “AI quickly bootstraps to incomprehensible, Godlike intelligence through recursive self-improvement”, or as “economic growth suddenly goes up by a lot”. All the obvious advantages of AI systems are compatible with the latter.