New paper by Johannes Jaeger, "Artificial intelligence is algorithmic mimicry: why artificial 'agents' are not (and won't be) proper agents", focusing on the difference between organisms and machines.
TL;DR: The author argues that focusing on computational complexity and efficiency alone is unlikely to lead to true AGI.
My key takeaways
Autopoiesis and agency
Autopoiesis is an organism's capacity to create and maintain itself.
Living systems have the capacity to set their own goals; machines, on the other hand, depend on external entities (mostly humans) to set theirs.
Large vs. small worlds
Organisms navigate complex environments with undefined rules. AI, by contrast, operates in a "small" world confined to well-defined computational problems, where everything, including problem scope and relevance, is pre-determined.
Curious about the paper, I looked up the author on X, where he was asked, "How do you define these terms 'organism' and 'machine'?" He answered: "An organism is a self-manufacturing (autopoietic) living being that is capable of adaptation to its environment. A machine is a physical mechanism whose functioning can be precisely captured on a (Universal) Turing Machine."
You can read the full summary here.
It sounds to me like the author isn’t thinking about near-future scenarios, just existing AI.
Making a machine autopoietic is straightforward if it's got the right sort of intelligence. We haven't yet made a machine with the right sort of intelligence to do it, but there are good reasons to think we're close. AutoGPT and similar agents can roughly, functionally understand a core instruction like "maintain, improve, and perpetuate your code base"; they're just not quite smart enough to act on it effectively. Yet. So engaging with the arguments about what remains between here and there is the critical bit. Maybe it's around the corner, maybe it's decades away. It comes down to the specifics. The general argument that "Turing machines can't host autopoietic agents" is obviously wrong.
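To make that concrete, here is a bare-bones sketch of the kind of loop I mean: an agent that reads its own source, asks a model for an improvement toward a self-maintenance goal, and applies whatever comes back. This is purely illustrative, not the paper's argument and not AutoGPT's actual implementation; `call_llm` is a hypothetical placeholder for whatever model API you'd plug in.

```python
# Hypothetical sketch of a self-maintaining agent loop (not AutoGPT's real code).
from pathlib import Path

GOAL = "Maintain, improve, and perpetuate your code base."

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; here it always declines to change anything."""
    return "NO_CHANGE"

def agent_step(source_file: Path) -> None:
    """One iteration: show the model its own source and apply any replacement it proposes."""
    source = source_file.read_text()
    prompt = (
        f"Goal: {GOAL}\n\n"
        f"Current source code:\n{source}\n\n"
        "Reply with a complete replacement file, or the single word NO_CHANGE."
    )
    suggestion = call_llm(prompt)
    if suggestion.strip() != "NO_CHANGE":
        source_file.write_text(suggestion)  # the "self-maintenance" step

if __name__ == "__main__":
    agent_step(Path(__file__))  # the agent operates on its own source file
```

Whether a loop like this ever counts as autopoiesis in Jaeger's sense is exactly what's under dispute; the sketch only shows that nothing about running on a Turing machine forbids writing one.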
I'm not sure the author actually makes this argument, but your summary made it sound like they do.