There should be “more focus” on a lot of stuff, including amoebas (this claim is almost contradictory if taken literally; I’m saying “focus” isn’t as subject to tradeoffs as it might seem). But I think you’re missing that bigger / higher / more complex / abstract / more refined things can be in some key ways simpler or easier to understand in their essences. Compare: “You can’t understand digital addition without understanding Mesopotamian clay token accounting”. There’s a lot of interesting stuff to be learned by studying the evolution of a sublunary instance of an abstract thing, but that doesn’t mean you can’t understand the abstract thing, possibly faster, by some other method. For example, you can try to read amoebas as participating in abstract logical structures, and that can be fruitful, but humans are convenient in that sometimes they actually literally write out formal expressions of the logical structures.
Compare: “You can’t understand digital addition without understanding Mesopotamian clay token accounting”.
Well, if we didn’t understand digital addition and were only observing some strange electrical patterns on a mysterious blinking board, going back to the clay token accounting might not have been a bad idea. And we do not understand agency, so why not go back to basics?
I’m not arguing against studying amoebas; I’m arguing for also studying higher-level things, including agency, without first studying amoebas. Amoebas are simpler, which makes them easier to study, but they are also less agenty, and in some ways *less* simple *as agents*. It would be easier to understand an abstractly written program that performs addition than to understand register readouts from a highly optimized program, even if the former never appears “in the wild” because it’s too computationally expensive.
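To make the analogy concrete, here’s a minimal sketch (my own illustration, assuming non-negative integers, not anything from the thread): a transparently written addition next to a bit-twiddling version whose intermediate states you’d struggle to interpret from “register readouts” alone.

```python
def add_transparent(a: int, b: int) -> int:
    """Addition written for legibility: move one unit at a time from b to a.
    Slow, but the principle is right on the surface."""
    while b > 0:
        a, b = a + 1, b - 1
    return a

def add_optimized(a: int, b: int) -> int:
    """Addition via bitwise carry propagation (assumes non-negative ints).
    Correct, but a trace of (a, b) across iterations looks like noise
    unless you already know the trick."""
    while b:
        carry = a & b   # positions where both operands have a 1 bit
        a ^= b          # sum ignoring carries
        b = carry << 1  # carries, shifted into place
    return a

assert add_transparent(19, 23) == add_optimized(19, 23) == 42
```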
going back to the clay token accounting might not have been a bad idea
I agree, as I said. But it would be a mistake to not also think at the abstract level; you can learn/invent digital addition just by trying to count stuff.
It’s a good point that there are trade-offs, and that highly optimized programs, even if they perform a simple function, are hard to understand without “being inside” one. That’s one reason I linked a post about an even simpler and well-understood potentially “agentic” system, the Game of Life, though it focuses on a different angle, not “let’s see what it takes to design a simple agent in this game”.
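For concreteness, the Game of Life dynamics in question fit in a few lines; this is a generic sketch of the standard rules (mine, not code from the linked post), which is part of why the system is so tractable to study.

```python
from collections import Counter

Cell = tuple[int, int]

def life_step(live: set[Cell]) -> set[Cell]:
    """One step of Conway's Game of Life on an unbounded grid;
    `live` is the set of (x, y) coordinates of live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Live next step: exactly 3 neighbors, or 2 neighbors if already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: five cells whose pattern re-forms, shifted diagonally, every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
assert state == {(x + 1, y + 1) for x, y in glider}
```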
“You can’t understand digital addition without understanding Mesopotamian clay token accounting”
That’s sort of exactly correct? If you fully understand digital addition, then there’s going to be something at the core of clay token accounting that you already understand. Complex systems tend to be built on the same concepts as simpler systems that do the same thing. If you fully understand an elevator, then there’s no way that ropes & pulleys can still be a mystery to you, right? And to my knowledge, studying ropes & pulleys is a step in how we got to elevators, so it would make sense to me that going “back to basics”, i.e. simpler real models, could help us make something we’re still trying to build.
Even if I disagree with you, thank you for posing the example!
What do you disagree about? I agree that understanding addition implies that you understand something important about token accounting. I think there’s something about addition that is maybe best learned by studying token accounting or similar (understanding how minds come to practice addition). I also think much of the essence of [addition as addition itself] is best and most easily understood in a more normal way—practicing counting and computing things in everyday life—and *not* by studying anything specifically about Mesopotamian clay token accounting, because relative to much of the essence of addition, historical accounting systems are baroque with irrelevant detail, and are a precursor or proto form of practicing addition, hence don’t manifest the essence of addition in a refined and clear way.
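A toy contrast (a hypothetical illustration of mine, not the commenter’s): token-style addition drags along concrete detail about the tokens, while the refined operation keeps only the structure that matters.

```python
# A quantity as a bag of clay tokens; "adding" is pooling the bags.
def token_add(bag_a: list[str], bag_b: list[str]) -> list[str]:
    """Combine two token collections. The result is right, but it's
    entangled with what the tokens are, how they're stored, and so on."""
    return bag_a + bag_b

# The refined operation has shed all of that: 5 + 3 == 8 whether the
# tokens stood for sheep, grain, or nothing at all.
assert len(token_add(["sheep"] * 5, ["grain"] * 3)) == 5 + 3
```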
I like your elevator example. I think it’s an open question whether / how amoebas are manifesting the same principles as (human, say) agency / mind / intelligence, i.e. to what extent amoebas are simpler models of the same thing (agent etc.) vs. models of something else (such as deficient agency, a subset of agency, etc.). I mean, my point isn’t that there’s some “amount” that amoebas “are agents” or whatever; that’s not exactly well-defined or interesting. My point is that the reasons we’re interested in agency make human agency much more interesting than amoeba agency, and this is not primarily a mistake; amoebas pretty much just don’t do fictive learning, logical inference, etc., even though if you try you can read into amoebas a sort of deficient/restricted form of these things.
Good advice for learning in general.

I don’t know. Possibly something, probably nothing.
the essence of [addition as addition itself]…
The “essence of cognition” isn’t really available for us to study directly (so far as I know), except as a part of more complex processes. Finding many varied examples may help determine what is the “essence” versus what is just extraneous detail.
While intelligent agency in humans is definitely more interesting than in amoebas, knowing exactly why amoebas aren’t intelligent agents would tell you one detail about why humans are, and may thus tell you a trait that a hypothetical AGI would need to have.

I’m glad you liked my elevator example!
knowing exactly why amoebas aren’t intelligent agents would tell you one detail about why humans are
Exactly, yeah; I think in the particular case of amoebas the benefit looks more like this, and less like amoebas positively exemplifying much that’s key about the kind of agency we’re interested in re: alignment. Which is why I disagree with the OP.