From a pedagogical perspective, putting it into human terms is great for helping humans understand it.
A lot of stuff hinges on whether “robots can make robots”.
A human intelligible way to slice this problem up to find a working solution goes:
"""Suppose you have humanoid robots that can work in a car mechanic's shop (to repair cars), or a machine shop (to make machine tools), and/or work in a factory (to assemble stuff) like humans can do… that gives you the basic template for how "500 humanoid robots made via such processes could make 1 humanoid robot per unit of time".
If the 500 robots make more than 500 robots (plus the factory and machines and so on) before any of the makers' bodies wear out, then that set of arrangements is "a viable body production system".
They would have, in a deep sense, cracked the “3D printers that make 3D printers” problem.
QED."""
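(A minimal back-of-the-envelope sketch of that viability condition, where every number is a made-up placeholder rather than a claim about real robots, might look like this:)

```python
# Back-of-the-envelope check of the "viable body production system" condition.
# Every number here is an illustrative assumption, not a claim about real robots.

ROBOTS = 500                  # size of the builder fleet
BODY_LIFETIME_HOURS = 20_000  # working hours before a builder body wears out (assumed)
HOURS_PER_NEW_ROBOT = 5_000   # fleet-hours of labor to assemble one new robot (assumed)
CAPITAL_OVERHEAD = 0.25       # extra output to also replace the factory and machines (assumed)

# Total labor the fleet can supply before the first bodies fail.
fleet_hours = ROBOTS * BODY_LIFETIME_HOURS

# How many replacement robots that labor buys.
robots_built = fleet_hours / HOURS_PER_NEW_ROBOT

# Viable iff the fleet replaces itself *and* its capital stock in time.
required = ROBOTS * (1 + CAPITAL_OVERHEAD)
print(f"built={robots_built:.0f}, required={required:.0f}, viable={robots_built > required}")
```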
Looking at this argument, the anthropomorphic step at the beginning helps invite many anthropoids into the engineering problems at the outset using job titles (like "car mechanic") and monkey social games (like "a firm running a factory") that anthropoids can naturally slot into their anthropoid brains.
That part is “good pedagogy for humans”.
However, once the shape of the overall problem becomes clear to the student, one could point out that instead of 500 humanoid robots, maybe you could have 200 crab robots (for the heavy stuff) and 200 octopus robots (for the fiddly stuff) and it might even be cheaper and faster because 200+200 < 500.
And there’s no obvious point where the logic of “potentials for re-arrangement into more versions of less intelligible forms” breaks down, as you strip out the understandable concepts (like “a machine that can be a car mechanic”) while keeping the basic overall shape of “a viable body production system”.
Eventually you will have a very confusing hodgepodge of something like "pure machinery" in a "purely mechanical self-reproducing system" that is very efficient (because each tweak was in the direction of efficient self-reproduction as an explicit desideratum).
...
If I'm looking at big-picture surprises… I think I've been surprised by how important human pedagogy turns out to be??? Like, thirty years before 2027, the abstract shape of "abstract shapes to come" was predictable from first principles (although it is arriving later than some might have hoped and others might have feared).
"Designing capacity designing design capacity" (which is even more abstract than "humanoid manufacturers manufacturing humanoid manufacturers") automatically implies a positive feedback loop leading to something "explosive" happening (relative to earlier timescales).
Positive feedback is a one-liner in the math of systems dynamics. It can (and predictably will) be an abstract description of MANY futures.
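(For concreteness, the one-liner in question is just "growth rate proportional to the current stock", and its solution is an exponential:)

```latex
% Positive feedback: growth rate proportional to the quantity itself.
\frac{dx}{dt} = kx \quad\Longrightarrow\quad x(t) = x_0 e^{kt}, \qquad k > 0.
```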
What I didn’t predict at all was that “institutions made of humans will need to teach all their human members about their part of the plan, and talks about the overall plan at a high level will occur between human managers, and so human pedagogy will sharply constrain the shape of the human plans that can be coordinated into existence”.
Thus we have RL+LLM entities, which are basically Hanson’s ems, but without eyes or long term memory or property rights or a historical provenance clearly attributable to scanning a specific human’s specific brain.
But RL+LLM entities are very intelligible! Because… maybe because "being intelligible" made it more possible to coordinate engineers and managers and investors around an "intelligibly shared vision" with roughly that shape?
This is such an abstract idea (i.e. the idea that "pedagogical efficiency" predicts "managerial feasibility") that it is hard for me to backpropagate the abstract update into detailed models, turn the crank on those models, and then predict the future in a hopefully better way.
...
Huh. OK. Maybe I just updated towards “someone should write a sequence that gets around to the mathematics of natural law and how it relates to political economy in a maximally pedagogically intelligible way”?
Now that I think this explicitly, I notice Project Lawful was kind of a motion in this direction (with Asmodeus, the tyrant god of slavery, being written as “the god of making agents corrigible to their owner” (and so on)) but the storytelling format was weird, and it had a whole BDSM/harem thing as a distraction, and the main character asks to be deleted from the simulation because he finds it painful to be the main character, and so on.
((Also, just as a side complaint: Asmodeus’s central weakness is not understanding double marginalization and its implications for hierarchies full of selfish agents and I wish someone had exploited that weakness more explicitly in the text.))
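((For readers who want that complaint made concrete: below is the standard textbook instance of double marginalization, with linear demand P = a − bQ and purely hypothetical numbers. Two selfish monopolists stacked in a hierarchy each add their own markup, and together they end up with a higher price and less total profit than a single integrated agent would capture.))

```python
# Double marginalization with linear demand P = a - b*Q and marginal cost c.
# The numbers are illustrative; the inequality holds for any a > c > 0, b > 0.
a, b, c = 10.0, 1.0, 2.0

# Integrated monopolist: maximize (P - c)*Q  =>  Q = (a - c) / (2b).
q_int = (a - c) / (2 * b)
p_int = a - b * q_int
profit_int = (p_int - c) * q_int

# Two-tier chain: upstream sets wholesale price w; downstream, facing
# marginal cost w, sells Q(w) = (a - w) / (2b). Upstream's best w is (a + c)/2.
w = (a + c) / 2
q_chain = (a - w) / (2 * b)
p_chain = a - b * q_chain
profit_up = (w - c) * q_chain
profit_down = (p_chain - w) * q_chain

print(f"integrated: P={p_int}, total profit={profit_int}")                 # P=6, profit=16
print(f"chain:      P={p_chain}, total profit={profit_up + profit_down}")  # P=8, profit=12
```

((The stacked markups leave surplus on the table that an integrated hierarchy, or one capable of side payments, could capture, which is exactly the exploitable weakness being complained about.))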
But like… hypothetically you could have “the core pedagogical loop of Project Lawful” reframed into something shorter, with less kinky sex, and no protagonist who awakens to his own suffering and begs the author to let him stop being the viewpoint character?
...
I was not expecting to start at “the humanoid robots are OK to stick in the story to help more humans understand something they don’t have the math to understand for real” and end up with “pedagogy rules everything around me… so better teaching about the math of natural law is urgent”.
Interesting… and weird.