1950s-era computers likely couldn’t handle the complex AI tasks imagined here (image recognition, navigating rough Baffin Island terrain, finishing parts with hand tools, etc.) without taking up far more than one cubic meter.
The AI tasks are not instantiated on the island. Even in Carl’s writeup, he has the AI stuff running in the cloud, and the local hardware is just used for simple embedded control functionality, and teleoperation for anything complex.
My point about macro-scale computers was with respect to von Neumann probes or Mars colonies, where the speed of light makes teleoperation from Earth difficult or impossible. But in those cases the space constraints of 1950s-era computers aren’t limiting. Who cares if the AI control computer takes up the entire floor space of an 80 km crater and runs at a measly 100 kHz? It’s a starting point for better designs.
idk, you still have to fit video cameras and complex robotic arms and wifi equipment into that 1m^3 box, even if you are doing all the AI inference somewhere else! I have a much longer comment replying to the top-level post, where I try to analyze the concept of an autofac and what an optimized autofac design would really look like. Imagining a 100% self-contained design is a pretty cool intellectual exercise, but it’s hard to imagine a situation where it doesn’t make sense to import the most complex components from somewhere else (at least initially, until you can make computers that don’t take up 90% of your manufacturing output).
What do you need a video camera or industrial arm for?
In any case, a Raspberry Pi handles all of the mentioned tasks just fine, and doesn’t take up much space. For construction, a Stewart platform/crane is probably all that you would need, so long as the parts are designed to be assembled with such mechanisms.
Feynman is imagining lots of components being made with “hand tools”, in order to cut down on the amount of specialized machinery we need. So you’d want sophisticated manipulators to use the tools, move the components, clean up bits of waste, etc. Plus of course for gathering raw resources and navigating Canadian tundra. And you’d need video cameras for the system to look at what it’s doing (otherwise you’d only have feed-forward controls in many situations, which would probably cause lots of cascading errors).
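The cascading-error worry can be made concrete with a toy simulation (all numbers here are made up for the sketch): an actuator whose moves land with small random errors drifts steadily without sensor feedback, while a version that measures its position and corrects each command stays on target.

```python
import random

random.seed(0)  # deterministic for illustration

STEP_NOISE = 0.05  # hypothetical actuator error per 1.0 mm move, in mm

pos_open = 0.0    # feed-forward only: commands moves blindly
pos_closed = 0.0  # feedback: a position sensor corrects each command

for i in range(100):
    # feed-forward: command a 1.0 mm step; errors accumulate unchecked
    pos_open += 1.0 + random.gauss(0, STEP_NOISE)

    # feedback: measure where we actually are, command exactly what's missing
    command = (i + 1) * 1.0 - pos_closed
    pos_closed += command + random.gauss(0, STEP_NOISE)

print(f"open-loop error:   {abs(pos_open - 100.0):.3f} mm")
print(f"closed-loop error: {abs(pos_closed - 100.0):.3f} mm")
```

The open-loop error grows with the square root of the number of steps, while the closed-loop error stays bounded by a single step’s noise; which is the point of wanting the system to watch what it’s doing.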
I don’t know how big a Raspberry Pi would be if it had to be hand-assembled from transistors big enough to pick up individually. So maybe it’s doable!
You need sensor input. It doesn’t have to be visual imagery. Simple single datum sensors are much easier to work with.
Regarding arms and tooling, I understand now where you are coming from. He has a list of tools (CNC mill, lathe) that I wouldn’t necessarily call “hand-tools.” In almost all instances these are automated in factories without the use of robotic arms, and certainly not advanced dexterous capabilities.
He lists a robot with two arms at the beginning, but I took that to mean a simple three-segment arm in the mechanical-engineering sense, capable of 6DOF motion, plus a gripper to hold things in place. This is the minimum you would need for simple, straightforward tele-operated assembly steps. And it would be used solely for assembly—the machines would use their own locking mechanisms to hold the part in place.
As I said though, with some proper thought put into part design you don’t even need the three-segment robot arms. A gantry or Stewart platform would be sufficient.
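One reason a Stewart platform is attractive here: its inverse kinematics is closed-form, so the control computation is trivial. Given a desired platform pose, each actuator length is just a point-to-point distance. A sketch with hypothetical geometry (anchor points and dimensions are made up):

```python
import math

def leg_lengths(base_pts, plat_pts, pose):
    """Stewart platform inverse kinematics: each leg length is the
    distance from its base anchor to its platform anchor in the
    desired pose -- closed form, no iterative solving needed."""
    tx, ty, tz, roll, pitch, yaw = pose
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # ZYX (yaw-pitch-roll) rotation matrix
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]
    lengths = []
    for b, p in zip(base_pts, plat_pts):
        # platform anchor in world coordinates: translation + rotation
        w = tuple(
            t + sum(R[i][j] * p[j] for j in range(3))
            for i, t in enumerate((tx, ty, tz))
        )
        lengths.append(math.dist(w, b))
    return lengths

# Hypothetical geometry: six anchors on a unit circle, platform 2 m up
anchors = [
    (math.cos(a), math.sin(a), 0.0)
    for a in (math.radians(60 * k) for k in range(6))
]
print(leg_lengths(anchors, anchors, (0, 0, 2.0, 0, 0, 0)))
```

Compare that with the iterative numerical solving a serial arm’s inverse kinematics generally needs.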
Finally, the electronics are taken as vitamins to the system: you send a shipping container full of actual Raspberry Pis.
I was actually thinking of a pair of humanlike arms with many degrees of freedom, and one or more cameras looking at things. You can have dozens of single datum sensors, or one camera. It’s much cheaper. Similarly, once you have some robot arms, there’s no gain in including many single use motors. For example, when I include an arbor press, I don’t mean a motorized press. I mean a big lever that you grab with the robot arm and pull down, to press in a shaft or shape a screw head.
There are two CNC machine tools, to automate some part shaping while the robot does something else.
It is not at all obvious to me that one camera is “cheaper” than dozens of single datum sensors. The camera requires complicated, expensive, and error-prone image-analysis software, while a single datum sensor can drive a simple PID control loop on a microcontroller.
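For a sense of how little compute that style of control needs, here is a textbook discrete PID loop driving a toy first-order plant (the gains and plant constants are illustrative, not from the thread):

```python
class PID:
    """Textbook discrete PID controller -- the kind of control logic
    that fits comfortably on a small microcontroller."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy first-order plant (say, a heater) with made-up constants:
# temperature rises with applied power and leaks back toward ambient.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=100.0)
temp = 20.0
for _ in range(500):
    power = pid.update(temp, dt=0.1)
    temp += (power - 0.1 * (temp - 20.0)) * 0.1
```

A handful of multiplies and adds per sensor reading, versus running an image-analysis pipeline per camera frame.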
Load up a video of a manufacturing or assembly line, and count how many human-dexterous robotic arms are in use. In most cases, you’ll find zero. Even 3D printers and CNC machines, which are supposed to be general-purpose, find no need for the complexity of an industrial arm, let alone something comparable to a human arm.
We don’t build machines that way for a reason.