Musings on ways in which the analogy is deep, or at least seems deep to me right now: (It’s entirely possible my pattern-recognizer is overclocking and misfiring here… this is fun to think about though)
Automobiles move their cargo to a destination. To do this, they move themselves. Agents improve the world according to some evaluation function. To do this, they improve their own power/capability/position.
You often don’t know what’s inside an automobile until the thing reaches its destination and gets unloaded. Similarly, you often don’t know what’s inside an agent until it achieves a large amount of power/capability/position and starts optimizing for final goals instead of instrumental goals.
Automobiles are useful because they are general-purpose; you can invest a ton of money and R&D in a factory that makes automobiles, and comparatively little money in roads or rails, and then you have a transportation solution that works for almost everything, even though for most particular things you would have been better off building a high-speed conveyor belt or something. Agents are useful because they are general-purpose; you can invest a ton of money and R&D to make an agent AGI or APS-AI, and then you can make copies of it to do almost any task, even though for most particular tasks you would have been better off building a specialized tool AI.
EDIT: Automobiles tend to be somewhat modular, with a part that holds cargo distinct from the parts that move the whole thing. Agents—at least, the ones we design—also tend to be modular in the same way, with a “utility function” or “value network” distinct from the “beliefs” and “decision theory” or “MCTS code” (a minimal sketch of this separation is below). It’s less clear whether this would be true of evolved automobiles, though, or of evolved agents. Maybe it’s mostly just true of intelligently designed agents/automobiles.
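To make the modular picture concrete, here is a minimal sketch (the names and the toy world are all illustrative assumptions, not any real library’s API): an agent whose utility function, beliefs, and decision procedure are separate pieces, so you can swap any one out without touching the others.

```python
# Toy illustration of the modular agent structure described above:
# the utility function, the beliefs (world model), and the decision
# procedure are separate components, wired together only through
# narrow interfaces. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Iterable

State = int   # stand-in for whatever the world state is
Action = str

@dataclass
class Beliefs:
    """World model: predicts the next state given an action."""
    transition: Callable[[State, Action], State]

def decide(state: State,
           actions: Iterable[Action],
           beliefs: Beliefs,
           utility: Callable[[State], float]) -> Action:
    """Generic one-step-lookahead decision procedure. It knows nothing
    about what the utility function values or how the world works;
    swap either module out and this code is unchanged."""
    return max(actions, key=lambda a: utility(beliefs.transition(state, a)))

# Example: an agent that "wants" larger numbers, in a toy world
# where actions add or subtract one.
beliefs = Beliefs(transition=lambda s, a: s + 1 if a == "inc" else s - 1)
utility = lambda s: float(s)

print(decide(0, ["inc", "dec"], beliefs, utility))  # -> "inc"
```

The point of the sketch is just that `decide` never inspects what the utility function values: hand the agent a different `utility` and its competence is untouched, which is the sense in which the “cargo” is distinct from the parts that move the whole thing.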