They are the results of that same rationalistic “individualism” which wants to see in everything the product of conscious individual reason.
I lean libertarian, and have long worn the “yay Hayek!” mantle, but looking back… it seems like he’s unfairly using a) poorly-grounded attempts at large-scale social planning to justify b) a philosophical, universal belief in the superiority of self-organizing systems over designed ones (e.g. even in building a robot).
Eliezer Yudkowsky has previously criticized b) in the context of Rodney Brooks’s preferred robotic architecture. In some contexts, a centrally-planned mechanism that is the product of conscious individual reason is a better way to go. The inferiority of planned economies is not due to the very general superiority of self-organization that Hayek is claiming here.
I’ve run across that argument a couple of times, and my reply has been that all economies are planned. Some are planned by a small number of dumb humans with inadequate data, and others are planned by a very large number of dumb humans with more data; the latter are called market economies.
Also, a planned economy can choose to use markets when it predicts that they will achieve the desired result cost-effectively.
The problem is the size of the system relative to human cognition. Using specialization and management can increase the size of the system we can manage, but not without limit. That is why a self-improving AI is a potential threat: it can increase the size of the system it can manage well beyond what we can understand. It is also why I don’t think provably Friendly AI is possible (though I hope I am wrong about that), and why I think GAI will be developed incrementally from specialized AIs or from general but less-than-intelligent systems. It is also what gives me some hope that intelligence amplification can keep up with GAIs, at least for a while; we don’t need to start from scratch, just keep improving the size of the systems we can manage.
Control and knowledge don’t care about scale. One can learn stuff about whole galaxies by observing them. When you want to “manage” an AI, the complexity of your concern is restricted to the complexity of your wish.
Size, in describing a system, isn’t about scale; it’s the number of interacting components and the complexity of their interactions. And I don’t understand what you mean in your second sentence; it doesn’t make sense to me.
A galaxy also isn’t “just” about scale: it does contain more stuff, more components (but how do you know that, and what does it mean?). As for my second sentence: I meant using a telescope to make precise observations.
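To put a rough number on that notion of size: counting only potential pairwise interactions, complexity grows quadratically with the number of components, so a system can outgrow what a human can track long before it is physically large. A minimal sketch in Python (the component counts are arbitrary illustrative figures, not anything from the thread):

```python
# Rough illustration: if a system's "size" is measured by its interacting
# components, then even the count of potential pairwise interactions grows
# quadratically with the number of components. The counts below are made up.

def pairwise_interactions(n_components: int) -> int:
    """Upper bound on the number of distinct pairwise interactions."""
    return n_components * (n_components - 1) // 2

for n in (10, 100, 10_000):
    print(f"{n:>6} components -> up to {pairwise_interactions(n):,} pairwise interactions")
```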