“Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible.”
I don’t think you needed control systems to show this. Gravity itself is as much of a ‘control system’ - it minimizes the potential energy of the system! Heck, by that standard, lots of laws of physics fit the definition—they narrow down the set of possible realities…
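To make the gravity point concrete, here’s a toy sketch (my own illustration, not from the post): a damped particle under gravity ends up at the floor no matter where it starts, so a blind physical law “funnels” a wide range of initial states into essentially one final configuration.

```python
# Toy sketch: a bouncing particle that loses energy on each bounce.
# Wildly different starting heights all get funneled to (nearly) the floor.

def settle(height, velocity=0.0, g=9.8, damping=0.5, dt=0.01, steps=10000):
    """Simulate a particle falling onto a floor at height 0 with lossy bounces."""
    for _ in range(steps):
        velocity -= g * dt          # gravity pulls down, blindly
        height += velocity * dt
        if height < 0.0:            # hit the floor: bounce, losing energy
            height = 0.0
            velocity = -velocity * damping
    return height

# Starting at 0.5 m, 3 m, or 100 m, the particle ends up essentially at 0.
print([settle(h) for h in (0.5, 3.0, 100.0)])
```

No part of this “knows” where the particle should end up, yet the set of reachable final states is tiny compared to the set of initial ones.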
“This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means?”
So, I’m still not sure what you mean by ‘similar means’.
We know the broad overview of how brains work—sensory neurons get triggered, they trigger other neurons, and through some web of complex things motor neurons eventually get triggered to give outputs. The stuff in the middle is hard; some of it can be described as “memory” (patterns that somehow represent past inputs), and some can be represented by various other abstractions. Control systems are probably good ways of interpreting a lot of combinations of neurons, and some examples have been brought up here. It seems unlikely that they would capture all of them—but if you stretch the analogy far enough, perhaps they could.
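For what it’s worth, here’s a minimal sketch of what “interpreting a combination of neurons as a control system” might mean (my own illustration, assuming a simple proportional negative-feedback loop): the unit only ever compares what it senses now to a fixed reference and outputs a correction—no model of the future, no reasoning—yet it holds the variable near the setpoint even under an unknown steady disturbance.

```python
# Minimal negative-feedback loop: sense, compare to reference, push back.

def run_loop(setpoint, disturbance, gain=2.0, dt=0.1, steps=500):
    state = 0.0
    for _ in range(steps):
        error = setpoint - state      # "sensory" comparison to the reference
        action = gain * error         # "motor" output, proportional only
        state += (action + disturbance) * dt
    return state

# Setpoint 5.0, constant unknown pull of -1.0 on the state:
print(round(run_loop(setpoint=5.0, disturbance=-1.0), 2))  # → 4.5
```

Note the residual offset (4.5 rather than 5.0): a purely proportional controller settles where the correction exactly cancels the disturbance, which is the classic steady-state error of a P-only loop. The point stands regardless: the loop narrows the possible final states by entirely arational means.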
“And how much of the functioning of an artificial organism must be designed to use these means?”
Must? I’d guess absolutely none. The way you have described them, control systems are not basic—for a ‘future perception’ to truly determine current actions would break causality. So it’s certainly possible to describe or build an artificial organism without using control systems, though given how useful you’re describing them to be, it seems like doing so would be pretty inconvenient, and would make an already impossibly hard problem pointlessly harder.