Adam, you will find above that I contrasted human design to biology, not to nature.
Trying to toss a human a poisoned credit-default swap is more like trying to outrun a tiger or punch it in the nose—it’s not an Outside Context Problem where the human simply doesn’t understand what you’re doing; rather, you’re opposing the human on the same level and it can fight back using its own abilities, if it thinks of doing so.
Robin: This really won’t do. I think what you want is to think in terms of a production function, which describes a system’s output on a particular task as a function of its various inputs and features. Then we can talk about partial derivatives: rates at which output increases as a function of changes in inputs or features. The hard thing here is how to abstract well: how to collapse diverse tasks into similar task aggregates, and how to collapse diverse inputs and features into related input and feature aggregates. In particular, there is the challenge of how to identify which of those inputs or features count as its “intelligence.”
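As a rough sketch of that framing (my notation, an illustration rather than anything Robin has committed to): write the output on task aggregate $j$ as a production function of input and feature aggregates,

$$Y_j = F_j(x_1, x_2, \ldots, x_n),$$

so that the marginal return to input aggregate $i$ on that task is the partial derivative $\partial F_j / \partial x_i$. The hard questions are then how to choose the task aggregates $j$ and input aggregates $x_i$ in the first place, and which of the $x_i$ (or which bundle of them) gets labeled “intelligence.”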
How do you measure output? As a raw quantity of material? As a narrow region of outcomes of equal or higher preference in an outcome space? Economists generally deal in quantities that are relatively fungible and liquid, but what about when the “output” is a hypothesis, a design for a new pharmaceutical, or an economic rescue plan? You can say “it’s worth what people will pay for it” but this just palms off the valuation problem on hedge funds or other financial actors, which then need their own way of measuring the value somehow.
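To make the second option concrete (a hedged formalization on my part, not a measure anyone in this exchange has agreed on): fix a preference ordering $U$ and some null distribution over outcomes, and score an achieved outcome $o^*$ by how improbable it is to do at least that well by default,

$$\text{improbability}(o^*) = -\log_2 \Pr[\,U(o) \ge U(o^*)\,].$$

Nearly all of the difficulty is hidden in the choice of the null distribution and the preference ordering, which is exactly where the valuation problem reappears.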
There’s also a corresponding problem for complex inputs. As economists, you can to a large extent sit back and let the financial actors figure out how to value things, and you just measure the dollars. But the AI one tries to design is more in the position of actually being a hedge fund—the AI itself has to value resources and value outputs.
Economists tend to measure intermediate tasks that are taken for granted, but one of the key abilities of intelligence is to Jump Out Of The System and trace a different causal pathway to terminal values, eliminating intermediate tasks along the way. How do you measure fulfillment of terminal values if, for example, an AI or economy decides to eliminate money and replace it with something else? We haven’t always had money. And if there’s no assumption of money, how do you value inputs and outputs?
You run into problems with measuring the improbability of an outcome too, of course; I’m just saying that breaking up the system into subunits with an input-output diagram (which is what I think you’re proposing?) is also subject to questions, especially since one of the key activities of creative intelligence is breaking obsolete production diagrams.