Another way to describe chaotic systems is as steerable systems. The fact that they have sensitive dependence on initial conditions means that if you know the dynamics and current state of the system, you can steer it into knowable future states with arbitrarily weak influence.
If you knew the precise dynamics and state of a classically chaotic system, you could predict it. If it’s unpredictable in practice, you don’t know those things.
To clarify further: Without any steering, any finite level of precision in a chaotic system means that you have a corresponding finite horizon beyond which you have essentially zero information about the state of the system.
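As a concrete illustration, here is a minimal Python sketch using the logistic map x → 4x(1−x) as a stand-in chaotic system; the 12-digit precision and the step counts are just example numbers:

```python
# Two copies of a chaotic system whose states agree to 12 decimal places.
# The gap grows by roughly a factor of 2 per step, so after a few dozen
# steps the "prediction" carries essentially no information about the
# true state.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x_true, x_model = 0.3, 0.3 + 1e-12   # identical up to measurement precision

for step in range(1, 51):
    x_true, x_model = logistic(x_true), logistic(x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: |error| = {abs(x_true - x_model):.1e}")
```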
If you can influence the system even a tiny bit, there exists a finite precision of measurement and modelling that allows you to not just predict, but largely control the states as far into the future as you like.
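A sketch of the steering claim in the same toy model, assuming exact knowledge of the dynamics and state and no outside disturbances; the target value, nudge budget, and horizon are arbitrary choices:

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)

def iterate(x, n):
    for _ in range(n):
        x = logistic(x)
    return x

x = 0.3            # current state, exactly known
target = 0.1234    # arbitrary state we want to be near 30 steps from now
horizon = 30
budget = 1e-6      # largest nudge we are allowed to apply right now

# Because perturbations grow by roughly 2x per step, nudges within the
# +/- 1e-6 budget fan out across essentially the whole attractor by
# step ~30, so some allowed nudge lands close to the target. A brute
# force grid search over the budget finds one.
candidates = (x + k * budget / 5000 for k in range(-5000, 5001))
best = min(candidates, key=lambda x0: abs(iterate(x0, horizon) - target))

print(f"nudge applied now: {best - x:+.1e}")
print(f"state at step {horizon}: {iterate(best, horizon):.5f} (target {target})")
```

This is open-loop steering and leans entirely on exact knowledge of the state and dynamics; the later comments about ongoing intervention are about what happens when that assumption fails.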
It’s helpful to avoid second-person in statements like this. It matters a whole lot WHO is doing the predicting, and at least some visions of “superintelligence” include a relative advantage in collecting and processing dynamics and state information about systems.
Just because YOU can’t predict it at all doesn’t mean SOMETHING can’t predict it a bit.
I don’t use “you” to mean “me”.
The main point is that “you” is the same in both cases; it might as well be “X”.
There’s no free lunch...no ability of an agent to control beyond that agent’s ability to predict.
That’s my confusion: why is “you” necessarily the same in both cases? Actually, what are the two cases again? In any case, the capabilities of a superintelligence with respect to comprehending and modeling/calculating extremely complex (chaotic) systems are exactly the sort of thing that is hard to know in advance.
There are LOTS of free lunches, from the perspective of currently-inefficient human-modeled activities. Tons of outcomes that machines can influence with more subtlety than humans can handle, toward outcomes that humans can define and measure just fine.
Because those are the cases I am talking about.
I didn’t say anything about superintelligences.
Yeah, coming back to this again, something seems very wrong with this to me. If you know a lot about the system you can make a big ripple, but if there are active controllers with tighter feedback loops they can compensate for your impact with much less intelligence, unless your impact can reliably disable them. If they can make themselves reliably unpredictable to you, e.g. by basing decisions on high-quality randomness that they can trust you can’t influence (in a deterministic universe this might be the low bits of an isolated, highly chaotic system), then they can make it extremely hard for your small intervention to accumulate into an impact that affects them. It can be made nearly impossible to interfere with another agent unless you manage to inject an agent glider into the chaotic system yourself, i.e. induce self-repairing behavior that can implement closed-loop control toward the outcomes you initially intended to achieve. Certainly you don’t need to vary that many dimensions to get a fluid simulator to end up hitting a complicated target, but it gets less tractable fast if you aren’t allowed to keep checking back in and interfering again.
Agreed, but only with ongoing intervention. If a system is chaotic, losing connection with it means it will stop doing what you steered it to do.
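A sketch of that contrast in the same toy model (the noise level and nudge budget are arbitrary): a trajectory placed exactly on a target orbit drifts off once intervention stops, while a controller that keeps measuring can hold it there with tiny corrections.

```python
import random

def logistic(x):
    return 4.0 * x * (1.0 - x)

random.seed(0)
TARGET = 0.75      # unstable fixed point: logistic(0.75) == 0.75
NOISE = 1e-12      # stand-in for unmodeled disturbances / finite precision
BUDGET = 1e-6      # maximum per-step nudge for the closed-loop controller

open_loop = TARGET     # steered onto the target once, then left alone
closed_loop = TARGET   # re-measured and re-nudged every step

for step in range(1, 61):
    open_loop = logistic(open_loop) + random.uniform(-NOISE, NOISE)
    x = logistic(closed_loop) + random.uniform(-NOISE, NOISE)
    nudge = max(-BUDGET, min(BUDGET, TARGET - x))   # tiny correction
    closed_loop = x + nudge
    if step % 15 == 0:
        print(f"step {step:2d}: open-loop error {abs(open_loop - TARGET):.1e}, "
              f"closed-loop error {abs(closed_loop - TARGET):.1e}")
```

Deviations near this fixed point grow by about 2x per step, so the abandoned trajectory wanders off within roughly 40 steps, while the attended one stays pinned using corrections far smaller than the budget.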
It’s not obvious that any system is chaotic at a physical level, i.e. chaotic relative to all theoretically possible measurement and prediction capabilities. It’s possible there is only quantum uncertainty and deterministic causality, and that “chaos”, as determined-but-incalculable behavior, is a description of the observer’s relationship to a phenomenon, not of the phenomenon itself.
The question is whether a given superintelligence is powerful enough to comprehend and predict some important systems which are chaotic to current human capabilities.
Chaos is not randomness. A deterministic universe still has sensitive dependence on initial conditions, the key trait of chaos. Fluid dynamics is chaotic, so even arbitrarily superintelligent reasoners can’t get far ahead of physics before sensitive dependence makes their predictions diverge from reality. This is true even if their mechanistic understanding is perfect and the universe isn’t random, so long as the system is in a chaotic regime and they don’t have perfect knowledge of its starting state.
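A sketch of that limit in the same toy model: the useful prediction horizon grows only logarithmically with measurement precision, so even enormous improvements in precision buy few extra steps (the 0.1 error threshold is an arbitrary stand-in for “prediction no longer useful”):

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)

x0 = 0.3
for digits in (6, 9, 12, 15):
    a, b = x0, x0 + 10.0 ** -digits   # measurement accurate to `digits` decimals
    steps = 0
    while abs(a - b) < 0.1 and steps < 200:
        a, b = logistic(a), logistic(b)
        steps += 1
    print(f"{digits} digits of precision -> horizon of roughly {steps} steps")
```

Errors roughly double each step, so each additional thousandfold improvement in precision extends the horizon by only about ten steps.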