If you knew the precise dynamics and state of a classically chaotic system, you could predict it. If it’s unpredictable in practice, you don’t know those things.
To clarify further: Without any steering, any finite level of precision in a chaotic system means that you have a corresponding finite horizon beyond which you have essentially zero information about the state of the system.
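To make that horizon concrete, here is a toy sketch (my own illustration, not anything from the thread above): the r = 4 logistic map is a standard chaotic system whose Lyapunov exponent is ln 2, so any initial measurement error roughly doubles per step, and a given precision buys you only a logarithmic number of predictable steps. The function names and parameter values are all mine.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def prediction_horizon(x0, eps=1e-10, threshold=0.1, max_iter=200):
    """Run two trajectories whose initial states differ by eps and
    return the first step at which they disagree by more than
    threshold -- i.e. the practical prediction horizon for a
    measurement with precision eps."""
    a, b = x0, x0 + eps
    for n in range(max_iter):
        if abs(a - b) > threshold:
            return n
        a, b = logistic(a), logistic(b)
    return max_iter

# Since the error roughly doubles each step, a 1e-10 measurement error
# should reach order 0.1 after roughly log2(0.1 / 1e-10) ~ 30 steps.
print(prediction_horizon(0.3))
```

Note how weak the payoff from better measurement is: squaring your precision only doubles the horizon, which is the sense in which prediction alone hits a wall.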
If you can influence the system even a tiny bit, there exists a finite precision of measurement and modelling that allows you to not just predict, but largely control the states as far into the future as you like.
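And a matching toy sketch of the steering claim (again my own illustration, loosely in the spirit of OGY-style chaos control, with names and constants chosen by me): let the same chaotic map run freely, but whenever the state wanders within a tiny window of the unstable fixed point x* = 0.75, apply a correction bounded by that window. Ergodicity guarantees the trajectory eventually enters the window, and from then on an arbitrarily small control pins it there forever.

```python
def control_demo(x0=0.3, delta=0.005, steps=5000):
    """Iterate the chaotic r = 4 logistic map, but whenever the
    free-running state lands within delta of the unstable fixed
    point x* = 0.75, nudge it exactly onto x*.  Each nudge is
    smaller than delta, yet once captured the state is held at x*
    for the rest of the run."""
    xstar = 0.75
    x = x0
    captured_at = None  # step at which the tiny control first engages
    for n in range(steps):
        x = 4.0 * x * (1.0 - x)
        if abs(x - xstar) < delta:
            x = xstar  # correction of size < delta
            if captured_at is None:
                captured_at = n
    return captured_at, x

captured_at, final_state = control_demo()
print(captured_at, final_state)
```

The asymmetry with the previous sketch is the point: the same finite precision that gives only a ~30-step prediction horizon suffices, with tiny ongoing nudges, to dictate the state indefinitely.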
It’s helpful to avoid second-person in statements like this. It matters a whole lot WHO is doing the predicting, and at least some visions of “superintelligent” include a relative advantage in collecting and processing dynamics and state information about systems.
Just because YOU can’t predict it at all doesn’t mean SOMETHING can’t predict it a bit.
I don’t use “you” to mean “me”.
The main point is that “you” is the same in both cases; it might as well be “X”.
There’s no free lunch: no ability of an agent to control beyond that agent’s ability to predict.
That’s my confusion—why is “you” necessarily the same in both cases? Actually, what are the two cases again? In any case, the capabilities of a superintelligence with respect to comprehending and modeling/calculating extremely complex (chaotic) systems is exactly the sort of thing that is hard to know in advance.
There are LOTS of free lunches, from the perspective of currently-inefficient human-modeled activities. Tons of outcomes that machines can influence with more subtlety than humans can handle, toward outcomes that humans can define and measure just fine.
Because those are the cases I am talking about.
I didn’t say anything about superintelligences.