Is the big picture here actually showing the limits of intelligence? Suppose you have some entity with very large intelligence, but its information about the billiards table comes from a 4K camera with a wide-angle lens, so its positioning error is nonzero. The pixel grid of the camera and its limited color resolution bound the accuracy (you can do better than 1 pixel of positioning accuracy by using the color information around the edges of each ball, but not indefinitely better).
Call it n bits of precision for each ball’s position.
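To put a rough number on n, here is a back-of-envelope sketch; the table size, camera geometry, and subpixel factor are all assumptions I’m making for illustration, not measurements:

```python
import math

# Back-of-envelope for n, the bits of position information per ball axis.
# Every number here is an illustrative assumption.
TABLE_LENGTH_M = 2.54            # playing surface of a 9-foot table, roughly
CAMERA_PIXELS_LONG_AXIS = 3840   # 4K camera, long axis
SUBPIXEL_FACTOR = 10             # assumed gain from edge/color interpolation

pixel_m = TABLE_LENGTH_M / CAMERA_PIXELS_LONG_AXIS
resolution_m = pixel_m / SUBPIXEL_FACTOR

# Bits per axis: log2(range / resolution)
n_bits = math.log2(TABLE_LENGTH_M / resolution_m)
print(f"~{pixel_m * 1000:.2f} mm/pixel, ~{resolution_m * 1000:.3f} mm effective")
print(f"n ≈ {n_bits:.1f} bits per axis")
```

With these assumed numbers you get roughly 15 bits per axis; the exact value isn’t the point, only that it’s finite.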
Then ANY algorithm, no matter how intelligent, cannot generate an answer with more bits of precision than the input; this is the data processing inequality. (That assumes a single frame; you can extract a little more information from repeated observations, or by moving the camera.)
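A toy way to see why no post-processing helps: once two true positions quantize to the same camera reading, every deterministic algorithm is forced to treat them identically. A sketch, with arbitrary bit depth and positions:

```python
def quantize(x, n_bits, lo=0.0, hi=1.0):
    """Map x in [lo, hi) to one of 2**n_bits bins, like a pixel grid does."""
    levels = 2 ** n_bits
    return int((x - lo) / (hi - lo) * levels)

n_bits = 10
x1, x2 = 0.500100, 0.500900      # distinct true positions, same bin at 10 bits
obs1, obs2 = quantize(x1, n_bits), quantize(x2, n_bits)
assert obs1 == obs2              # the camera cannot tell them apart

def arbitrarily_clever_algorithm(obs):
    # Stand-in for ANY deterministic function of the observation.
    return obs * 12345 + 678

# The outputs are necessarily identical: the distinction was destroyed
# upstream, and no downstream computation can recreate it.
assert arbitrarily_clever_algorithm(obs1) == arbitrarily_clever_algorithm(obs2)
```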
So in terms of “winning the pool game”, there is a finite policy quality beyond which the marginal gain is zero: your odds of winning do not increase any further.
This would be true in a general sense, limiting what superintelligence can do in our world.
Also, just to be pedantic, a robot actuator also has finite control resolution: it accepts control packets at a fixed rate with a fixed number of bits of precision.
So you are limited by whichever stage has fewer bits, positioning accuracy or actuator accuracy, and winning the pool game on the first shot is difficult and unlikely.
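The weakest-link point in a sketch (both bit counts are assumed numbers, continuing the camera example above):

```python
# End-to-end precision of a sense-plan-act loop is capped by whichever
# stage carries the fewest bits. Both figures are illustrative assumptions.
sensor_bits = 15      # from the camera estimate above
actuator_bits = 12    # e.g., a 12-bit DAC driving the cue actuator

effective_bits = min(sensor_bits, actuator_bits)
print(f"Effective shot precision: {effective_bits} bits")
# Extra intelligence between the two stages raises neither number.
```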
Yup, I think this is right, though I don’t know whether it applies to a literal game of pool since the balls start in a particular relatively-simple arrangement.
It means the break is dominated by tiny effects you may not be able to measure beforehand. Once it’s down to simple 1- and 2-ball situations, sure, the robot can sink every shot.
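For intuition on why the break in particular is hopeless: a standard back-of-envelope is that an angular error grows by roughly a factor of (distance between collisions / ball radius) at each ball-ball collision. With assumed numbers:

```python
import math

# Back-of-envelope for chaos in the break; the free path and initial
# error are assumptions for illustration.
BALL_RADIUS_M = 0.0286           # standard pool ball, ~57.2 mm diameter
MEAN_FREE_PATH_M = 0.3           # assumed typical distance between collisions
AMPLIFICATION = MEAN_FREE_PATH_M / BALL_RADIUS_M   # ~10x per collision

error = 1e-4                     # assumed tiny initial angular error, radians
collisions = 0
while error < math.pi:           # once error spans all angles, prediction is gone
    error *= AMPLIFICATION
    collisions += 1
print(f"~{AMPLIFICATION:.0f}x growth per collision; "
      f"prediction lost after ~{collisions} collisions")
```

With these numbers, even a 0.0001-radian error swamps the prediction within about five collisions, and a break involves many more than that.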
Sort of, though it depends on how much compute is used and how much error you have in your sensors. In practice, chaos can be important for prediction, but usually this isn’t as important for acting on the world.
No. It’s still outright limited by sensor error; that’s a law of physics, and a well-proven one.
You can combine observations to extract more fractional bits, but there are limits.
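A sketch of both halves of that claim, with made-up noise figures: independent per-frame noise averages down like 1/sqrt(N), about half a bit per doubling of frames, but any shared systematic error (say, calibration bias) sets a floor that no amount of averaging crosses:

```python
import numpy as np

rng = np.random.default_rng(0)

true_pos = 0.4217
frame_noise = 1e-3               # independent noise per frame (assumed)
systematic = 4e-4                # fixed bias shared by every frame (assumed)
bias = rng.normal(0, systematic)

for n in [1, 16, 256, 4096]:
    frames = true_pos + bias + rng.normal(0, frame_noise, size=n)
    err = abs(frames.mean() - true_pos)
    print(f"N={n:5d}  error ≈ {err:.2e}")
# Error falls with N at first, then stalls near the systematic bias:
# more frames buy fractional bits only until the shared error dominates.
```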
Noisy-pool-like Xanatos manipulations of humans, or bootstrapping nanoforges without adequate resources, are precisely how a superintelligence could kill us.
If that isn’t possible, because it doesn’t have enough bits of information about biology to make the bioweapon, enough bits about human psychology to manipulate people into acting to its benefit, or enough resources to bootstrap a nanoforge, then superintelligence is not so capable that it can defeat all of us.
This is right, but a little vacuous without knowing how much sensor error you have, or how much error you can tolerate. I literally said it depends on sensor error, so what matters is how much intelligence is limited, not just whether there’s a limit.