Bravo.
It doesn’t seem (ha!) that an AI could deduce our psychology from a video of a falling rock, not because of information bounds but because of a lack of correlation: that video seems (ha!) about equally likely to have come from any number of alien species as from humans.
You’re not being creative enough. Think what the AI could figure out from a video of a falling rock. It could learn something about:
The strength of the gravitational field on our planet
The density of our atmosphere (from drag-induced deviations from the time-squared law of free fall; see the toy fit sketched below)
The chemical composition of our planet (from the appearance of the rock)
The structure of our cameras (from things like lens flares and other artefacts)
The chemical composition of whatever is illuminating the rock (from the spectrum of the light)
The colors we see in (our color cameras record in RGB)
For that matter, the fact that we see at all, instead of using sonar, etc.
And that’s just what I can think of with a mere human brain in five minutes.
These would tell the AI a lot about our psychology.
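Concretely, the kind of fit I have in mind for the first two items is sketched below. Everything in it is a made-up assumption: the frame rate, the fact that positions have already been converted from pixels to metres (which itself presupposes a known scale), the size of the drag term, and the noise level.

```python
# Toy sketch: recover g from per-frame positions of a falling rock.
# All inputs are fabricated for illustration; nothing comes from a real video.
import numpy as np

fps = 60.0               # assumed frame rate
t = np.arange(40) / fps  # timestamps of 40 frames (~0.65 s of fall)

# Fake "observed" fall distances: free fall plus a leading-order quadratic-drag
# correction, x(t) = (g/2) t^2 - (k/m) g^2 t^4 / 12, plus ~1 mm of noise.
g_true, k_over_m = 9.81, 1.6e-3
y = 0.5 * g_true * t**2 - (k_over_m / 12) * g_true**2 * t**4
y += np.random.default_rng(0).normal(0.0, 1e-3, t.shape)

# Fit pure free fall, y = y0 + v0*t + (g/2)*t^2; the quadratic coefficient is g/2.
coeffs = np.polyfit(t, y, 2)
print(f"estimated g: {2 * coeffs[0]:.2f} m/s^2")

# Systematic residuals growing like t^4 would be the fingerprint of air drag,
# and their size would constrain the density of the atmosphere.
residuals = y - np.polyval(coeffs, t)
print(f"largest residual: {np.abs(residuals).max():.1e} m")
```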
Still, I really wouldn’t try it, unless I’d proven this (fat chance), or it was the only way to stop the world from blowing up tomorrow anyway.
Aren’t you glad you added that disclaimer?
I’m really late here, but a few problems:
The video’s temporal and spatial resolution might be too low to allow a meaningful estimate of air resistance, especially if the camera angle doesn’t let you accurately determine the rock’s 3D shape (see the sketch after this list for a rough magnitude).
Encoding the color in RGB throws away the spectrum: many physically distinct spectra (metamers) map to the same three channel values, so the illuminant’s spectral lines can’t be recovered.
If it didn’t already have knowledge of the properties of minerals and elements, it would need to calculate them from first principles. Without looking into this specifically, I’d be surprised if that were computationally tractable, especially since the AI doesn’t know our fundamental physics or the values of the relevant constants beforehand.
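To put a rough magnitude on the first problem, here is a back-of-envelope sketch; every input (rock size, rock density, air density, drag coefficient, fall time) is an assumed plausible value rather than anything derived from an actual video.

```python
# Rough size of the drag signal for a fist-sized rock over a ~1 s fall.
import math

g = 9.81            # m/s^2
rho_air = 1.2       # kg/m^3, sea-level air (assumed)
rho_rock = 2700.0   # kg/m^3, typical granite (assumed)
r = 0.05            # m, radius of a roughly fist-sized sphere (assumed)
Cd = 0.47           # drag coefficient of a smooth sphere (assumed)
t = 1.0             # s, duration of the fall (assumed)

m = rho_rock * (4.0 / 3.0) * math.pi * r**3   # mass, ~1.4 kg
A = math.pi * r**2                            # frontal area
k = 0.5 * rho_air * Cd * A                    # quadratic-drag coefficient

fall = 0.5 * g * t**2                         # ~4.9 m fallen in vacuum
# Leading-order drag correction to the fall distance: (k/m) * g^2 * t^4 / 12
deviation = (k / m) * g**2 * t**4 / 12.0
print(f"fall: {fall:.2f} m, drag deviation: {deviation * 1000:.0f} mm")
```

Roughly a centimetre of deviation over a five-metre fall: if the fall spans a few hundred pixels of frame height, the drag signal is on the order of a single pixel, before even worrying about motion blur or the unknown shape.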