“Note that the data available to the system is the actual position and velocity measurements of the objects, rather than a video from a video camera, which would provide strictly more information, but be harder to process.”
Yes, I was pointing out the significance of this pre-processing, not trying to imply you didn’t mention it. “Would be harder to process” means they did most of the hard part before turning it over to the machine.
“just introduce a term into the Hamiltonian for energy in the temperature and velocity of the air. Air resistance would make the problem harder …”
“Just”? I’m not sure you know what that word means ;-) The air functions as a thermodynamic reservoir; you need precise equipment just to notice the change in air velocity and temperature, and even then, you’ve fallen prey to exactly the criticism I made in my original comment. Simply recognizing that temperature is relevant is itself difficult cognitive labor that you do for the machine. It can’t be evidence of the machine’s inferential capabilities except insofar as it has to account for one more variable.
And the more precise you have to be to notice this relevance, the more cognitive labor you’re doing for the machine.
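To see how much work that “just” is hiding, here is the bookkeeping in schematic form (my own sketch, not the Cornell team’s formulation):

$$
E_{\text{total}} \;=\; \underbrace{\tfrac{1}{2}m\dot{q}^{2} + V(q)}_{\text{pendulum}} \;+\; \underbrace{\int_{\text{air}} \rho\, c_v\, T \, dV \;+\; \tfrac{1}{2}\int_{\text{air}} \rho\, u^{2} \, dV}_{\text{reservoir: thermal + bulk motion}},
\qquad
\frac{d}{dt}\Bigl(\tfrac{1}{2}m\dot{q}^{2} + V(q)\Bigr) \;=\; -\,F_{\text{drag}}\,\dot{q}.
$$

The pendulum terms alone stop being an invariant the moment drag is switched on; the machine can only recover a conserved quantity if someone has already instrumented the reservoir terms, and those are exactly the terms that take precise equipment to measure.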
“… but I hereby predict that the Cornell team would be able to get their machine to work with significant air resistance. Will you email them this as a challenge?”
First, they’re going to ignore a nobody like me. But yes, I will stick my neck out on this one. If the same measurement equipment is used, the same variables recorded, and the same huge prior given to “look for invariants”, I claim their method will choke (to be precisely defined later).
Okay, maybe that’s not what you meant. You meant that if you’re going to do even more of the cognitive labor for the machine by adding on equipment that notices the variables necessary to make conservation-of-energy approaches work, then it can still find the invariant and discover the equation of motion.
But my point is, when you, the human, focus the machine’s “attention” on precisely those observations that help the machine compress its description of its data, it’s not the machine doing the cognitive labor; it’s you.
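Concretely (a toy model of my own, not their system): every sensor channel you pre-select or pre-exclude changes the size of the expression space the machine has to sweep.

```python
def count_exprs(n_vars, depth):
    """Crude count of expression trees over n_vars variables and four
    binary operators (+, -, *, /), up to a given depth -- a stand-in
    for the hypothesis space a symbolic-regression search sweeps."""
    if depth == 0:
        return n_vars                      # a bare variable
    smaller = count_exprs(n_vars, depth - 1)
    return 4 * smaller * smaller + n_vars  # op(subtree, subtree), or a leaf

# Two hand-picked channels (theta, omega) vs. an unfiltered sensor suite:
for n_vars in (2, 8, 32):
    print(n_vars, "vars:", [count_exprs(n_vars, d) for d in range(4)])
```

By depth 3 the 32-variable space is billions of times larger than the 2-variable one. Whoever cut the raw channels down to the two that matter did most of that pruning.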
“Also, what do you say about the Cambridge/Aberystwyth group?”
Short answer: ditto.
Long answer: I think the biological sciences have been poor at expressing their results in a form conducive to the kind of regularity detection that machines like the Eureka machine perform.
“The point is that if Adam or the Cornell machine can do simple stuff orders of magnitude better than humans can …”
And my point is that it flat out didn’t once you consider that the makers bypassed everything that humans had to do when discovering these laws and gave it as a neat package to the algorithm.
“… then my estimate of the probability that a ‘Superintelligence’ would be able to do hard stuff like coming up with GR as a hypothesis and noticing that it is consistent with the motion of an apple in less than a second should go up.”
Given enough processing speed, sure. But the test for intelligence would normalize for elementary processing operations. That is, the machine is more intelligent if it didn’t have to unnecessarily sweep through billions of longer hypotheses to get to the right one.
But hold on: if you truly do start from an untainted Occamian prior, you have to rule out many universes before you get to this one. In short, we don’t actually want truly general intelligence. Rather, we want intelligence with a strong prior tilted toward the workings of this universe.
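To put a rough number on “rule out many universes” (my own back-of-the-envelope, using a Solomonoff-style length prior):

$$
P(h) \;\propto\; 2^{-\ell(h)}, \qquad \#\{\,h : \ell(h) \le n\,\} \;<\; 2^{\,n+1} \quad\text{(binary encodings)},
$$

so a searcher starting from the untainted prior has exponentially many candidates to consider and discard before it reaches a law whose shortest encoding has length $n$, while a prior tilted toward this universe starts the search next door to the answer.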
“And my point is that it flat out didn’t once you consider that the makers bypassed everything that humans had to do when discovering these laws and gave it as a neat package to the algorithm.”
But it did do something faster than a human could have done. I don’t claim that it invented physics: I claim that it quickly discovered the conserved quantities for a particular system, albeit a system that was chosen in advance to be easy. But if I gave you the raw data that it had and asked you to write down a conserved quantity by hand, you would take years.
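To make “discovered the conserved quantities” concrete, here is a toy version of the invariant test (my sketch; their actual system searches expressions by genetic programming rather than scoring a fixed candidate list):

```python
import numpy as np

# Simulate a frictionless pendulum with semi-implicit Euler (which keeps
# the energy bounded), then score candidate expressions by how much they
# vary along the trajectory. A true invariant should be nearly flat.
g, length, dt, steps = 9.8, 1.0, 1e-3, 20_000
theta, omega = 1.2, 0.0
traj = np.empty((steps, 2))
for i in range(steps):
    omega -= dt * (g / length) * np.sin(theta)
    theta += dt * omega
    traj[i] = theta, omega

th, om = traj[:, 0], traj[:, 1]
candidates = {
    "0.5*om**2 - (g/L)*cos(th)": 0.5 * om**2 - (g / length) * np.cos(th),
    "th + om":                   th + om,
    "th * om":                   th * om,
}
for name, values in candidates.items():
    print(f"{name:28s} spread = {np.ptp(values):.4f}")  # max minus min
```

The energy-like expression comes out far flatter than the junk candidates. Grinding that scoring over millions of machine-generated expressions is exactly the part a computer does faster than a human armed with the raw data.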
“But it did do something faster than a human could have done.”
That’s enough to get a medal these days? ;-)
“I don’t claim that it invented physics: I claim that it quickly discovered the conserved quantities for a particular system, albeit a system that was chosen in advance to be easy. But if I gave you the raw data that it had and asked you to write down a conserved quantity by hand, you would take years.”
Okay, sure, but as long as we’re comparing feats from that baseline:
- Did the machine self-replicate?
- Did it defend itself against environmental threats?
- Did it find its own energy source?
- Did it persuade humans to grant it research funding?
Lest I be accused of being an AI goalpost mover, my point is just this: we don’t all live by our own strength. Everyone, and every machine, can do at least some narrow task very well. The problem is when you equate that narrow task with the intelligence that was necessary to get to that narrow task.
“But hold on: if you truly do start from an untainted Occamian prior, you have to rule out many universes before you get to this one. In short, we don’t actually want truly general intelligence. Rather, we want intelligence with a strong prior tilted toward the workings of this universe.”
Sure, we want to bias the machine quite strongly towards hypotheses that we believe. This would make the job of the SI easier.
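As a toy illustration (my construction, not anybody’s actual system): tilt the prior toward the true hypothesis and count how many fewer observations it takes to reach high confidence.

```python
import numpy as np

# Posterior over a grid of hypotheses about a biased coin ("the universe").
# A prior concentrated near the truth reaches 99% confidence in far fewer
# observations than a uniform, maximally "general" prior.
rng = np.random.default_rng(0)
p_true = 0.7
grid = np.linspace(0.01, 0.99, 99)
flips = rng.random(100_000) < p_true

def flips_needed(prior, threshold=0.99):
    post = prior / prior.sum()
    for n, heads in enumerate(flips, start=1):
        post = post * (grid if heads else 1.0 - grid)
        post /= post.sum()
        if post[np.abs(grid - p_true) < 0.05].sum() > threshold:
            return n
    return None

uniform = np.ones_like(grid)
tilted = np.exp(-((grid - p_true) / 0.05) ** 2)  # strong prior near the truth
print("uniform prior needs", flips_needed(uniform), "flips")
print("tilted prior needs ", flips_needed(tilted), "flips")
```

The tilt buys speed precisely because the prior already encodes knowledge about which universe you are in, which is your point and mine at once.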
Very true, but only if you can represent your knowledge in a form conducive to the SI’s Bayesian updating. At that point, however, you run into the problem of telling your SI knowledge that it couldn’t generate for itself.
Let’s say it finds it has a high prior on the equations the Cornell team derived. But, for some reason, those equations seem to be inapplicable to most featherless bipeds. Or even feathered bipeds! So it wants to go back and identify the data that would have amplified the odds it assigned to those equations. Would it know to seek out heavy, double-pinned devices and track the linkages’ x and y positions?
Would it know when the equations even apply? Or would the prior just unnecessarily taint any future inferences about phenomena too many levels above Newtonian mechanics (e.g., social psychology)?
“Would it know when the equations even apply? Or would the prior just unnecessarily taint any future inferences about phenomena too many levels above Newtonian mechanics (e.g., social psychology)?”
Good point. That’s why you don’t want to go overboard with priors. However, even human psychology has underlying statistical laws governing it.
So you have emailed them?
… no?
Ok, I will.
ok. Perhaps better not to.
>:-(