All: So far, people can solve individual CAPTCHA generation methods, but the problem I’m referring to is being able to solve any CAPTCHA that a human can. A CAPTCHA can be made arbitrarily more difficult for a computer while becoming only slightly more difficult for a human. (And of course, there’s the nagging issue of how O/B’s CAPTCHA, er, works. “But it doesn’t keep out Silas!”) Moreover, arbitrary object recognition is much more general and difficult than character recognition. Actually achieving a solution to it would involve many of the same difficulties as the Turing Test, since the identity of an object can hinge on human-specific contextual knowledge in the picture.
I confess I wasn’t aware of AIXI, but after Googling (you have to give additional keywords for it to show up, and Wikipedia doesn’t mention it, nor Solomonoff induction specifically), it appears to be the algorithm for optimal behavior when interacting with an arbitrary, unknown environment so as to maximize a utility function. So this does show how, given unbounded computation time, it is possible, via a method we understand, to identify objects.
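For concreteness, here is a LaTeX sketch of AIXI’s action-selection rule as I understand it from Hutter’s write-ups (standard notation assumed, none of it from the discussion above: the a_i are actions, the o_i r_i are observation/reward pairs, m is the horizon, U is a universal Turing machine, and l(q) is the length of candidate environment program q):

    % AIXI's choice of action at cycle k: expectimax over all possible futures,
    % with each environment hypothesis q weighted by its Solomonoff prior 2^{-l(q)}.
    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \bigl[ r_k + \cdots + r_m \bigr]
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-l(q)}

In words: at every step it brute-forces an expectimax search over every possible future, weighting each candidate environment program by the Solomonoff prior 2^{-l(q)}. That is exactly why it needs unbounded computation, and why no brain could be running it directly.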
However, it still doesn’t mean Eliezer is treating the free will/socks cases equally. If he can count the existence of an algorithm (which his brain is certainly not running, given how quickly it solves the problem) that would identify arbitrary objects as proof that he understands the image-recognition step in “believing I’m wearing socks”, I could just as well say:
“I understand why I think I have free will. Given unboundedly large but finite computing power, an AIXI program could explain to me what cognitive architecture gives the feeling of free will. Problem solved.”