Forgive me for latching onto the example, but how would an AI discover how to solve a Rubik’s cube? Does anyone have a good answer?
Wouldn’t the AI have to discover that it is something to be solved first? Give a kid such a puzzle and she’s likelier to put it in her mouth than even try.
Unless I’m being obtuse.
You’re right, and I think this is a mistake a lot of people make when thinking about AI—they assume that because an AI is intelligent, it must also already know a lot. Like the child, its specific knowledge (such as the fact that there is something to solve) is something it has to learn, or be taught, over time.
Curiosity could be built-in, I don’t see the problem with that.
It seems to be built-in for humans—we don’t learn to be curious, though we can learn not to be.
It could be built in, I agree. But the child is more curious about its texture and taste than about how the pieces fit together. I had to show my child a puzzle and solve it in front of her to get her to understand it.
Then she took off with it. YMMV.
Good point, though.
But as you see, there was an initial curiosity there. They may not be able to make certain leaps that lead them to things they would be curious about, but once you help them make the leap they are then curious on their own.
Also, there are plenty of things some people just aren’t curious about, or interested in. You can only bring someone so far, after which they are either curious or not.
It would be very interesting to do the same thing with an AI, just give it a basic curiosity about certain things, and watch how it develops.
What’s “curiosity”? I don’t think we can just say “just” yet, when we can’t even explain this concept to a hypothetical human-minus-curiosity. (Wanting to learn more? What does it mean to actively learn about something?)
I had the same problem.
I think it would need some kind of genetic algorithm to figure out roughly how “close” it is to the solution, then build a tree of what happens after every combination of some number of moves, and take the branch that looks closest to the solution.
It would update its estimates based on how close it is to the nearest position whose distance it already knows. For example, if it’s five moves away from something that looks about 37 moves away from finishing, then it’s about 42 moves away now.
The problem with this is that when you start it, it will have no idea how close anything is to the solution except for the solution, and there’s no way it’s getting to that by chance.
Essentially, you’d have to cheat and start by giving it almost solved Rubik’s cubes, and slowly giving it more randomized ones. It won’t learn on its own, but you can teach it pretty easily.
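For illustration, here is a rough sketch of that curriculum in Python. A generic permutation puzzle stands in for the cube (the MOVES below are made-up permutations of sticker positions, not real face turns), and the “learner” is just a bounded brute-force search, standing in for whatever is actually being taught:

```python
# Curriculum sketch on a stand-in permutation puzzle (not a real cube).
import random

N = 54                       # sticker positions on a 3x3x3 cube
SOLVED = tuple(range(N))     # solved state: every sticker where it belongs
rng = random.Random(0)

def random_perm():
    p = list(range(N))
    rng.shuffle(p)
    return tuple(p)

def inverse(p):
    inv = [0] * N
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

BASE = [random_perm() for _ in range(3)]
MOVES = BASE + [inverse(p) for p in BASE]   # include inverses, like a cube

def apply(state, move):
    return tuple(state[move[i]] for i in range(N))

def scramble(depth):
    state = SOLVED
    for _ in range(depth):
        state = apply(state, rng.choice(MOVES))
    return state

def solvable_within(state, depth):
    """Placeholder 'learner': brute-force search to a limited depth."""
    if state == SOLVED:
        return True
    if depth == 0:
        return False
    return any(solvable_within(apply(state, m), depth - 1) for m in MOVES)

# Start with nearly solved puzzles, and only scramble more once the
# current difficulty is handled reliably.
depth = 1
while depth <= 4:
    successes = sum(solvable_within(scramble(depth), depth) for _ in range(20))
    print(f"scramble depth {depth}: solved {successes}/20")
    if successes < 18:
        break
    depth += 1
```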
A less cheating-ish solution is to use some reasonable-seeming heuristic to guess how close you are to a solution. For example, you could just count the number of squares “in the right place” after a move sequence.
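As a sketch of that heuristic and a greedy lookahead built on it: the cube is abstracted to a tuple of sticker labels and a hypothetical moves dict, and the tiny toy puzzle at the end exists only to make the example runnable.

```python
from itertools import product

def in_place(state, solved):
    """Heuristic score: how many stickers are already where they belong."""
    return sum(a == b for a, b in zip(state, solved))

def greedy_step(state, solved, moves, lookahead=2):
    """Try every sequence of `lookahead` moves and return the name of the
    first move of whichever sequence leaves the most stickers in place."""
    best_score, best_first = in_place(state, solved), None
    for seq in product(moves.items(), repeat=lookahead):
        s = state
        for _name, m in seq:
            s = m(s)
        score = in_place(s, solved)
        if score > best_score:
            best_score, best_first = score, seq[0][0]
    return best_first          # None: nothing within `lookahead` moves helps

# Toy puzzle (not a real cube): six "stickers", each move cycles one half.
solved = (0, 1, 2, 3, 4, 5)
moves = {
    "L": lambda s: (s[2], s[0], s[1]) + s[3:],
    "R": lambda s: s[:3] + (s[5], s[3], s[4]),
}
state = moves["L"](moves["R"](solved))
print(greedy_step(state, solved, moves))   # "L": two more L turns undo the earlier L
```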
(First post, bear with me.. find the site very interesting :)
I do agree!
But actually I would model the problem as what is known in some circles as a closed-loop controller, and specifically as a POMDP. Then apply Real-Time Dynamic Programming, embedding a heuristic so that a rough approximation of the optimal h* can be computed without having to visit all the states.
Another way would be by means of a graphical model, and more specifically a DAG would be quite nicely suited to the problem. Apply a simulated annealing approach (Ising model!) and when you reach “thermal equilibrium” by having minimized some energy functional, you get the solution. Obviously this approach would involve learning the parameters of the model, instead of modelling the problem directly as in my first proposed approach.
Quite geeky, excuse me!
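For what it’s worth, here is a bare-bones sketch of just the simulated-annealing part of that second idea: accept worse states with probability exp(-ΔE/T), cool the temperature, and keep the best state seen. The energy and neighbour functions below are a toy stand-in (misplaced entries in a shuffled list), not an actual cube encoding or a learned graphical model.

```python
import math
import random

def anneal(state, energy, neighbor, t0=2.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: accept worse states with probability
    exp(-dE / T), cool T geometrically, and keep the best state seen."""
    rng = random.Random(seed)
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        d_e = energy(candidate) - energy(current)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            current = candidate
            if energy(current) < energy(best):
                best = current
        t *= cooling
    return best

# Toy stand-in for "the puzzle": the state is a shuffled list, a move swaps
# two entries, and the energy counts entries that are out of place.
target = list(range(20))

def energy(s):
    return sum(a != b for a, b in zip(s, target))

def swap_two(s, rng):
    i, j = rng.sample(range(len(s)), 2)
    s = list(s)
    s[i], s[j] = s[j], s[i]
    return s

start = list(range(20))
random.Random(1).shuffle(start)
result = anneal(start, energy, swap_two)
print(energy(start), "->", energy(result))   # should end at or near zero
```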
Exactly: the difficulty of solving a Rubik’s cube is that it doesn’t respond well to simple heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up, at least). In general, -humans- solve a Rubik’s cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute-force a solution much faster than it could manipulate the cube.
The more interesting question (I think) is how it figures out a model for the cube in the first place. What makes the cube a good problem is that it’s designed to match human pattern intuitions (in that we prefer the colors to match, and we quickly notice the seams that we can rotate through), but an AI has no such intuitions.
I don’t know the methods you used, but the only ones I know of have certain “steps”, and you can easily tell which step it’s on. For example, by one method, anything that’s five moves away will have all but two sides complete.
Consider how this could be tested. One would write a program that generates a virtual Rubik’s cube, and passes this on to the AI to be solved (this avoids the complexity of first having to learn how to control robotic hands). It can’t just randomly assign colours to sides, lest it end up with an unsolvable cube. Hence, the preparatory program starts with a solved cube, and then applies a random sequence of moves to it.
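A sketch of such a preparatory program, using standard face-turn notation; apply_move and solved_cube are placeholders for a real cube implementation.

```python
import random

# Standard face-turn notation: U/D/L/R/F/B, with ' for counter-clockwise
# and 2 for a half turn.
FACE_TURNS = ["U", "U'", "U2", "D", "D'", "D2", "L", "L'", "L2",
              "R", "R'", "R2", "F", "F'", "F2", "B", "B'", "B2"]

def make_scramble(n_moves, seed=None):
    rng = random.Random(seed)
    return [rng.choice(FACE_TURNS) for _ in range(n_moves)]

def scrambled_cube(solved_cube, apply_move, n_moves, seed=None):
    """Return (cube, scramble): the harness keeps the move list, so it knows
    the ground truth even though the solver is only shown the cube."""
    scramble = make_scramble(n_moves, seed)
    cube = solved_cube
    for move in scramble:
        cube = apply_move(cube, move)   # actual cube mechanics omitted here
    return cube, scramble

print(make_scramble(5, seed=0))
```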
This will almost certainly be done on the same computer as the AI is running on. A good AI, therefore, should be able to learn to inspect its own working memory, and observe other running threads on the system—it will simply observe the moves used to shuffle the cube, and can then easily reverse them if asked.
It is possible, of course, for test conditions to be altered to avoid this solution. That would, I think, be a mistake—the AI will be able to learn a lot from inspecting its own running processes (combined with the research that led to its development), and this behaviour should (in a known Friendly AI) be encouraged.
The problem with this is that the state space is so large that it cannot explore every transition, so it can’t follow transitions backwards in a straightforward manner as you’ve proposed. It needs some kind of intuition to minimize the search space, to generalize over it.
Unfortunately I’m not sure what that would look like. :(
(Wow, this was from a while back)
I wasn’t suggesting that the AI might try to calculate the reverse sequence of moves. I was suggesting that, if the cube-shuffling program is running on the same computer, then the AI might learn to cheat by, in effect, looking over the shoulder of the cube-shuffler and simply writing down all the moves in a list; then it can ‘solve’ the cube by simply running the list backwards.
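One small detail of that cheat, in standard cube notation: “running the list backwards” also means inverting each move, not just reversing the order. For instance:

```python
# Build the inverse of each face turn: R undoes R', U' undoes U, and half
# turns (R2, U2, ...) are their own inverse.
FACES = "UDLRFB"
INVERT = {}
for f in FACES:
    INVERT[f] = f + "'"
    INVERT[f + "'"] = f
    INVERT[f + "2"] = f + "2"

def undo(scramble_moves):
    """Reverse the recorded scramble: flip the order and invert every move."""
    return [INVERT[m] for m in reversed(scramble_moves)]

print(undo(["R", "U'", "F2", "R'"]))   # ['R', 'F2', 'U', "R'"]
```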
Oh I see: for that specific instance of the task.
I’d like to see someone make this AI, I want to know how it could be done.
Observe the contents of RAM as it’s changing?
I’m not 100% sure of the mechanism of said observations, but I’m assuming a real AI would be able to do things on a computer that we can’t—much as we can easily recognise an object in an image.
You’re assuming the AI has terminal access. Just because our brains are implemented as neurons doesn’t mean we can manipulate matter on a cellular scale.