I can try. Or at least give a sketch. (Hand-waving ahead …)
The Ants problem—if I’m understanding it correctly—is a problem of coordinated action. We have a community of ants, and the community has some goals: collecting food, taking over opposing hills, defending friendly hills. Imagine you are an ant in the community. What does rational behavior look like for you?
I think that is already enough to launch us on lots of hard problems:
What does winning look like for a single ant in the Ants game? Does winning for a single ant even make sense, or is winning completely parasitic on the community or colony in this case? Does that tell us anything about humans?
If all of the ants in my community share the same decision theory and preferences, will the colony succeed or fail? Why?
If the ants have different decision theories and/or different preferences, how can they work together? (In this case, working together isn’t very hard to describe … it’s not as though the ants fight among themselves, but we might ask what kinds of communities work well—i.e., is there an optimal assortment of decision theories and/or preferences for individuals?)
If the ants have different preferences, how might we apply results like Arrow’s Theorem, or how might we work around them? (A toy sketch of the aggregation problem follows below.)
...
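To make the Arrow’s Theorem worry a bit more concrete, here is a toy sketch. The three ants, their goals, and their preference orders are all invented for illustration; it is just the classic Condorcet-cycle profile dressed up in Ants-game vocabulary, not anything specific to the game:

```python
from itertools import combinations

# Three hypothetical ants with strict preference orders over three goals.
# The goal names and orders are invented; this is the standard cycle profile.
preferences = {
    "ant_a": ["gather_food", "raze_enemy_hill", "defend_home_hill"],
    "ant_b": ["raze_enemy_hill", "defend_home_hill", "gather_food"],
    "ant_c": ["defend_home_hill", "gather_food", "raze_enemy_hill"],
}

def majority_prefers(goal_x, goal_y):
    """True if a strict majority of ants rank goal_x above goal_y."""
    votes = sum(
        1 for order in preferences.values() if order.index(goal_x) < order.index(goal_y)
    )
    return votes > len(preferences) / 2

goals = ["gather_food", "raze_enemy_hill", "defend_home_hill"]
for x, y in combinations(goals, 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints a cycle -- food over raze, raze over defend, defend over food -- so
# pairwise majority voting gives the colony no consistent ranking to act on.
```

Nothing deep, but it shows why “just vote” isn’t automatically an answer once the ants’ preferences diverge.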
So, there’s a hand-wavy sketch of what I had in mind. But I don’t know, is it still too vague to be useful?
EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don’t think that changes the problems in principle, anyway. But maybe I’m missing something there.
The Ants problem—if I’m understanding it correctly—is a problem of coordinated action.
One of the interesting aspects of the winning entry’s post-mortem is its description of how dumb and how local the winner’s basic strategy was:
There’s been a lot of talking about overall strategies. Unfortunately, i don’t really have one. I do not make decisions based on the number of ants i have or the size of my territory, my bot does not play different when it’s losing or winning, it does not even know that. I also never look which turn it is, in the first turn everything is done exactly the same as in the 999th turn. I treat all enemies the same, even in combat situations and i don’t save any hill locations.
Other than moving ants away from my hills via missions, every move i make depends entirely on the local environment of the ant.
Interesting reading, overall.
EDIT: Another example of overthinking it: http://lesswrong.com/lw/8ay/ai_challenge_ants/56ug One wonders if the winner could understand even half those links.
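For concreteness, here is what a rule in that spirit might look like. This is purely an illustrative sketch, not a reconstruction of the winner’s bot; the feature names, the weights, and the shape of the `visible` structure are all made up:

```python
import random

# Hypothetical per-ant rule, written only to illustrate "local" decisions:
# the ant scores its four neighbouring tiles from what it can currently see.
# No global state, no turn counter, no notion of winning or losing.
DIRECTIONS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def local_score(tile, visible):
    """Score one tile using nothing but locally visible features.

    `visible` is assumed to be a dict of coordinate sets, e.g.
    {"food": {(4, 5)}, "enemy_ants": {(7, 7)}, "water": {(5, 6)}}.
    """
    if tile in visible.get("water", set()):
        return float("-inf")  # impassable terrain
    score = 0.0
    if tile in visible.get("food", set()):
        score += 10.0  # strongly prefer stepping onto adjacent food
    # mild penalty for each enemy ant within two steps of the candidate tile
    score -= 2.0 * sum(
        1
        for enemy in visible.get("enemy_ants", set())
        if abs(enemy[0] - tile[0]) + abs(enemy[1] - tile[1]) <= 2
    )
    return score + random.random()  # tiny noise so ties don't freeze the ant

def choose_move(ant_pos, visible):
    """Pick the direction whose destination tile scores best locally."""
    row, col = ant_pos
    return max(
        DIRECTIONS,
        key=lambda d: local_score((row + DIRECTIONS[d][0], col + DIRECTIONS[d][1]), visible),
    )

# Toy usage: an ant at (5, 5) with food to its north and water to its east.
print(choose_move((5, 5), {"food": {(4, 5)}, "enemy_ants": set(), "water": {(5, 6)}}))  # -> "N"
```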
Agreed. We can certainly do better than that. Unless I have a major life-event before the next AI challenge, I’ll enter and get the LW community involved in the effort.
What makes you think there’s much better to be done? Some games or problems just aren’t very deep, like Tic-tac-toe.
The winning program ignored a lot of information, and there weren’t enough entries to convince me that the information couldn’t be used efficiently.
Yes, the write-up is very interesting. But while the strategy was very local, the winner did end up with mechanisms for coordinating action between ants that otherwise follow pretty simple decision rules, especially in combat. At least, that’s the way it looks to me. Did you mean for your comment to be a criticism of what I wrote? If so, could you say a bit more?
If the ants have different decision theories and/or different preferences, how can they work together?
EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don’t think that changes the problems in principle, anyway.
What?
The ants are not even close to individuals. They’re dots. They’re dots that you move around.
The ants are not even close to individuals. They’re dots. They’re dots that you move around.
This just seems like a failure of imagination to me.
You could think of the game as just pushing around dots. But if you write a rule for pushing the dots that works for each dot and has no global constraints, then you are treating the dots like individuals with individual decision rules.
Example. On each turn, roll a fair four-sided die. If the result is ‘1’, go North. If the result is ‘2’, go South. Etc.
The effect is to push around all the dots each turn. But it’s not at all silly to describe what you would be coding here as giving each ant a very simple decision rule. Any global behavior—behavior exhibited by the colony—is due to each ant having this specific decision rule.
If you want a colony filled with real individuals, tweak the dumb rule by giving each newly generated ant its own slightly different weighting of the die. Then every ant will have a slightly different decision rule.
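Sketched in throwaway code (just an illustration of the two rules above, not a proposal for an actual bot; the ±0.1 perturbation on the weights is arbitrary):

```python
import random

DIRECTIONS = ["N", "S", "E", "W"]

def fair_die_move():
    """The dumb rule: every ant rolls the same fair four-sided die each turn."""
    return random.choice(DIRECTIONS)

def make_individual_ant():
    """The tweak: each newly generated ant gets its own slightly perturbed
    weighting of the die, so each ant carries a slightly different rule."""
    weights = [1.0 + random.uniform(-0.1, 0.1) for _ in DIRECTIONS]

    def move():
        return random.choices(DIRECTIONS, weights=weights, k=1)[0]

    return move

# The colony's behaviour is nothing over and above each ant applying its rule.
colony = [make_individual_ant() for _ in range(10)]
this_turn = [ant_move() for ant_move in colony]  # one move per ant, e.g. ['N', 'W', ...]
```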
Note that I am not trying to say anything smart about what rule(s) should be implemented for the ants, only illustrating the thought that it is not crazy—and might even be helpful—to think about the ants as individuals with individual decision rules.