I’m not sure why you’re so bothered by that article. There’s nothing wrong with my game theory, as far as I can tell, and I think historically, the phenomenon described must have played some role in the evolution of intelligence. So why should I retract it?
Smart players know that if they make the “smart” thing to do on predictably non-public rounds be to defect, then non-smart players will predict this even though they can’t predict which rounds are non-public; so instead they choose to make the “smart” thing (that is, the output of this “smart” decision computation) be to cooperate.
The smart players can still lose out in a case where dumb players are also too dumb to simulate the smart players, have the mistaken belief that smart players will defect, and yet know infallibly who the smart players are; but this doesn’t seem quite so much the correctable fault of the smart players as before.
But it’s only you who had in the first place the idea that smart players would defect on predictably private rounds, and you got that from a mistaken game theory in which agents only took into account the direct physical consequences of their actions, rather than the consequences of their decision computations having a particular Platonic output.
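A minimal sketch of that point, in Python. The policy functions and the toy predictor are illustrative assumptions, not anything from the article; the only claim is that a computation’s output on private rounds is visible to anyone who can run the computation, even when the rounds themselves are not:

    # Two candidate "smart" decision computations. A player who cannot
    # observe private rounds can still ask what each computation outputs
    # on a hypothetical private round.

    def smart_policy_defect(round_is_public: bool) -> str:
        # Output of a "smart" computation that defects on private rounds.
        return "C" if round_is_public else "D"

    def smart_policy_cooperate(round_is_public: bool) -> str:
        # Output of a "smart" computation that cooperates everywhere.
        return "C"

    def dumb_prediction(smart_policy) -> str:
        # The predictor never sees which rounds are private; it just runs
        # the computation on a private-round input.
        return smart_policy(round_is_public=False)

    for policy in (smart_policy_defect, smart_policy_cooperate):
        print(policy.__name__, "-> predicted private-round move:",
              dumb_prediction(policy))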
The smart players can still lose out in a case where dumb players are also too dumb to simulate the smart players, have the mistaken belief that smart players will defect, and yet know infallibly who the smart players are; but this doesn’t seem quite so much the correctable fault of the smart players as before.
As I wrote in the article and also above, I was mainly concerned about the evolution of intelligence. Wouldn’t you agree that up to now, there have been plenty of dumb players who can’t simulate the smart players? Their belief that smart players will defect is not mistaken in that case. Smart players should defect in predictably non-public rounds if they can’t be simulated, because the decision of the other player is then logically independent of their decision.
The dumb players don’t need to know much game theory, BTW. After they encounter a few smart players who defect in non-public rounds, they should learn this.
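A worked toy example of the logical-independence point above, with assumed prisoner’s-dilemma payoffs (not from the article). If the other player’s move is logically independent of yours, it cooperates with some fixed probability p whatever you do, and defection has strictly higher expected payoff for every p:

    # Assumed payoffs for the row player:
    # (my move, their move) -> my score
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    for p in (0.0, 0.5, 1.0):  # p = chance the independent opponent cooperates
        ev_coop = p * PAYOFF[("C", "C")] + (1 - p) * PAYOFF[("C", "D")]
        ev_defect = p * PAYOFF[("D", "C")] + (1 - p) * PAYOFF[("D", "D")]
        print(f"p={p}: EV(cooperate)={ev_coop}, EV(defect)={ev_defect}")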
After they encounter a few smart players who defect in non-public rounds, they should learn this.
Unless the smart players didn’t defect in non-public rounds, in which case the dumb players who can only look at their behavior wouldn’t become prejudiced against smart players, and everyone is happy.
But if some of the smart players are still causal decision theorists, and the dumb players can’t distinguish a TDT from a CDT but can distinguish a TDT from a dumb player, then your reward will be based on other people’s assumption that your decision is correlated with something that it really isn’t. Which brings us back to “the mistaken belief that smart players will defect”.
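A sketch of that pooling problem with made-up numbers. The payoffs, the CDT share, and the assumption that a TDT agent’s private-round move mirrors the dumb player’s move are all illustrative:

    # Dumb players can't tell TDT from CDT, so they best-respond to the
    # mixture. Assume: TDT agents cooperate iff the dumb player does;
    # CDT agents defect on private rounds regardless.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def dumb_best_response(cdt_share: float) -> str:
        ev_c = (1 - cdt_share) * PAYOFF[("C", "C")] + cdt_share * PAYOFF[("C", "D")]
        ev_d = PAYOFF[("D", "D")]  # mirrored or CDT defection: both give P
        return "C" if ev_c > ev_d else "D"

    for share in (0.0, 0.5, 0.9):
        print(f"CDT share {share}: dumb player plays {dumb_best_response(share)}")

With these numbers the dumb player defects against everyone smart-looking once the CDT share passes 2/3, so the TDT agents’ reward really is set by a correlation that doesn’t exist.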
Unless the smart players didn’t defect in non-public rounds, in which case the dumb players who can only look at their behavior wouldn’t become prejudiced against smart players, and everyone is happy.
But notice that this isn’t evolutionarily stable. If a mutation caused a smart player to start defecting in non-public rounds, it would have an advantage. On the other hand, smart players defecting in non-public rounds is evolutionarily stable. So either TDT also implies that smart players should play defect in non-public rounds, or TDT could never have arisen in the first place by evolution. (I’m not sure which is the case yet, but the disjunction must be true.) I conclude that “the mistaken belief that smart players will defect” isn’t really mistaken.
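A toy fitness comparison for the mutation argument, with an assumed payoff table and an assumed share of private rounds. The key premise from above is built in: partners can neither observe private rounds nor simulate the smart player, so they keep cooperating either way:

    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    PRIVATE_FRACTION = 0.3  # assumed share of rounds that are non-public

    def fitness(defects_privately: bool) -> float:
        public = (1 - PRIVATE_FRACTION) * PAYOFF[("C", "C")]
        my_private_move = "D" if defects_privately else "C"
        private = PRIVATE_FRACTION * PAYOFF[(my_private_move, "C")]
        return public + private

    print("resident (cooperates privately):", fitness(False))  # 3.0
    print("mutant (defects privately):", fitness(True))        # 3.6

The mutant scores strictly higher, which is the claimed instability.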
But notice that this isn’t evolutionarily stable. If a mutation caused a smart player to start defecting in non-public rounds, it would have an advantage.
Evolutionary stability isn’t about TDT, because organisms don’t simulate each other. You, however, are running a very small and simple computation in your own mind when you conclude “smart players should defect on non-public rounds”. But this assumes the smart player is calculating in a way that doesn’t take into account your simple simulation of them, and your corresponding reaction. So you are not using TDT in your own head here; you are simulating a “smart” CDT decision agent, and CDT agents can indeed be harmed by increased knowledge or intelligence, like being told on which rounds an Omega is filling a Newcomb box “after” rather than “before” their decision. TDT agents, however, win, unless you have mistaken beliefs about them that don’t depend on their real actions; but that’s a genuine fault in you rather than anything dependent on the TDT decision process. You’ll also suffer when the TDT agents calculate that you are not correctly computing what a TDT agent does, meaning your action is not in fact dependent on the output of their computation.
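For concreteness, the standard Newcomb arithmetic behind that last contrast (the dollar amounts and the perfectly reliable predictor are the usual stipulations, not anything new here):

    # Box A holds $1,000; box B holds $1,000,000 iff the predictor
    # foresaw one-boxing. With a reliable predictor, the agent's
    # decision type fixes the contents before the choice is "made".
    def newcomb_payout(one_boxes: bool, accuracy: float = 1.0) -> float:
        big = 1_000_000 * (accuracy if one_boxes else 1 - accuracy)
        small = 0 if one_boxes else 1_000
        return big + small

    print("one-boxer (TDT-style):", newcomb_payout(True))   # 1000000.0
    print("two-boxer (CDT-style):", newcomb_payout(False))  # 1000.0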
TDT could never have arisen in the first place by evolution
It didn’t.
Evolutionary biology built humans to have a sense of honor, which isn’t the same thing, but reflects our ancestral inability to calculate the unobserved rounds with exactitude.
TDT can arise in many ways; e.g., a CDT agent who believes they will in the future face Newcomblike problems will self-modify to use TDT for all Newcomblike problems dependent on decisions made after the instant of CDT self-modification, i.e., “use TDT for problems dependent on my decision after 9am on Tuesday and CDT for all problems dependent on decisions before then”. This is inelegant, and a simple application of the unknown meta-decision-theory wakes up, realizes this is stupid, and says “Just use TDT throughout”. A true pure CDT agent would never realize this and would just end up with an ugly and awkward decision theory in its descendants, which points up the importance of the meta-problem.
But evolutionary dynamics simply are not decision-theory dynamics. You might as well point out that no interstellar travel could arise by evolutionary biology because there’s no incremental advantage to getting halfway to another solar system.
I think my earlier comments may not have been as clear as they could be. Let me back off and try again. We should distinguish between two different questions:
1. Is my article correct and relevant within the context of the past evolution of intelligence?
2. What happens from now on?
I don’t think you’ve given any arguments against 1. Since TDT didn’t arise from evolution, and it wasn’t invented until recently, clearly TDT-related arguments aren’t relevant as far as question 1 is concerned. So again, I see no reason to retract the article.
As for 2, I have some doubts about this:
“Just use TDT throughout”
I’m trying to explore it using this puzzle. Do you have any thoughts on it?
Woah, it took me a long time to parse “Smart Losers”. The technical parts of the article seem to be correct, but as for its evolutionary relevance… In your scenario, being smart doesn’t hurt you, being known to be smart does; so it’s most advantageous to be “secretly smart”. So if your conclusions were correct, we’d probably see many adaptations aimed at concealing our intelligence from people we interact with.
So if your conclusions were correct, we’d probably see many adaptations aimed at concealing our intelligence from people we interact with.
Not if the cost of concealing intelligence was too high. Our ancestors lived in tribes with a lot of gossip. Trying to conceal intelligence would have entailed pretending to be dumb at virtually all times, which implies giving up most of the benefits of being intelligent.
Trying to conceal intelligence would have entailed pretending to be dumb at virtually all times, which implies giving up most of the benefits of being intelligent.
There would still be benefits if your model is at all accurate and there are ‘secret rounds’ in ordinary human life. Just pretend to be stupid in public and then be smart in private rounds. To frustrate this, one would need to assume that the additional smartness costs too much. (Either it is so expensive that it outweighs the gains, or the gains are minimal, so any cost outweighs them.)
It seems reasonable to me that there are private rounds in real life and that smartness is a net win.
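The tradeoff being argued in this subthread can be written out directly; every number below is an assumption for illustration only:

    PRIVATE_FRACTION = 0.3      # assumed share of interactions that are private
    SMART_EDGE_PRIVATE = 2.0    # assumed extra payoff from being smart in private
    ACTING_DUMB_COST = 1.5      # assumed payoff lost per public round of feigning

    gain = PRIVATE_FRACTION * SMART_EDGE_PRIVATE
    cost = (1 - PRIVATE_FRACTION) * ACTING_DUMB_COST
    print(f"gain={gain}, cost={cost}, net={gain - cost:+.2f}")

Concealment pays iff gain > cost; the earlier comment amounts to claiming the public acting cost dominates, and this one to claiming the private-round gain can dominate instead.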
PS again:
Don’t forget to retract: http://www.weidai.com/smart-losers.txt
Smart agents win.
Why do they have to know infallibly?
I do not think the article suggests any non-toy scenario where such situations might have reasonably arisen.
My personal favorite reason for “why are we not a more intelligent species” is that the smart ones don’t breed enough :)
So how do the smart agents win that game? It has too many plot twists for me to follow.
It does seem convoluted. Smart agents lose when they face smarter agents, when the game is rigged against them, when they are unlucky, when they play games in which brains don’t matter—and probably in numerous other cases.
That’s why we have so many bacteria on the planet. They are mega-stupid, but they reproduce quickly, and can live in a huge variety of environments. They play games where being smart is heavily penalised.
Smart agents win what? They certainly don’t win the “who has the lightest brain” competition.
Slogans are silly ;-)