Two things: 1) A medium-sized correction, and 2) a clarification of something that wasn’t clear to me at first.
1) The correction (more of an expansion of a model to include a second-order effect) is on this bit:
simplicio: Ah, I’ve heard of this. It’s called a Keynesian beauty contest, where everyone tries to pick the contestant they expect everyone else to pick. A parable illustrating the massive, pointless circularity of the paper game called the stock market, where there’s no objective except to buy the pieces of paper you’ll think other people will want to buy.
cecie: No, there are real returns on stocks—usually in the forms of buybacks and acquisitions, nowadays, since dividends are tax-disadvantaged. If the stock market has the nature of a self-fulfilling prophecy, it’s only to the extent that high stock prices directly benefit companies, by letting the company get more capital or issue bonds at lower interest. If not for the direct effect that stock prices had on company welfare, it wouldn’t matter at all to a 10-year investor what other investors believe today. If stock prices had zero effect on company welfare, you’d be happy to buy the stock that nobody else believed in, and wait for that company to have real revenues and retained assets that everyone else could see 10 years later.
simplicio: But nobody invests on a 10-year horizon! Even pension companies invest to manage the pension manager’s bonus this year!
visitor: Surely the recursive argument is obvious? If most managers invest with 1-year lookahead, a smarter manager can make a profit in 1 year by investing with a 2-year lookahead, and can continue to extract value until there’s no predictable change from 2-year prices to 1-year prices.
It’s hard to see how 10-year time horizons could be common enough to overcome the self-fulfilling-prophecy effect in the entire stock market, when a third of all companies will be gone or taken over in 5 years. :p
We can edit the model to account for this in a couple of ways. But this depends on whether investors are killing companies, and what the die-off rate is for larger, Fortune 500 companies. As I understand it, the big companies mostly aren’t optimizing for long-term survival, but there are a few that are. I’d expect most to be optimizing for expected revenue at the expense of gambler’s ruin, especially because the government subsidizes risk-taking by paying debts after bankruptcy.
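To make that worry concrete, here’s a quick back-of-the-envelope sketch in Python. The constant, independent per-year exit rate is my own simplifying assumption, fitted only to the “third gone in 5 years” figure, and it ignores that an acquisition usually still pays shareholders something:

```python
# Back-of-the-envelope sketch (my numbers and assumptions, not from the book):
# if roughly a third of companies are gone or taken over within 5 years, what
# does that imply for a 10-year holding period? Assumes a constant, independent
# per-year exit rate.

five_year_survival = 2 / 3                            # ~1/3 gone in 5 years
annual_exit_rate = 1 - five_year_survival ** (1 / 5)  # implied yearly exit rate
ten_year_survival = (1 - annual_exit_rate) ** 10      # = (2/3) ** 2

print(f"implied annual exit rate: {annual_exit_rate:.1%}")                    # ~7.8%
print(f"chance a firm is still independent in 10 years: {ten_year_survival:.1%}")  # ~44%
```

So even on these generous assumptions, the patient 10-year investor only gets to be vindicated on something like 44% of today’s firms.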
(I’m not necessarily against bankruptcy though, as I understand that it makes companies much more willing to do business with each other, since they know they’ll get paid.)
I don’t know the details, but I’d lean a lot further toward the full Simplicio position than the chapter above does. That is, markets really are at least partly a Keynesian beauty contest / self-fulfilling prophecy.
2) Also, this section was confusing for me at first:
When the Red politicians do something that Red-haters really dislike, that gives the Blue politicians more leeway to do additional things that Red-haters mildly dislike, which can give the Red politicians more leeway of their own, and so the whole thing slides sideways.
simplicio: Looking at the abstract of that Abramowitz and Webster paper, isn’t one of their major findings that this type of hate-based polarization has increased a great deal over the last twenty years?
cecie: Well, yes. I don’t claim to know exactly why that happened, but I suspect the Internet had something to do with it.
In the US, the current two parties froze into place in the early twentieth century—before then, there was sometimes turnover (or threatened turnover). I suspect that the spread of radio broadcasting had something to do with the freeze. If you imagine a country in the pre-telegraph days, then it might be possible for third-party candidates to take hold in one state, then in nearby states, and so a global change starts from a local nucleus. A national radio system makes politics less local.
Let me make sure I understand, by stating the model explicitly: before effective mass communication, we were polarized locally but not so much nationally. You might have green and purple tribes in one state, and orange and yellow tribes in another. Now, as society becomes less regional, all these micro-tribalisms are aligning. I’m envisioning this as a magnetic field orienting thousands of tiny domains on a floppy disk, flipping them from randomized to aligned.
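Here’s a tiny simulation of that magnetic-domain picture, just to check that the analogy behaves the way I’m imagining; the update rule and all the numbers are invented for illustration, not taken from the book:

```python
# Toy illustration of the "magnetic field" picture (my own sketch, not a claim
# about the actual mechanism): many regions start with random local alignments;
# when a region updates, it tends to copy the national majority, nudged
# slightly by a shared national "field" (broadcast media, etc.). Local
# randomness then self-reinforces into near-total global alignment.

import random

random.seed(0)
regions = [random.choice([-1, 1]) for _ in range(1000)]  # random local alignments
national_field = 0.1   # small shared nudge from national media
update_chance = 0.2    # how often a region reconsiders its alignment per step

for _ in range(50):
    majority = sum(regions) / len(regions)  # current national average, in [-1, 1]
    for i in range(len(regions)):
        if random.random() < update_chance:
            # probability of flipping to +1 grows with the national majority
            p_up = min(max(0.5 + 0.5 * majority + national_field, 0.0), 1.0)
            regions[i] = 1 if random.random() < p_up else -1

aligned = sum(1 for r in regions if r == 1) / len(regions)
print(f"fraction aligned with the national field after 50 steps: {aligned:.0%}")
```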
visitor: Maybe it’s naive of me… but I can’t help but think… that surely there must be some breaking point in this system you describe, of voting for the less bad of two awful people, where the candidates just get worse and worse over time.
Ah, I understand what you were getting at with the leeway model now. To state the axioms it’s built from explicitly: coalitions form mainly around a common outgroup. (Look at the Robbers Cave experiment. Look at studies showing that shared dislikes build friendships faster than shared interests.)
So, if we vote mainly based on popularity contests about which politician appears to have the highest tribal overlap with us, rather than on policy, then the best way for politicians to signal that they are in our ingroup is to be offensive to our outgroup. That’s what leads to a signaling dynamic where politicians just get more and more hateful.
It’s not clear what the opposing forces are. There must be something, or we’d instantly race to the bottom over just a couple of election cycles. Maybe politicians are just slow to adopt such a repulsive strategy? Maybe, as Eliezer suggests, it’s that politicians used to have to balance being maximally repulsive to lots of tiny local factions, but are now free to be maximally repulsive to one of two large, fairly unified outgroups?
One relevant factor from the dialogue is that the Overton window limits politicians’ ability to creatively offend each other; “serious” policies and behaviors will tend to be relatively mainstream, traditional, and non-outrageous.
Thanks. The Overton Window stuff was mainly about why First Past The Post might be stuck in metaphorical molasses, and I hadn’t generalized the concept to other things yet.
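To state that generalization concretely, here’s a toy model (entirely my own illustration, not from the dialogue) where the Overton window acts as the brake on the offensiveness arms race:

```python
# Toy model of the worry (my own sketch): each candidate picks an
# "offensiveness toward the outgroup" level. Offensiveness wins tribal-loyalty
# votes, but past the Overton window it reads as "not a serious candidate" and
# costs more than it gains. Without that brake the best response maxes out;
# with it, escalation stalls at the edge of the window.

def net_votes(offensiveness, overton_limit=None, backlash=3.0):
    gain = offensiveness  # enthusiasm from signaling ingroup loyalty
    if overton_limit is not None and offensiveness > overton_limit:
        gain -= backlash * (offensiveness - overton_limit)  # "unserious" penalty
    return gain

def best_offensiveness(overton_limit):
    return max(range(11), key=lambda level: net_votes(level, overton_limit))

print("no Overton window:   best level =", best_offensiveness(None))  # 10 (maxed out)
print("Overton window at 4: best level =", best_offensiveness(4))     # 4 (stalls)
```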
Side note: this also gives an interesting glimpse into what it feels like from the inside to have one’s conceptual framework become more interconnected. Tools and mental models can sit happily side by side without interacting, even while you’re explicitly wondering about a gap in one model that could be filled by another tool or model you already know.
It takes some activation energy (in the form of Actually Trying, i.e. thinking about it and only it for 5+ minutes by the clock), and then maybe you’ll get lucky enough to try the right couple of pieces in the right geometry, and get a model that makes sense on reflection.
This suggests that re-reading the book later might be high-value, since it would help increase the cross-linking in my Bayesian net or whatever it is our brains think with.