basil.halperin
Isn’t the fact that Manifold is not really a real-money prediction market very important here? If there was real money on the table, for example, it’s less likely that the 1/1/26 market would have been “forgotten”—the original traders would have had money on the line to discipline their attention.
Every time someone calls Manifold (or Metaculus) a “prediction market”, god kills an arbitrageur [even though both platforms are still great!].
Since the multiple upvotes seem to indicate that multiple people are looking for an explanation: a link
This isn’t REALLY the point of your (nice) piece, but the title provides an opportunity to plant a flag and point out: “predictably updating” is not necessarily bad or irrational. Unfortunately I don’t have time to write up the full argument right now, hopefully eventually, but, TLDR:
Bayesian rational learning about a process can be very slow...
...which leads to predictable updating...
...especially when the long-run dynamics underlying the process are slow-moving.
In macroeconomics, this has recently been discussed in detail by Farmer, Nakamura, and Steinsson in the context of “medusa charts” that seem to show financial markets ‘predictably updating’ about interest rates.
But I imagine this issue has been discussed elsewhere—this is not an ‘economic phenomenon’ per se, it’s just a property of Bayesian updating on processes with a slow-moving nonstationary component.
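To make the TL;DR concrete, here is a toy simulation of my own (not drawn from any of the papers mentioned): a Bayesian agent with a conjugate normal prior learns about a mean that has already shifted. Every individual update is fully rational, yet the posterior mean drifts in the same direction for many periods in a row, which ex post looks like “predictable updating”.

```python
# Toy illustration: sequential conjugate normal-normal updating of a mean,
# with known observation noise variance. The true mean has shifted from 0 to 1;
# observation noise is suppressed for clarity, so the belief path is deterministic.

def posterior_means(prior_mean, prior_var, obs, noise_var):
    """Return the path of posterior means under standard normal-normal updating."""
    mean, var = prior_mean, prior_var
    path = []
    for y in obs:
        k = var / (var + noise_var)   # gain on the new observation
        mean = mean + k * (y - mean)
        var = (1 - k) * var
        path.append(mean)
    return path

beliefs = posterior_means(prior_mean=0.0, prior_var=1.0,
                          obs=[1.0] * 20, noise_var=1.0)

# Successive belief revisions all have the same sign: slow, one-directional learning.
revisions = [b - a for a, b in zip([0.0] + beliefs, beliefs)]
print(all(r > 0 for r in revisions))  # True
```

(With these parameters the posterior mean after n observations is n/(n+1), so beliefs crawl toward the new level of 1 rather than jumping there.)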
Scott Sumner offers some comments here FWIW, copying and pasting:
I certainly believe the BOJ policy had the effect of boosting Japan’s real GDP, but the figure cited by Yudkowsky (“trillions of dollars”) seems excessive.
A few points:
1. In the long run, money is neutral. Hence monetary stimulus won’t impact the long run level of Japan’s RGDP or employment.
2. There’s a lot of evidence that Kuroda’s policies boosted Japan’s NGDP.
So here’s the issue. How much evidence is there that faster NGDP growth boosted Japan’s real economy (and employment) for a period of time? (Alternatively, how flexible are Japanese wages and prices?)
I’d say there is substantial evidence. Japanese stocks responded as if the policy was boosting growth. Unemployment fell to levels well below the 2006 boomlet. Also, keep in mind that growth in Japan’s working age population slowed sharply in the past decade, so trend RGDP growth is slowing substantially. Growth held up better after the 2014 tax increase than after the previous (1997?) version. Thus if Yudkowsky’s evidence was too cursory, so is this critique.
To summarize, the article makes some good points, but only shows that Yudkowsky might be wrong, not that he is wrong. I still think there’s lots of evidence that he was right and the pessimists at the BOJ were wrong, even if he exaggerates the benefits.
As an aside, he mentions my name. But people with very different views on monetary policy effectiveness—such as Paul Krugman (2018)—also see the evidence as clearly suggesting that Kuroda’s policy worked to some extent.
(There’s lots more I could say, but I’m on vacation.)
1. Very interesting, thanks, I think this is the first or second most interesting comment we’ve gotten.
2. I see that you are suggesting this as a possibility, rather than a likelihood, but I’ll note at least for other readers that—I would bet against this occurring, given central banks’ somewhat successful record at maintaining stable inflation and desire to avoid deflation. But it’s possible!
3. Also, I don’t know if inflation-linked bonds in the other countries we sample—UK/Canada/Australia—have the deflation floor. Maybe they avoid this issue.
4. Long-term inflation swaps (or better yet, options) could test this hypothesis! i.e. by showing the market’s expectation of future inflation (or the full [risk-neutral] distribution, with options).
AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
Against using stock prices to forecast AI timelines
(A confusing way of writing “probability”)
“log odds” : “probability” :: “epistemic status” : “confidence level”
(there are useful reasons to talk about log odds instead of probabilities, as in the post @Morpheus links to, but it also does seem like there’s some gratuitous use of jargon going on)
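For what it’s worth, one genuinely useful property of log odds (a standard fact, sketched here in Python): Bayes’ rule becomes additive, since posterior odds equal prior odds times the likelihood ratio.

```python
import math

def log_odds(p):
    """Natural-log odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def inv_log_odds(l):
    """Map log odds back to a probability."""
    return 1 / (1 + math.exp(-l))

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# In log-odds space, each piece of evidence simply *adds* its log likelihood ratio.
prior = 0.5
llr = math.log(4)  # evidence four times likelier under the hypothesis
posterior = inv_log_odds(log_odds(prior) + llr)
print(round(posterior, 3))  # 0.8
```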
Thanks, this gives me another chance to try to lay out this argument (which is extra-useful because I don’t think I’ve hit upon the clearest way of making the point yet):
People are made of atoms. People make choices. Nothing is inconsistent about that.
Absolutely. But “choice”, like agency, is a property of the map, not of the territory. If you fully specify the initial positions of all of the atoms making up my body, their velocities, etc., then clearly it’s not useful to speak of me making any choices. You are in the position of Laplace’s demon: you know where all my atoms are right now, you know where they will be in one second, and the second after that, and so on.
We can only meaningfully talk about the concept of choice from a position of partial ignorance.
(Here I’m speaking from a Newtonian framing, with atoms and velocities, but you could translate this to QM.)
Similarly: if you performed your experiment and made an atom-by-atom copy of me, then you know that I will make the same choice as my clone. It doesn’t make sense to talk, from your perspective, about how I should make my “choice”: what I and my clone will do is already baked in by the equations of motion for my atoms, given the assumption that you know we’re atom-by-atom copies.
(If “I” am operating from an ignorant perspective, then “I” can still talk about “making a choice” from “my” perspective.)
Does that make sense, do you see what I’m trying to say? If so, do you see any flaws?
Newcomb’s problem is just a standard time consistency problem
Here’s a related old comment from @Anders_H that I think frames the issue nicely, for my own reference at the very least:
Any decision theory depends on the concept of choice: If there is no choice, there is no need for a decision theory. I have seen a quote attributed to Pearl to the effect that we can only talk about “interventions” at a level of abstraction where free will is apparent. This seems true of any decision theory.
(He goes on to say—less relevantly for the discussion here, but again I like the framing so am recording to remind future-me—“CDT and TDT differ in how they operationalize choice, and therefore whether the decision theories are consistent with free will. In Causal Decision theory, the agents choose actions from a choice set. In contrast, from my limited understanding of TDT/UDT, it seems as if agents choose their source code. This is not only inconsistent with my (perhaps naive) subjective experience of free will, it also seems like it will lead to an incoherent concept of “choice” due to recursion.”)
Re: the perfect deterministic twin prisoner’s dilemma:
You’re a deterministic AI system, who only wants money for yourself (you don’t care about copies of yourself). The authorities make a perfect copy of you, separate you and your copy by a large distance, and then expose you both, in simulation, to exactly identical inputs (let’s say, a room, a whiteboard, some markers, etc). You both face the following choice: either (a) send a million dollars to the other (“cooperate”), or (b) take a thousand dollars for yourself (“defect”).
If we say there are two atoms in two separate rooms, with the same initial positions and velocities, we of course can’t talk about the atoms “choosing” to fly one direction or another. And yet the two atoms will always “do the same thing”. Does the movement of one of the atoms “cause” the movement of the other atom? No, the notion of “cause” is not a concept that has meaning at this layer of description of reality. There is no cause, just atoms obeying the equations of motion.
Similarly: If we say two people are the same at the atomic (or whatever) level, we can no longer speak about a notion of “choice” at all. To talk about choice is to mix up levels of abstraction.
---
Let me restate it another way.
“Choice” is not a concept that exists at the ground level of reality. There is no concept of “choice” in the equations of physics. “Deciding to make a choice” can only be discussed at a higher level of abstraction; but insisting that my twin and I run the same code is talking at a lower level, at the ground truth of reality.
If we’re talking at that lower level, there *is no notion of choice*.
---
Restated yet another way: Since the attempted discussion is at two different levels of reality, the situation is just ill-posed (a la “what happens when an unstoppable force meets an unmovable object”).
(Or as you put it: “various questions tend to blur together here – and once we pull them apart, it’s not clear to me how much substantive (as opposed to merely verbal) debate remains.”)
This is excellent, thank you! I don’t know of a solution to this problem, but FWIW it seems that webclippers somewhat break on these—e.g. (1) Instapaper doesn’t show the footnote number in the body of the text, only the footnote text at the end of the post; (2) Pocket shows the footnote number in the body of the text, but nowhere shows the footnote text itself.
Thanks again!
Interesting, thanks. An ongoing RCT, ending in September this year, looks relevant.
It’s a complement, not a substitute:
I find Anki/spaced repetition extremely useful for mastering the vocabulary of a foreign language (or, in non-language settings, getting the basics down pat)
But speaking fluently requires—un(?)-surprisingly—actually speaking
But mastering those basics is extremely useful!
As Michael Nielsen puts it: imagine trying to write a French sonnet if you have to look up the translation for every word you think of using. Mastering the rote basics is essential, in many settings, for mastery of the larger project—and that’s what spaced repetition does well.
(The titular insight seems pretty deep, thanks for sharing this)
This is not exactly central to your main argument, but I think it’s worth pointing out, since this is something I see even economists I really respect, like Scott Sumner, being imprecise about: even if markets are efficient (and I agree they pretty much are!), prices can still be predictable.
This is the standard view in academic asset pricing theory. The trick is that under the EMH, it is risk-adjusted returns that must follow a random walk, not returns themselves. I have an essay explaining this in more detail for the curious.
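As a toy illustration of that distinction (my own simulation, not from the essay): returns with a persistent expected-return component are autocorrelated, i.e. predictable, while returns net of the conditional expected return — a crude stand-in for risk adjustment — are serially uncorrelated.

```python
import random

random.seed(0)

# Toy model: the conditional expected return mu_t follows a persistent AR(1)
# process (think: a slow-moving risk premium); the realized return is mu_t
# plus unforecastable noise.
T = 20000
phi, mu_bar = 0.98, 0.005
mu, rets, mus = mu_bar, [], []
for _ in range(T):
    mu = mu_bar + phi * (mu - mu_bar) + random.gauss(0, 0.001)
    mus.append(mu)
    rets.append(mu + random.gauss(0, 0.01))

def autocorr(x):
    """First-order sample autocorrelation."""
    mx = sum(x) / len(x)
    num = sum((x[i] - mx) * (x[i + 1] - mx) for i in range(len(x) - 1))
    den = sum((v - mx) ** 2 for v in x)
    return num / den

raw = autocorr(rets)                                  # clearly positive: returns are predictable
adj = autocorr([r - m for r, m in zip(rets, mus)])    # roughly zero: "risk-adjusted" returns are not
print(round(raw, 2), round(adj, 2))
```

(Subtracting the true mu_t is of course a cheat only a simulator can perform; the point is just that return predictability per se does not contradict the EMH.)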
Very cool!
To deal with the imperfect compliance of the randomization, you could use the “instrumental variables” approach. In this case, since it is (one-sided) noncompliance in an experiment, this amounts to:
Using all of your data (i.e., not subsetting the data to the periods in which you complied with the randomization)
Dividing the resulting (“intent-to-treat”) treatment effect by the fraction of the time in which you complied (if I understand correctly, this is 0.5)
I emphasize that this is a very simple econometric technique and it does not rely on unreasonable assumptions (“Wald estimator” is another useful search term here).
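The two steps above can be sketched with made-up numbers (this is just the textbook Wald estimator, not the commenter’s actual data):

```python
# Wald estimator under one-sided noncompliance, on made-up data.
# Each record is (assigned_to_treatment, actually_treated, outcome).
data = [
    (1, 1, 7.0), (1, 1, 6.0), (1, 0, 3.0), (1, 0, 4.0),  # assigned to treatment
    (0, 0, 3.0), (0, 0, 4.0), (0, 0, 3.5), (0, 0, 3.5),  # assigned to control
]

def mean(xs):
    return sum(xs) / len(xs)

# Step 1: intent-to-treat effect -- compare outcomes by *assignment*, using all the data.
itt = (mean([y for z, d, y in data if z == 1])
       - mean([y for z, d, y in data if z == 0]))

# Step 2: divide by the compliance rate among those assigned to treatment
# (one-sided noncompliance: nobody assigned to control gets treated).
compliance = mean([d for z, d, y in data if z == 1])
late = itt / compliance

print(itt, compliance, late)  # 1.5 0.5 3.0
```

So an intent-to-treat effect of 1.5 with 50% compliance scales up to an estimated effect of 3.0 for the periods that actually complied.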