It seems like you’re saying that the practical weakness of forecasters vs experts is their inability to make numerous causal forecasts. Personally, I think the causal issue is the main one, whereas you think it is that the predictions are so numerous. But they are not always numerous: sometimes you can effect big changes by intervening at a few pivot points, such as elections. And the idea that you can avoid dealing with causal interventions by conditioning on every parent is usually not practical, because conditioning on every parent/confounder means that you have to make too many predictions, whereas you can just run one RCT.
You could test this to some extent by asking the forecasters to predict more complicated causal questions. If they lose most of their edge, then you may be right.
I don’t think the capital being locked up is such a big issue. You can just invest everyone’s money in bonds, and then pay the winner their normal return multiplied by the return of the bonds.
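To spell out the arithmetic (my notation, not from the original proposal): if a trader stakes $s$ on the winning side at price $p$, their normal payout would be $s/p$; if the pooled stakes earn a gross bond return $R_b$ over the market’s lifetime, you instead pay

$$\text{payout} = \frac{s}{p} \cdot R_b,$$

so e.g. \$100 staked at $p = 0.25$ with bonds returning 4% pays $100/0.25 \times 1.04 = \$416$, roughly offsetting the opportunity cost of the locked-up capital.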
A bigger issue is that you seem to be describing only conditional prediction markets, rather than ones that truly estimate causal quantities, like P(outcome|do(event)). To see the difference, note that the economy will go down IF Biden is elected, whereas it would not be decreased much by causing Biden to be elected. The issue is that economic performance causes Biden to be unpopular to a much greater extent than Biden shapes the economy. To eliminate confounders, you need to randomise the action (the choice of president), or deploy careful causal identification strategies (such as careful regression discontinuity analysis, or controlling for certain variables, given knowledge of the causal structure of the data-generating process). I discuss this a little more here.
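A minimal simulation of this confounding story (illustrative numbers only; nothing here is calibrated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Confounder: is the economy already weak? This drives both variables.
weak_economy = rng.random(n) < 0.5

# A weak economy makes the challenger ("Biden") more likely to win,
# because voters punish the incumbent.
biden_wins = rng.random(n) < np.where(weak_economy, 0.8, 0.2)

# Future performance depends mostly on current conditions; the
# president's causal effect is small by construction.
downturn = rng.random(n) < (0.7 * weak_economy + 0.05 * biden_wins + 0.1)

# What a conditional market estimates: P(downturn | Biden wins)
print(downturn[biden_wins].mean())

# What randomising the action would estimate: P(downturn | do(Biden wins)).
# Here do(.) just sets biden_wins = 1 for everyone.
print((0.7 * weak_economy + 0.05 + 0.1).mean())
```

The conditional probability (~0.71) far exceeds the interventional one (~0.50), purely because a weak economy causes the Biden win rather than vice versa.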
I would do thumbs up/down for good/bad, and tick/cross for correct/incorrect.
Survey re AIS/LTism office in NYC
What do you want to spend most of your time on? What do you think would be the most useful things to spend most of your time on (from a longtermist standpoint)?
You say two things that seem in conflict with one another.
[Excerpt 1] If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example … To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions.
[Excerpt 2] [Suppose] that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram. EDT recommends maximizing the conditional expectation of Y, conditioned on all the inputs to X. [emphasis added]

In [1], you say that the EDT agent only cares about the statistical relationships between variables, i.e. P(V) over the set of variables V in a Bayes net (a BN that apparently need not even be causal), and nothing more.
In [2], you say that the EDT agent needs to know the parents of X. This indicates that the agent needs to know something that is not entailed by P(V), and something that is apparently causal.
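(To make this concrete: the two-node DAGs $X \to Y$ and $Y \to X$ impose no conditional-independence constraints, so every joint distribution $P(X, Y)$ is consistent with both. $P(V)$ alone therefore cannot tell the agent whether $Y$ is a parent of $X$.)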
Maybe you want the agent to know some causal relationships, i.e. those involving the parents of the decision, but not others?
Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X.
This is true for decisions that are in the support, given the assignment to the parents, but not otherwise. CDT can form an opinion about actions that “never happen”, whereas EDT cannot.
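In symbols (standard notation, not from the post): when $P(x \mid \mathrm{pa}(X)) > 0$ for each parent assignment, the adjustment formula gives

$$E[Y \mid do(X{=}x)] \;=\; \sum_{\mathrm{pa}} P(\mathrm{pa})\, E[Y \mid X{=}x, \mathrm{pa}],$$

but for an off-support action, $E[Y \mid X{=}x, \mathrm{pa}]$ conditions on a probability-zero event and is undefined, while a CDT agent equipped with the structural equations can still evaluate $E[Y \mid do(X{=}x)]$.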
Many people don’t realize how effective migraine treatments are. High-dose aspirin, triptans, and preventive medications all work really well, and can often reduce migraine severity by 50-90%.
Also, most don’t yet realise how effective semaglutide is for weight loss, because until recently weight-loss drugs were generally much less effective, or had much worse side-effects.
Balding treatments (finasteride and topical minoxidil) are also pretty good for a lot of people.
Another possibility is that most people were reluctant to read, summarise, or internalise Putin’s writing on Ukraine because they found it repugnant; they aren’t decouplers.
Off the top of my head, maybe it’s because Metaculus presents medians, and the median user neither investigates the issue much, nor trusts those who do (Matt Y, Scott A), and so just roughly follows base rates. I also suspect some wishful thinking, and that the full scale of the invasion was to some extent intrinsically surprising.
Nice idea. But if you set C at, say, 10% of the correct price, then you’re going to sell 90% of the visas on the first day far too cheaply, so you can lose almost all of the market surplus.
Yeah I think in practice auctioning every day or two would be completely adequate—that’s much less than the latency involved in dealing with lawyers and other aspects of the process. So now I’m mostly just curious about whether there’s a theory built up for these kinds of problems in the continuous time case.
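For concreteness, here’s a rough sketch of the discrete-time version: a uniform-price sealed-bid auction for each batch. All names and numbers are made up for illustration.

```python
def run_visa_auction(bids, quota):
    """Uniform-price sealed-bid auction for one batch of visas.

    bids:  list of (bidder_id, amount) collected since the last auction.
    quota: number of visas available in this batch.

    The top `quota` bidders win; everyone pays the highest losing bid
    (or the lowest winning bid, if demand doesn't exceed supply).
    """
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners, losers = ranked[:quota], ranked[quota:]
    if not winners:
        return [], 0.0
    clearing_price = losers[0][1] if losers else winners[-1][1]
    return [bidder for bidder, _ in winners], clearing_price


# Example: one two-day batch with a quota of 3 visas.
bids = [("a", 900), ("b", 1500), ("c", 700), ("d", 1200), ("e", 400)]
print(run_visa_auction(bids, quota=3))  # (['b', 'd', 'a'], 700)
```

The only cost relative to a continuous mechanism is the day or two of queueing, which, as above, is dwarfed by the rest of the process.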
My feeble attempts here.
[Question] Mechanism design / queueing theory for government to sell visas
Yes. And the transformer-based WordTune is complementary: it’s better for copyediting and rephrasing than for narrow grammatical correctness.
We do not have a scientific understanding of how to tell a superintelligent machine to “solve problem X, without doing something horrible as a side effect”, because we cannot describe mathematically what “something horrible” actually means to us...
This is similar to how utility theory (from von Neumann and others) is excellent science/mathematics despite our not being able to state what utility is. AI alignment hopes to tell us how to align an AI with a given target, not which target to aim for. Choosing the target is also a necessary task, but it’s not the focus of the field.
In terms of trying to formulate rigorous and consistent definitions, a major goal of the Causal Incentives Working Group is to analyse features of different problems using consistent definitions and a shared framework. In particular, our paper “Path-specific Objectives for Safer Agent Incentives” (AAAI-2022) will go online in about a month, and should serve to organize a handful of papers in AIS.
Exactly. Really, the title should be “Six specializations make you world-class at a combination of skills that is probably completely useless.” Productivity is a function of your skills, and the fact that you are “world-class” in a random combination of skills is only interesting if people systematically underestimate the degree to which random skills can be usefully combined. If there are reasons to believe that, then I would be interested in reading about them.
Transformer models (like GPT-3) are generators of human-like text, so they can be modeled as quantilizers. However, any quantilizer guarantees are very weak, because they quantilize with a very low q, equal to the likelihood that a human would generate that text.
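For reference, roughly following Taylor’s original definition (my notation): a $q$-quantilizer samples uniformly from the top $q$ fraction of a base distribution $\gamma$, as ranked by expected utility, and satisfies

$$\mathbb{E}_{a \sim \pi_q}[C(a)] \;\le\; \tfrac{1}{q}\,\mathbb{E}_{a \sim \gamma}[C(a)]$$

for any non-negative cost function $C$. When $q$ is as small as the probability of a particular piece of text, the $1/q$ factor makes this bound essentially vacuous.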
For past work on causal conceptions of corrigibility, you should check out this post by Jessica Taylor. It’s quite similar.