In the previous post, I talked about several “anchors” that we could use to think about future ML systems, including current ML systems, humans, ideal optimizers, and complex systems.
In fact, I think we should be using all of these anchors (and any others we can think of) to reason about future ML systems. This is based on ideas from forecasting, where successful forecasters usually average over many worldviews and reference classes rather than focusing on a single reference class. However, we should also be discerning, weighting an anchor more heavily if it seems like a better match for what we want to predict.
Below, I’ll say what I personally think about most of the anchors we’ve discussed so far, by assigning a numerical “weight” to each one. These weights aren’t perfect (the actual weight I’d use depends on the particular question), but they hopefully provide a clear overall picture that is easy to agree or disagree with.
Here are the rough weights I came up with:
| Anchor | Weight |
|---|---|
| Current ML | 4 |
| Complex systems | 3 |
| Thought experiments | 2 |
| Evolution | 0.5 |
| The economy | 0.4 |
| Humans | 0.3 |
| Corporations | 0.2 |
| Biological systems | 0.2 |
| Non-human animals | 0.1 |
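To make the table a bit more concrete, here is a minimal sketch of one way the weights could be operationalized: as a normalized weighted average (linear pool) over probability estimates produced by each anchor. The `pooled_forecast` helper, the anchor keys, and the example numbers are illustrative assumptions on my part; nothing in this post commits to a particular aggregation formula, and as noted above the actual weights would shift with the question.

```python
# Hypothetical sketch: combine per-anchor probability estimates with the
# weights from the table via a normalized linear pool. The pooling rule and
# the example estimates are assumptions for illustration only.

ANCHOR_WEIGHTS = {
    "current_ml": 4.0,
    "complex_systems": 3.0,
    "thought_experiments": 2.0,
    "evolution": 0.5,
    "the_economy": 0.4,
    "humans": 0.3,
    "corporations": 0.2,
    "biological_systems": 0.2,
    "non_human_animals": 0.1,
}


def pooled_forecast(estimates, weights=ANCHOR_WEIGHTS):
    """Weighted average of per-anchor probability estimates.

    `estimates` maps anchor name -> that anchor's probability for some claim.
    Anchors without an estimate are skipped and the remaining weights are
    renormalized, so you can pool over any subset of anchors.
    """
    used = {a: p for a, p in estimates.items() if a in weights}
    total = sum(weights[a] for a in used)
    return sum(weights[a] * p for a, p in used.items()) / total


# Made-up example: three anchors give estimates for some claim about
# future ML systems.
print(pooled_forecast({
    "current_ml": 0.2,
    "complex_systems": 0.5,
    "thought_experiments": 0.7,
}))  # (4*0.2 + 3*0.5 + 2*0.7) / 9 ≈ 0.41
```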
I primarily rely on Current ML, Complex Systems, and Thought Experiments, in a 4:3:2 ratio. In particular, I assign about twice as much weight to Current ML as to Thought Experiments, but I think the opposite ratio is also defensible. However, many people seem to implicitly put almost all their weight on Current ML, or almost all of it on Thought Experiments, with something like a 5:1 or 1:5 ratio or even more extreme. I think neither of these stances is defensible, and I would be interested in anyone who disagrees writing up the case for assigning extreme weights (in either direction).
Relatedly, my last two posts were essentially an argument against a 5:1 ratio in favor of Current ML—first by arguing that Current ML often misses important developments, and second by arguing that thought experiments can sometimes catch these.[1]
Aside from this, my biggest disagreement with others is probably that I assign significant weight to the “Complex Systems” anchor, which I think most people overlook.
Finally, all anchors that correspond to a broad reference class (Current ML, Complex Systems, Thought Experiments) get significantly more weight than any anchor that is a single example (e.g. humans).[2] I give serious consideration to hypotheses generated by any of these three anchors: if one of them generates a hypothesis that I can’t strongly rule out after an hour of thought, I think there’s at least a 30% chance that it will eventually come to be supported by the other two anchors as well.
I’d be interested in others posting their relative weights, and pointing out any instances where they think I’m wrong.
[1] A later post, on the value of empirical findings, also offers an argument against a 1:5 ratio in favor of thought experiments.
[2] The only exception is that “non-human animals” gets very low weight, partly because they are hard to study and partly because I expect future systems to be more capable than humans, not less.
I find it difficult to do that in the absence of a more specific question and a more specific aspect of the anchor. For example, I think that future ML systems will probably be like humans in certain respects and unlike humans in other respects. Ditto for everything else on the list.
I think it might even be counterproductive to think along those lines—in the same way that asking me to ponder the question “On a scale of 1-10, how pro-Israel am I?” might make me actively stupider when it comes to answering factual questions about Israel. (See Scott Alexander’s blog post “Ethnic Tension and Meaningless Arguments”.)
Maybe, but I think some people would disagree strongly with this list even in the abstract (putting almost no weight on Current ML, or putting way more weight on humans, or something else). I agree that it’s better to drill down into concrete disagreements, but I think right now there are implicit strong disagreements that are not always being made explicit, and this is a quick way to draw them out.