whatever else you might imagine would give you a “mainline”.
As I understand it, when you “talk about the mainline”, you’re supposed to have some low-entropy (i.e. confident) view on how the future goes, such that you can answer very different questions X, Y and Z about that particular future, which are all correlated with each other, and all get (say) >50% probability. (Idk, as I write this down, it seems so obviously a bad way to reason that I feel like I must not be understanding it correctly.)
I think this is roughly how I’m thinking about things sometimes, tho I’d describe the mainline as the particle with plurality weight (which is a weaker condition than >50%). [I don’t know how Eliezer thinks about things; maybe it’s like this? I’d be interested in hearing his description.]
I think this is also a generator of disagreements about what sort of things are worth betting on; when I imagine why I would bail with “the future is hard to predict”, it’s because the hypotheses/particles I’m considering have clearly defined X, Y, and Z variables (often discretized into bins or ranges) but not clearly defined A, B, and C variables (tho they might have distributions over those variables), because if you also conditioned on those you would have Too Many Particles. And when I imagine trying to contrast particles on features A, B, and C, since they all make weak predictions we get at most a few bits of evidence to update their weights on, whereas when we contrast them on X, Y, and Z we get many more bits, and so the latter feels more fruitful to reason about.
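[To make the bits-of-evidence point concrete, here’s a toy sketch; all the hypotheses and probabilities are invented for illustration. Particles that make sharp predictions about X shift weight by whole bits when X is observed, while weak predictions about A barely move anything:]

```python
import math

# Three hypothetical "particles" over futures. Each puts a sharp
# prediction on observable X but only a vague one on observable A.
# (All numbers are made up for illustration.)
particles = {
    "world_1": {"weight": 0.40, "p_X": 0.9, "p_A": 0.55},
    "world_2": {"weight": 0.35, "p_X": 0.2, "p_A": 0.50},
    "world_3": {"weight": 0.25, "p_X": 0.5, "p_A": 0.45},
}

def update(particles, key, observed=True):
    """Bayes-update particle weights on one binary observation."""
    for p in particles.values():
        lik = p[key] if observed else 1 - p[key]
        p["weight"] *= lik
    total = sum(p["weight"] for p in particles.values())
    for p in particles.values():
        p["weight"] /= total

def bits_between(p, q):
    # log-likelihood ratio in bits: how strongly one observation
    # discriminates between two hypotheses
    return abs(math.log2(p / q))

print(bits_between(0.9, 0.2))   # ~2.17 bits separating worlds on X
print(bits_between(0.55, 0.5))  # ~0.14 bits separating them on A

update(particles, "p_X")  # observing X gives world_1 a clear plurality
```

Contrasting on X moves roughly fifteen times as many bits per observation as contrasting on A, which is the sense in which X-type features are the fruitful ones to argue about.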
But to the extent this is right, I’m actually quite confused why anyone thinks “talk about the mainline” is an ideal to which to aspire. What makes you expect that? It’s certainly not privileged based on what we know about idealized rationality; idealized rationality tells you to keep a list of hypotheses that you perform Bayesian updating over.
I mean, the question is which direction we want to approach Bayesianism from, given that Bayesianism is impossible (as you point out later in your comment). On the one hand, you could focus on ‘updating’, and have lots of distributions that aren’t grounded in reality but which are easy to massage when new observations come in, and on the other hand, you could focus on ‘hypotheses’, and have as many models of the situation as you can ground, and then have to do something much more complicated when new observations come in.
[Like, a thing I find helpful to think about here is where the motive power of Aumann’s Agreement Theorem comes from. When I say 40% A, you know that my private info is consistent with an update of the shared prior whose posterior is 40%; you then take the shared prior, update on your private info and on the fact that my private info is consistent with 40%, and announce (say) a posterior of 60% A; I then update to 48% A, which is what happens when I further condition on knowing that your private info is consistent with that update; and so on. We both have to be manipulating functions of the whole shared prior for every update!]
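[As a toy illustration of that back-and-forth, here’s a sketch of the announcement process (in the style of Geanakoplos–Polemarchakis) that makes Aumann-style agreement concrete; the states, partitions, and event below are all invented for the example:]

```python
from fractions import Fraction

# Nine equally likely states under a shared prior; both agents care
# about the event A. (All specifics are made up for illustration.)
states = set(range(9))
prior = {s: Fraction(1, 9) for s in states}
A = {0, 1, 2, 3}

# Private info: agent 1 learns the "row", agent 2 learns the "column".
part1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
part2 = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]

def posterior(info):
    """P(A | info) under the shared prior."""
    return sum(prior[s] for s in info & A) / sum(prior[s] for s in info)

true_state = 4
info1 = next(c for c in part1 if true_state in c)
info2 = next(c for c in part2 if true_state in c)
common = set(states)  # common-knowledge event; shrinks each round

for _ in range(10):
    # Agent 1 announces a posterior; agent 2 keeps only the part1
    # cells consistent with that announcement. Then roles swap. Each
    # step manipulates the whole shared prior, not a point estimate.
    p1 = posterior(info1 & common)
    common &= set().union(*(c for c in part1
                            if c & common and posterior(c & common) == p1))
    p2 = posterior(info2 & common)
    common &= set().union(*(c for c in part2
                            if c & common and posterior(c & common) == p2))
    if p1 == p2:
        break  # posteriors have converged, as the theorem guarantees
```

Note that each agent’s refinement step loops over the other agent’s entire partition, which is the “manipulating functions on the whole shared prior” part made explicit.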
For what it’s worth, I think both styles are pretty useful in the appropriate context. [I am moderately confident this is a situation where it’s worth doing the ‘grounded-in-reality’ particle-filtering approach, i.e. hitting the ‘be concrete’ and ‘be specific’ buttons over and over, and then once you’ve built out one hypothesis doing it again with new samples.]
The thing that I am confused by is the notion that you should always have a mainline, especially about something as complicated and uncertain as the future.
I don’t think I believe the ‘should always have a mainline’ thing, but I do think I want to defend the weaker claim of “it’s worth having a mainline about this.” Like, I think if you’re starting a startup, it’s really helpful to have a ‘mainline plan’ wherein the whole thing actually works, even if you ascribe basically no probability to it going ‘exactly to plan’. Plans are useless, planning is indispensable.
[Also I think it’s neat that there’s a symmetry here about complaining about the uncertainty of the future, which makes sense if we’re both trying to hold onto different pieces of Bayesianism while looking at the same problem.]
If you define “mainline” as “particle with plurality weight”, then I think I was in fact “talking on my mainline” at some points during the conversation, and basically everywhere that I was talking about worlds (instead of specific technical points or intuition pumps) I was talking about “one of my top 10 particles”.
I think I responded to every request for concreteness with a fairly concrete answer. Feel free to ask me for more concreteness in any particular story I told during the conversation.