I assign a decent probability to this sequence (of which I think this is the best post) being the most important contribution of 2022. However, I am really not confident of that, and I feel a bit stuck on how to figure out where to apply the ideas in this sequence and how to confirm their validity.
Despite its abstract nature, I think that if there are indeed arguments for doing something closer to Kelly betting with one's resources, even in the absence of logarithmic returns to investment, then that would have huge effects on how I think about my own life plans, and about how humanity should allocate its resources.
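To make that contrast concrete, here is a minimal simulation sketch (my own illustration, not taken from the sequence; the bet parameters `p` and `b`, and the `simulate` helper, are assumptions for the example). On a repeated favorable bet, a Kelly bettor compounds steadily, while a naive linear-EV maximizer, who stakes everything each round, almost surely goes broke:

```python
# Minimal sketch: Kelly betting vs. naive linear-EV maximization on a
# repeated favorable bet. All parameters are illustrative assumptions.
import random

p, b = 0.6, 1.0                   # win probability, net odds (win b per unit staked)
kelly_fraction = p - (1 - p) / b  # standard Kelly criterion: f* = p - q/b

def simulate(fraction, rounds=1000, seed=0):
    """Compound wealth multiplicatively, staking `fraction` of it each round."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = fraction * wealth
        wealth += stake * b if rng.random() < p else -stake
    return wealth

print("Kelly fraction:", kelly_fraction)          # 0.2
print("Kelly bettor:", simulate(kelly_fraction))  # typically grows large
print("All-in bettor:", simulate(1.0))            # one loss wipes out everything
```

The all-in bettor maximizes expected wealth on each individual round, yet is ruined with probability 1 as rounds accumulate; that is the kind of tension between additive and multiplicative evaluation that the sequence probes.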
Separately, I also think this sequence is pushing on a bunch of important seams in my model of agency and utility maximization, in a way that I expect to become relevant to understanding the behavior of superintelligent systems, though I am even less confident of this than of the rest of this review.
I do feel a sense of sadness that I haven't seen more built on the ideas of this sequence, or seen people give their own take on it. I certainly feel I would benefit a lot from seeing how the ideas in this sequence landed with people, and I would appreciate help figuring out the implications of the proof sketches outlined here.
+1 on the sequence being one of the best things of 2022.
You may enjoy an additional, somewhat different take on this from population/evolutionary biology (and here). (To translate between the maps, you can think of yourself as a population of selves. Or, in the opposite direction, from a gene-centric perspective it obviously makes sense to think of a population as a population of selves.)
Part of the irony here is that evolution landed on the broadly sensible solution (geometric rationality). However, after almost everyone doing the theory got somewhat confused by the additive, linear expected-value rationality maths, what most animals, and often humans at the S1 level, actually do got interpreted as ‘cognitive bias’, in the spirit of assuming that obviously stupid evolution could not figure out linear argmax-over-utility algorithms in a few billion years.
I guess the lack of engagement is caused by:
- the relation between the ‘additive’ and ‘multiplicative’ pictures being deceptively simple in formal terms;
- the conceptual understanding of what's going on and why being quite tricky. One reason, I guess, is that our S1 / brain hardware runs almost entirely in the multiplicative / log world, while people train their S2 understanding on the linear, additive picture; as Scott explains, the maths formalism fails us.
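A minimal numerical sketch of that additive vs multiplicative point, with multipliers of my own choosing (not from the sequence): a gamble can look good in the additive picture, with a positive arithmetic-mean return, while in the multiplicative / log world that compounding wealth actually lives in, it is ruinous, with a geometric mean below 1:

```python
# Minimal sketch: the arithmetic mean says "take the gamble", while the
# geometric mean (the log world) says it destroys wealth. Multipliers
# are illustrative assumptions.
import math
import random

up, down = 1.5, 0.6              # wealth multipliers, each with probability 1/2
arith_mean = (up + down) / 2     # 1.05 per round: additive EV looks positive
geo_mean = math.sqrt(up * down)  # ~0.949 per round: typical wealth shrinks

rng = random.Random(0)
wealth = 1.0
for _ in range(10_000):
    wealth *= up if rng.random() < 0.5 else down

print(f"arithmetic mean per round: {arith_mean:.3f}")
print(f"geometric mean per round:  {geo_mean:.3f}")
print(f"wealth after 10k rounds:   {wealth:.3e}")  # collapses toward zero
```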