What actual choice is informed by considering this decision problem? What motivates posing it? (I don’t know, which is why I originally ruled that out as a possible interpretation and suggested a less straightforward interpretation that involved an actual choice.)
Consider economic development, population growth, technological development, and progress in basic science. My intuition is that all of these things are good, but that seems at odds with this analysis. If the new analysis is right, that would cause me to reconsider those valuations, which are relevant to questions like “how useful is the science I do?” or “how useful is improving the pace of scientific progress in general?”
I think your interpretation at the end is correct.
OK.
Your conclusion still leaves me confused. In any given choice considered by itself, the scale of value also doesn’t make sense; you need to compare with something outside that choice to say that the difference between the available options is insignificant. In a decision problem, you zoom in rather than give up. So what is the difference between today and tomorrow insignificant in comparison with? Any personal-level change is much smaller, and likely also much smaller than predictable differences in the world, if I had bothered to find them. So to make that judgment, it seems necessary to involve something like the (low) expected impact of your actions on progress, and at that point I lose track of the (hypothetical) argument.
The conclusion is that the effects of progress are small compared with anything that has an appreciable effect on the future (for someone with aggregative, time-insensitive values). If we break down an action as the sum of two changes—one parallel to progress, one orthogonal to it—the effects of the orthogonal part are typically going to be much larger than the effects of the parallel part.
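To spell that decomposition out (my notation, not anything from the post): write the change induced by an action as

$$\Delta = \Delta_{\parallel} + \Delta_{\perp},$$

where $\Delta_{\parallel}$ amounts to speeding up or slowing down the whole trajectory of progress and $\Delta_{\perp}$ is everything else. To first order $V(\Delta) \approx V(\Delta_{\parallel}) + V(\Delta_{\perp})$, and for an aggregative, time-insensitive value function $V$, merely shifting the trajectory in time contributes almost nothing, so $V(\Delta_{\parallel}) \approx 0$ and $V(\Delta) \approx V(\Delta_{\perp})$.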
Originally I ended with a discussion of x-risk reduction, but it became unwieldy and I didn’t want to put the time in. Perhaps I should end with a link to some discussion of future-shaping elsewhere.
Maybe it would help if you cited Nick Bostrom’s differential technological development and Luke Muehlhauser and Anna Salamon’s differential intellectual progress, and explained how your idea relates to them?
It seems like you’re drawing the same conclusion as they do, though perhaps through a new argument, so it’s confusing that you don’t cite them. It’s also unclear to me whether the division of changes into V1, V2, V3 in that post is a useful one. I think the classification you gave in a later post makes a lot more sense (aside from the missing “philosophical progress,” which you’ve since added).