After spending some time thinking about the result from the correct math, here are my conclusions:
You claimed that the percentage of total utility attained in the first 10 years was independent of the level of time discounting. This is clearly not the case: the fraction of total utility attained in the first T years with time-discounting factor a is 1 - exp(-aT)*(1 + aT + ½(aT)²). The deficit term exp(-aT)*(1 + aT + ½(aT)²) (the gap between that fraction and 1) underflows to zero in double precision once aT ≈ 745.13322.
For any T<∞, then, we can find a level of exponential time discounting such that we should care about the future (at least out to that time T). You provided no real justification for why we should choose an especially high level, e.g. a = 100*ln(2). This model, when calculated correctly, does not support your assertions in the general case. Getting to a more specific case which would support your assertions requires motivating a specific level of time discounting, which you did not accomplish with an arbitrary decision about “Planck-tech years.”
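A quick numerical sanity check of that closed form (a sketch in Python; it assumes, as above, a utility density proportional to t²·e^(-at)):

```python
import math

def deficit(a, T):
    # Fraction of the total utility of u(t) ∝ t² * exp(-a*t) that remains
    # after time T: exp(-aT) * (1 + aT + (aT)²/2).
    x = a * T
    return math.exp(-x) * (1.0 + x + 0.5 * x * x)

a = 0.1 * math.log(2)          # discounting with a 10-year halving time
print(1.0 - deficit(a, 10))    # fraction within the first decade: ~0.0333
print(deficit(100 * a, 10))    # with a 100x higher rate: only ~2e-27 remains
print(deficit(1.0, 746))       # aT past ~745.133: exp underflows, exactly 0.0
```

Note that the underflow threshold is where exp(-aT) itself rounds to zero (aT ≈ 1075·ln 2); just below it, the deficit is a denormal on the order of 1e-318.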
I agree with the math; I disagree that my time-discounting constant is arbitrary.
I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth. Anticipating that human-level software will eventually operate at 1000 times the speed of a human is a conservative figure that I do not believe it is necessary to make any arguments to defend. If I said 1 billion instead of 1000, I might be on shaky ground.
Also, note that with the new improved math, if I say there is no difference in subjective time, I still get 97% of my utility in 10 years. If I say there is a speedup of 100, I get all but 2*10^-27 of it in 10 years. This is worse than before!
(I’m upvoting this comment because it enlightened me, even though I take issue with part of it.)
I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth.
Anticipating that human-level software will eventually operate at 1000 times the speed of a human is a conservative figure that I do not believe it is necessary to make any arguments to defend.
The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it’s important to spend some time motivating your chosen level of time discounting.
Also, note that with the new improved math, if I say there is no difference in subjective time, I still get 97% of my utility in 10 years.
For a = 0.1*ln(2), the value of the integral from t=0..10 is ~24008. The value of the integral from t=0..∞ is ~720667. There is an order of magnitude difference between those two values. 97% of the utility comes after the first 10 years if there’s no difference in subjective time.
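Those figures can be reproduced from the antiderivative of the utility function in the main post (a sketch; it assumes the integrand is 120·t²·2^(-t/10), i.e. 120·t²·e^(-at) with a = 0.1·ln 2):

```python
import math

a = 0.1 * math.log(2)   # halving time of 10 years

def F(t):
    # Antiderivative of 120 * t**2 * exp(-a*t)
    return -120.0 * math.exp(-a * t) * (t**2 / a + 2 * t / a**2 + 2 / a**3)

first_decade = F(10) - F(0)   # ≈ 24008
total = 0.0 - F(0)            # F(t) → 0 as t → ∞, so the 0..∞ integral ≈ 720667
print(first_decade, total, first_decade / total)   # ratio ≈ 0.0333
```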
The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it’s important to spend some time motivating your chosen level of time discounting.
Yes, I agree, and I just did.
97% of the utility comes after the first 10 years if there’s no difference in subjective time.
We must be evaluating different integrals. I wrote my calculations up in the main post. I’m evaluating −120e^(-at)(t^2/a + 2t/a^2 + 2/a^3) from t=0 to whatever, where a=.1ln2. For t=0..10 this is 5805; for t=0..infinity it is 6006. What are you evaluating?
You know that with a halving time of 10 years, if you evaluate the function once every 10 years, half of the total utility would come at 10 years; so the 97% after 10 years figure doesn’t pass the sanity check.
I just plugged your expression directly into Matlab, in case there was a typo in the form that I was using, and I get the same result that I was getting before. I agree with your calculation for Y=100, though.
Edit: Wolfram Alpha’s results for t=0..10 and t=0..∞.
Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram Alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.
But I’m not going to take the time to figure it out for a post with 3 votes. This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.
This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.
For the record, I dispute your causal model of the audience’s response.
In particular, I dispute your model of the audience’s moral reasoning as to what is inevitably being approved of or disapproved of by expressions of approval or disapproval of your actions relating to the post.
I also dispute your model of the audience’s factual and moral reasoning about the gravity of the problem you suggest. I dispute specifically your model of the audience’s process of choosing to suppose that non-exponential weighting functions could be considered sufficiently indicative of potential solutions as to justify relative unconcern. (This is because I dispute your model of the utility function structures initially familiar to the audience. As part of this, I dispute your model of their descriptions of discounting functions, according to which it apparently would be impossible for them to intend to refer to a function which was to be applied on a prespecified absolute timescale, without being translated to start at an agent’s present time. If that was not your model, then I dispute your confusing apparent claim that such functions, if non-exponential, must be dynamically inconsistent.)
I am concerned that the errors in your model of the audience, if left unchallenged, will only serve to reinforce in you the apparent resentful, passive-aggressive self-righteousness which would have largely been itself the cause of the misinterpretations which led to those errors originally. This self-reinforcing effect might create needless mutual epistemic alienation.
Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.
I get my result with Matlab, Wolfram Alpha/Mathematica, Maple, Google calculator, and my TI-84+ graphing calculator. The more likely conclusion is that your math is off for the Y=1 case.
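One guess at the source of the Y=1 discrepancy (a sketch using the antiderivative quoted upthread, with the overall factor of 120 dropped, since the quoted figures 5805 and 6006 omit it): evaluating the antiderivative at t=10 alone gives the t=10..∞ tail, not the t=0..10 integral, which is the difference of the endpoint values.

```python
import math

a = 0.1 * math.log(2)

def F(t):
    # The quoted antiderivative, without the 120 coefficient:
    # -exp(-a*t) * (t²/a + 2t/a² + 2/a³)
    return -math.exp(-a * t) * (t**2 / a + 2 * t / a**2 + 2 / a**3)

print(F(10) - F(0))   # t=0..10 integral: ≈ 200, i.e. ~3.3% of the total
print(-F(10))         # ≈ 5805: this is the t=10..∞ tail, not the 0..10 integral
print(-F(0))          # ≈ 6006: the t=0..∞ total
```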
This is a critical topic, but LessWrong hates it.
I think you have neglected the presentation of the topic as a confounding variable in that analysis.
I take my empirical doubling time from the average returns on investment in the Western world; that is the correct time-discounting to use in our current environment, as computed by the collective intelligence of all the investors on Earth.
I already observed:
I do not think that interest rates are really a reflection of human temporal discounting. Why would anyone think that they were?
But I’m not going to take the time to figure it out for a post with 3 votes. This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.
This is a critical topic, but not as critical as how much karma you get on LessWrong? Please care about karma less.
Where could your mistake be? If none can be found, it is possible that there isn't one. In that case, we would have quite a crisis on our hands.