The revised math shows that the percentage of total utility within T years depends on the level of time discounting. Because your conclusion comes directly from that result, I think it’s important to spend some time motivating your chosen level of time discounting.
Yes, I agree, and I just did.
97% of the utility comes after the first 10 years if there’s no difference in subjective time.
We must be evaluating different integrals. I wrote my calculations up in the main post. I’m evaluating −120e^(-at)(t^2/a + 2t/a^2 + 2/a^3) from t=0 to whatever, where a = 0.1·ln 2. For t=0..10 this is 5805; for t=0..∞ it is 6006. What are you evaluating?
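(For anyone who wants to reproduce the comparison locally, here is a minimal Python sketch. The integrand 120·t^2·e^(-at) is my reading of what the quoted antiderivative corresponds to, and it may not match the main post’s actual utility function; the script just cross-checks the closed form against numerical integration.)

    # Cross-check the quoted closed-form antiderivative against numerical integration.
    # Assumption: the integrand is 120 * t^2 * exp(-a*t), the function whose
    # antiderivative is the expression quoted above; the main post may differ.
    import numpy as np
    from scipy.integrate import quad

    a = 0.1 * np.log(2)  # value halves every 10 years

    def F(t):
        # Antiderivative quoted in the comment above.
        return -120 * np.exp(-a * t) * (t**2 / a + 2 * t / a**2 + 2 / a**3)

    def integrand(t):
        return 120 * t**2 * np.exp(-a * t)

    for T in (10.0, np.inf):
        closed_form = (F(T) if np.isfinite(T) else 0.0) - F(0)  # F(t) -> 0 as t -> inf
        numeric, _err = quad(integrand, 0, T)
        print(f"upper bound {T}: closed form {closed_form:.1f}, quad {numeric:.1f}")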
You know that with a halving time of 10 years, if you evaluate the function once every 10 years, half of the total utility would come from the first 10 years; so the figure of 97% after the first 10 years doesn’t pass the sanity check.
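(Presumably the reasoning here: with a 10-year halving time, sampling once every 10 years weights the samples 1, 1/2, 1/4, ..., which sum to 2, so the first sample alone accounts for half of the total; on that picture at most roughly half of the discounted utility can come after the first 10 years. Note that this sanity check assumes the undiscounted stream is roughly constant rather than growing.)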
I just plugged your expression directly into Matlab, in case there was a typo in the form that I was using, and I get the same result that I was getting before. I agree with your calculation for Y=100, though.
Edit: Wolfram Alpha’s results for t=0..10 and t=0..∞.
Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram Alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.
But I’m not going to take the time to figure it out for a post with 3 votes. This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.
This is a critical topic, but LessWrong hates it. Matthew 7:6 comes to mind.
For the record, I dispute your causal model of the audience’s response.
In particular, I dispute your model of the audience’s moral reasoning as to what is inevitably being approved of or disapproved of by expressions of approval or disapproval of your actions relating to the post.
I also dispute your model of the audience’s factual and moral reasoning about the gravity of the problem you suggest. I dispute specifically your model of the audience’s process of choosing to suppose that non-exponential weighting functions could be considered sufficiently indicative of potential solutions as to justify relative unconcern. (This is because I dispute your model of the utility function structures initially familiar to the audience. As part of this, I dispute your model of their descriptions of discounting functions, according to which it apparently would be impossible for them to intend to refer to a function which was to be applied on a prespecified absolute timescale, without being translated to start at an agent’s present time. If that was not your model, then I dispute your confusing apparent claim that such functions, if non-exponential, must be dynamically inconsistent.)
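(A toy illustration of the distinction I take to be at issue, using a hyperbolic weighting w(d) = 1/(1 + d) purely as an example: a non-exponential discount applied relative to the agent’s present time can reverse its preferences as time passes, whereas a weighting fixed to a prespecified absolute timescale ranks options the same way at every evaluation time.)

    # Toy example: agent-relative, non-exponential discounting can reverse preferences.
    # w(d) = 1/(1 + d) is a stand-in hyperbolic weighting, not anyone's actual proposal.
    def value(reward, event_time, now, w):
        return reward * w(event_time - now)

    w = lambda d: 1.0 / (1.0 + d)

    # Option A: 10 utils at year 10.  Option B: 15 utils at year 11.
    for now in (0, 10):
        a_val = value(10, 10, now, w)
        b_val = value(15, 11, now, w)
        print(now, round(a_val, 2), round(b_val, 2),
              "prefer B" if b_val > a_val else "prefer A")
    # At year 0 the agent prefers B (1.25 > 0.91); at year 10 it prefers A
    # (10.0 > 7.5): a preference reversal, i.e. dynamic inconsistency.
    # A weighting w(t) fixed to calendar time gives the same ranking at both times.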
I am concerned that the errors in your model of the audience, if left unchallenged, will only serve to reinforce in you the apparent resentful, passive-aggressive self-righteousness which would have largely been itself the cause of the misinterpretations which led to those errors originally. This self-reinforcing effect might create needless mutual epistemic alienation.
Neat! I enter it differently, but still get the same result. It seems either my math is wrong, or Wolfram Alpha is wrong. Since Wolfram Alpha agrees with me for Y=100 while disagreeing with me for Y=1, I think my math is probably right, and something funny happens with Wolfram Alpha for the Y=1 case.
I get my result with Matlab, Wolfram Alpha/Mathematica, Maple, Google calculator, and my TI-84+ graphing calculator. The more likely conclusion is that your math is off for the Y=1 case.
This is a critical topic, but LessWrong hates it.
I think you have neglected the presentation of the topic as a confounding variable in that analysis.
This is a critical topic, but not as critical as how much karma you get on LessWrong? Please care about karma less.
Where could your mistake be? If it is nowhere to be seen, it is possible that there isn’t one. In that case we have quite a crisis here.