I can only assume this is a logarithmic scale, or something.
I know this image is pretty much pure applause light and no substance, but I think it might serve as a great talking point to post on Facebook for all my less rational friends.
Having seen the outcome of your plan, I think we have learned that talking points are only great when you have strictly great people talking.
Would it be considered impolite to ask what happened?
Not by me! One friend immediately replied, describing living forever as an ‘unbelievably horrible concept’. An argument developed in which he claimed to be opposed to medicine in general because it leads to overpopulation. A few friends who are LW readers and/or transhumanists joined in, and the discussion grew a bit one-sided. I tried to explain, inoffensively, that his arguments neglected the possibility that the other human problems we have (which curing aging would exacerbate) might also be solvable. I failed at the ‘inoffensive’ part; he got angry at me for presuming to know what he was thinking better than he did, and abandoned the thread. Afterwards a few of the people who’ve ‘already seen the light’ discussed some interesting points on the topic, like hard limits on energy consumption and use.
I’ll take partial blame: I didn’t work hard enough to maintain civility, in my own posts or in the thread’s atmosphere in general. I have argued with this person about life extension before and found that he pattern-matches very promptly onto the ‘typical’ opponent of the idea, the kind who immediately summons up every problem they can connect to it without any thought for plausibility or relevance.
Overall it was a bit of a disappointment, but some friends who I don’t think would otherwise have been exposed to the idea much did put ‘likes’ on a few comments, which is heartening.
I used to agree with that.
If you want it to be pithier (and more accessible?), you can just shorten “living forever” to “living.”
And switch the order, too, perhaps.
Not necessarily. If the awesome way of dying involves saving billions of people’s lives, and living forever involves humanity going nearly extinct with all of the survivors (including yourself) being tortured for eternity...
Right, so think of this graph as actually being two graphs: one of the situation you describe, and one of the situation you do not describe. Then we blend these two graphs together according to the probability of each situation occurring in order to most accurately represent the future...
Then put error bars on them for sqrt(E(awesomeness^2|living forever) - E(awesomeness|living forever)^2) and sqrt(E(awesomeness^2|dying in an awesome way) - E(awesomeness|dying in an awesome way)^2). :-)
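(If anyone wants to see what that actually computes, here’s a minimal Python sketch: the blend from the previous comment is just the probability-weighted mean of the scenarios, and the sqrt(E[X^2] − E[X]^2) expression is its standard deviation. The scenario probabilities and awesomeness scores below are made up purely for illustration, in units every bit as arbitrary as the graph’s.)

```python
import math

# Hypothetical, made-up scenarios for "living forever":
# (probability, awesomeness) pairs; probabilities sum to 1.
scenarios_living_forever = [
    (0.9, 80.0),   # things mostly go well
    (0.1, -50.0),  # the near-extinction, tortured-forever case
]

def mean_and_stddev(scenarios):
    """Blend scenarios by probability: returns E[X] and sqrt(E[X^2] - E[X]^2)."""
    mean = sum(p * x for p, x in scenarios)
    second_moment = sum(p * x * x for p, x in scenarios)
    return mean, math.sqrt(second_moment - mean ** 2)

mean, stddev = mean_and_stddev(scenarios_living_forever)
print(f"expected awesomeness: {mean:.1f} +/- {stddev:.1f}")
```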
That seems to me like something you could contrive to say about any generally-true comparison...
Yes, which is why it’s an awesome go-to test to apply to generalizations. Death, extinction, torture forever, saving billions of people!
That test is only useful if you’re interested in illustrating exceptions to the norm. The graph, I think, does a brilliant job of illustrating normalized expectations.
I would assume that most generalizations either shouldn’t be generalizations at all, or else are meant to illustrate normalized expectations. So the test seems useless unless you simply need to demonstrate that, duh, generalizations tend to have exceptions.
‘Generalizations’ is an interesting word inasmuch as it expresses nearly opposite meanings depending on the kind of person who speaks it (and, to a lesser extent, the context).
Can you expand on what you actually mean by that? I’ve always taken a generalization to mean “a broad statement that is true for the majority (but not all) of specific instances of the group”. For instance, one can generalize that humans have 2 arms, despite there being a number of exceptions and the average (mean) value being slightly less than 2.