E.g. it takes a merely smart person to realize that spending $10^{10} on designing new kinds of lipstick and $0 on curing aging is not a sane allocation of effort.
But if you try to actually correct that failure, you end up locked away in an underfunded lab on a shoestring budget, poor and ridiculed by the very people you’re trying to help, who carry on with the old plan while publicly deriding you as immoral and selfish.
Bitterness doesn’t help anything. If publicly declaring that you want to save mankind and asking for support doesn’t work, pivot and find some other way to achieve your goals.
If the problem is that people can’t process the complex chain of logic necessary to understand existential risk, work on IA (intelligence amplification). Start by working on certain types of brain disease whose cure you think might benefit the rest of humanity as well. For example, my brain feels tired after certain activities, and I don’t like to think. Why? Is it because I have depleted some nutrient in my brain chemistry? Can this be regulated in some fashion? This must be a chronic problem for some people, so you might be able to get funding.
Or, in other words, don’t go down that path.
Be careful not to use “bitterness is bad” as a way to indulge in anti-epistemology, e.g.
“If I thought that humanity doesn’t care about its own future, then I’d be bitter, and bitterness is bad, ergo humanity does, in fact, care about its own future”
Negative emotions are, to me, a warning sign: not to avoid some truth, but to uncover some falsehood.
That was really well stated.
Did my response look like that? I was trying to convey that you can use existing forces in society to achieve the goals you want, even if humanity doesn’t care about those goals. In the first case that meant leveraging disease prevention and then relying on the use of medical technology for self-enhancement, which has happened before (a step I elided).
The benefit of following that path depends on how much you think people’s disinterest in existential risk reduction is due to their brains shutting down when you talk to them about it, and how much is due to conflicts with their other interests. I’d guess a little of both, but probably more interests. That we have lots of smart people here suggests there is something in humanity that can become interested in existential risk reduction, given sufficient brain power. So I wouldn’t expect a vast awakening, but I think it would help the cause.
To give another example of how you might achieve your goals even if society doesn’t share them, take aging: if Aubrey de Grey could get some of his proposed techniques to work on just the skin of humans and actually keep skin healthy and young (even while we degrade on the inside), he would get mountains of cash from the many women who want to keep looking young. Admittedly he couldn’t muck around with marrow and the like (I forget his exact plans), but he should be able to do better than the current “anti-aging creams”. Then he would need to find another group of people who want to keep their muscles young (men?). And so on, piecemeal.
At no point does this rely on people wanting to live forever. Think sneakier :)
Sure, I agree with the principle of using whatever resources are available to achieve whatever your goals are. It’s just important to keep background facts “clean”, i.e. not skewed by what your current near-term goal is.
Yes, but “humanity cares about its own future” is such a vague statement that you can accurately believe it either way, depending on how you interpret it. So I don’t see anything wrong with interpreting it so as to be less bitter.
^ Anti-epistemology ^