I think there may be an analogous pair of statements for contrarian viewpoints.
The “Strong anti-contrarian hypothesis”: the world contains no severe commons problems and is generally being run rationally and well.
The “Weak anti-contrarian hypothesis”: there are no opportunities for an individual or small group to actually benefit from (or even stand a non-negligible chance of succeeding at) solving the severe commons problems and irrationalities that we do have.
E.g., it takes merely a smart person to realize that spending $10^10 on designing new kinds of lipstick and $0 on curing aging is not a sane allocation of effort.
But if you try to actually correct that failure, you end up locked away in an underfunded lab on a shoestring budget, poor and ridiculed by the very people you’re trying to help, who carry on with the old plan while publicly deriding you as immoral and selfish.
Bitterness doesn’t help anything. If publicly declaring that you want to save mankind and asking for support doesn’t work, pivot and find some other way to achieve your goals.
If the problem is that people can’t process the complex chain of logic necessary to understand existential risk, work on IA (intelligence amplification). Start by working on fixing certain types of brain disease that you think might also benefit the rest of humanity. For example, my brain feels tired after certain activities, and then I don’t like to think. Why? Is it because I have depleted some nutrient in my brain chemistry? Can this be regulated in some fashion? This must be a chronic problem for some people, so you might be able to get funding.
Or, in other words, don’t go down this path.
Be careful not to use “bitterness is bad” as a way to indulge in anti-epistemology, e.g.:
“If I thought that humanity doesn’t care about its own future, then I’d be bitter, and bitterness is bad, ergo humanity does, in fact, care about its own future”
To me, negative emotions are a warning sign: not a cue to avoid some truth, but a cue to uncover some falsehood.
That was really well stated.
Did my response look like that? I was trying to convey the idea that you can use existing factors in society to achieve the goals you want, even if humanity doesn’t care about those goals. In the first case, that meant leveraging disease prevention and then relying on the precedent of medical technology being adopted for self-enhancement (a step I elided).
The benefit of following that path depends on how much you think people’s lack of interest in existential risk reduction is due to their brains shutting down when the topic comes up, and how much is due to conflicts with their other interests. I’d guess a little of both, but probably more the interests. That we have lots of smart people here suggests there is something in humanity that can become interested in existential risk reduction, given sufficient brain power. So I wouldn’t expect a vast awakening, but I think it would help the cause.
To give another example of how you might achieve your goals even if society doesn’t share them, take aging: if Aubrey de Grey could get some of his proposed techniques to work on just the skin of humans and actually keep skin healthy and young (even while we degrade on the inside), he would get mountains of cash from the many women who want to keep looking young. Admittedly he couldn’t muck around with marrow and such (I forget his exact plans), but he should be able to do better than the current “anti-aging creams”. Then he needs to find another group of people who want to keep their muscles young (men?), and so on, piecemeal.
At no point relying on people wanting to live forever. Think sneakier :)
Sure, I agree with the principle of using whatever resources are available to achieve whatever your goals are. It’s just important to keep background facts “clean”, i.e. not skewed by what your current near-term goal is.
Yes, but “humanity cares about its own future” is such a vague statement that you can accurately believe it either way, depending on how you interpret it. So I don’t see anything wrong with interpreting it so as to be less bitter.
^ Anti-epistemology ^
Right, and my position is the strong pro-contrarian hypothesis: there are visibly countless opportunities for even extremely (but boundedly) irrational individuals to benefit from solving commons problems. Since almost no one takes them, almost no one is extremely-but-boundedly irrational, even for an extremely permissive bound; almost everyone is more irrational than that.
One of my best data points is that so few people did the obvious and invested in Buffett once he had the best track record of any investor, 35 years ago. With some leverage, any such person who started with a reasonable stake could be a billionaire today; and if many such people had existed, he would have been swamped with funds and unable to continue to outperform. Why would rational people who hand their money to a money manager give it to one who didn’t have the best, or nearly the best, track record? Yes, there are reasons, but not plausibly enough to account for the number of people who didn’t buy Berkshire.
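A quick back-of-the-envelope check of that compounding claim (a minimal sketch: the $100,000 starting stake, the roughly 20% historical annualized return for Berkshire, and the assumption that leverage nets out to about 30% per year are all illustrative figures, not from the comment above):

```python
# Rough compounding check for the Buffett/Berkshire claim.
# Assumed figures (illustrative, not from the discussion):
#   starting stake: $100,000
#   Berkshire's historical annualized return: ~20%
#   "some leverage": ~30%/yr net of borrowing costs

def future_value(principal: float, annual_return: float, years: int) -> float:
    """Compound `principal` at `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

stake = 100_000
print(f"Unlevered, 20%/yr for 35 yrs: ${future_value(stake, 0.20, 35):,.0f}")
print(f"Levered,   30%/yr for 35 yrs: ${future_value(stake, 0.30, 35):,.0f}")
# Unlevered: ~$59 million; levered: ~$970 million.
# So "billionaires today" requires the leverage (or a larger starting
# stake), consistent with the comment's qualifier.
```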
Do you have any theories about why so many people didn’t invest in Buffett?
Any chance that if Buffett had been swamped with money, he would have rethought his strategy and come up with something useful for the changed circumstances?
If you think of any other amazingly good investment opportunities like that, let me know ;-)
In fact, what opportunities are there for a high-rationality person to both solve a commons problem and benefit from it? Smart speculation and venture capital are the ones I can think of, but for those you have to already be a millionaire.
For myself, to even get to that level, I’m going to have to go slug it out in finance for a decade.