(Also, if anybody knows or can estimate, are the gender ratios similar in the relevant areas of academia?)
All male biased as far as I know. (Math, philosophy, AI/CS)
Typo. Thanks for pointing it out.
I assign a 99.9% probability to there being more male readers than female readers of LW. The most recent LW meetup that I attended had a gender ratio of roughly 20:1 male:female.
Males who feel that they are competing for a small pool of females will attempt to gain status over each other, diminishing the amount of honest, rational dialogue, and replacing it with oneupmanship.
Hence the idea of mixing LW—in its current state—with dating may not be good.
However, there is the possibility of re-framing LW so that it appeals more to women. Perhaps we need to re-frame saving the world as a charitable sacrifice?
I would love to know what the gender ratio looks like within the atheist movement; I think we should regard that as a bound on what is achievable.
a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely
Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.
“Singleton
A world government or superpower imposes a population control policy over the whole world.”
it has to be stable essentially forever. It seems to me that no human government could achieve this, because of the randomness of human nature. Therefore, only an AI would suffice.
I’ve observed far more clannishness among children than political perspicuity
but what about the relative amounts in children vs adults?
A priori we should expect children to be genuine knowledge seekers, because in our EEA there would have been facts of life (such as which plants were poisonous) that were important to know early on. Our EEA was probably sufficiently simple and unchanging that once you were an adult there were few new abstract facts to learn.
This “story” explains why children ask adults awkward questions about politics, often displaying a wisdom apparently beyond their age. In reality, they just haven’t traded in their curiosity for signalling yet.
At least, that is one possible hypothesis.
I do sometimes wonder what proportion of people who think about political matters are asking questions with genuine curiosity, versus engaging in praise for the idea that they and their group have gone into a happy death spiral about.
I suspect that those who ask with genuine curiosity are overwhelmingly children.
EDIT: Others disagree that children are more genuinely curious. Perhaps it’s just the nerds who ask genuine questions then?
Great! Now that we’ve both signalled our allegiance to the h+ ideology, would you like to mate with me!?
For an explanation of why I call this “Hansonian”, see, for example, this. Hanson has lots of posts on how charity, ideology, etc. are all about affiliating with a tribe and finding mates.
it seems to be a game in which you counterfactually propose different states of the “government policy” node and explain why these would have the best effects, and whoever can give the best explanation gets rewarded with higher status.
no, no. The game is to counterfactually propose different states of the “government policy” node that involve making the government conform more to some ideology X, and then confabulate reasons why this would result in great success. In doing this, you signal your allegiance to ideology X.
But really, the game can work with pretty much anything in place of the “government policy” node; it can be pretty much any decision-making entity, including companies or diffuse classes of individuals. E.g.:
“If binge drinkers went to church more, then they would find the inner strength to overcome the addiction!” (signalling religious allegiance)
“Binge drinkers have the right to run their own lives, the government should keep its hands off them!” (signalling libertarian allegiance)
“Binge drinkers usually come from deprived families and had poor childhoods, it isn’t their fault, it’s the government’s fault for not having enough social welfare!” (signalling liberal allegiance)
“Binge drinking is caused by the breakdown of traditional family values, we need a return to good old-fashioned traditional family values!” (signalling conservative allegiance)
“Binge drinking could be prevented by human neuroenhancements that prevent alcohol from being addictive, we should push for faster research into such technology!” (signalling h+ allegiance)
See, e.g. Eliezer writing in 2000:
“There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I’m working to save everybody, heal the planet, solve all the problems of the world.”
This is rationality as a failure of compartmentalization—the attempt to take everything you hear seriously.
Many people enjoy reading books and watching films where the lead characters form a small group, pitted against all the odds to try to save the world. Many people—secular people—pay lip-service to the idea that every person in the world is equally important, and that we should value the life of an African peasant farmer as equal to our own.
It seems, however, that most people don’t actually take these notions seriously, because their actions seem to have little to do with such beliefs.
One day, a bunch of nerds got together and started a project called the Singularity Institute, and they actually took seriously the notion that they should try to save the world if it really was threatened, and that the lives of others should be assigned equal weight to their own. Almost everyone else thought they were really weird when they started to try to act on these beliefs.
The problem is, that isn’t an intuitively satisfying answer. We have an intuition that demands the answers to these questions, and yes, there are no answers, because the question needs “unasking”. But the process of unasking a question requires more than just saying why it doesn’t make sense, I think. One has to look for a way to satisfy what the intuition was asking for without asking the confused question.
Indeed, you could take two copies of me, reanimate one of them, let him live for a while, and then kill him, and then once you’ve done that reanimate the other copy and let him live. Which one do I experience being?
there’s a 50-50 chance I’ll find myself saved by tails vs. fluctuations.
why? why not just condition your existing probability distributions on continued conscious experience?
Ok, I see the point you are making. But when you say
quantum immortality must then run a consequentialist computation to distinguish
You are thinking of QI as an agent who has to decide what to do at a given time. But suppose a proponent of QI instead thinks of QI as simply the brute fact that there are certain paths through the tree structure of MWI QM that continue your conscious experience forever, and the substantive fact that what you actually experience will be randomly chosen from that set of paths.
I disagree with QI because I think that the very language being used to frame the problem is severely defective; the semantics of the word “I” is the problem.
The concept of “death” is too complex to be captured by any phenomenon other than the process of computation of this concept in human minds, or something derived therefrom.
I think that perhaps the word “I” suffers from the same problem.
Explain?
QI doesn’t specify. A reasonable assumption would simply be to condition upon your survival, so at 7:59 you assign, say, a probability of 1 - 10^-6 to the coin landing tails, and 10^-6 to other ways you could be saved, for example rescue by an Idiran assault force, a quantum bubble, etc.
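To make that conditioning explicit, here is a rough sketch, assuming a fair coin and an independent chance of about 10^-6 of some exotic rescue on the heads branch:

\[
P(\text{tails} \mid \text{alive})
  = \frac{P(\text{alive} \mid \text{tails})\,P(\text{tails})}
         {P(\text{alive} \mid \text{tails})\,P(\text{tails}) + P(\text{alive} \mid \text{heads})\,P(\text{heads})}
  = \frac{1 \times 0.5}{1 \times 0.5 + 10^{-6} \times 0.5}
  \approx 1 - 10^{-6},
\]

which leaves roughly 10^-6 of the posterior probability for the exotic rescue branches.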
seems to require a little too much advance planning.
This gets me too. Just how much advance planning is the universe allowed? Am I alive now, rather than in the year 1000, because we are sufficiently close to developing anti-aging treatments?
Upvoted for a sensible analysis of the problem. Want girls? Go get them. My experience is that a common mistake amongst academically inclined people is to expect reality to reward them for doing the right thing—for example men on LW may (implicitly, without realizing that they are doing it) expect attractive, eligible women to be abundant in the risk-mitigation movement, because mitigating existential risks is the right thing to do, and the universe is a just place which rewards good behavior.
The reality of the situation is that a male who spends time attempting to reduce existential risks will find himself in a community which is full of other males, which, relative to other hobbies he could have, will reduce his pool of available women.
Women who spend time attempting to reduce existential risks will find themselves surrounded by guys, who are preselected for intelligence and high ethical standards.