Speaking of crazy ideas… sitting around Googling methods of terrorism may not be the best way to stay off the CIA’s watch-list.
That was sort of my point. Most people are going to imagine it as a more perfect world. But if they were to think through all of the implications, they would see that it probably involves massive taxation and a very, very strong central government, with less motivation for people to do dirty and difficult jobs.
They want something they can’t, or don’t, accurately imagine.
Most people can’t imagine what a world without aging would be like, and they can’t want what they can’t imagine.
I have to agree with Lumifer—most people can imagine (and want) a world without aging, because they would not bother to think about the demographic trends. I would compare this to asking someone to imagine a world in which no one was living below the average income level; I think most people would agree that this is easy to conceive of, and desirable. It’s only the select few who would think this through and wonder how the powers that be are going to achieve this without doing something very drastic to a lot of people.
Your real world clone would take 20 years to “make” and be a separate person, like you would be if you grew up when they did.
This is partially missing the point. The goal is to make a separate body, compatible with your biology. There is no need to grow a clone with a functioning brain—any medical science sufficient to clone a human would be able to clone an acephalic human (WARNING, NSFL, fetus with head damage), and growing a clone with a fully functioning brain (i.e., not driven insane by being grown in a de facto sensory deprivation chamber) would be much more expensive, even if you kept education to a minimum.
Still, all this is ethically questionable, something that would need a lot of advance planning, and will be a long time in the future. It is true that fixing your body piecemeal will almost surely be a better option—even if it does end up involving some limited form of cloning organs.
This is consistent with 27chaos’s statement, though. If you get a body transplant at 65, you have solved a number of medical problems, and the chance of living the next 30 years without having to worry about Alzheimer’s is ~70%. Of course, Alzheimer’s disease accounts for only 60-80% of cases of dementia. But still, I think there would be a market.
It is also worth noting that cardiovascular factors, physical fitness, and diet contribute to the risk of dementia, including Alzheimer’s. These are not the greatest risk factors (as you might have guessed, age is the greatest risk factor), but these can be managed if you are motivated to do so—in fact, getting a new body should be a fairly effective way of managing cardiovascular fitness.
From the Freakonomics blog: “FDA prohibits any gifts to blood donors in excess of $25 in cumulative value”.
Various articles give different amounts for the price per pint that hospitals pay, but it looks like it’s in the range of $125 in most cases.
I’m not sure where you got the three-month figure from; in America we store blood for less than that, no more than six weeks. It is true that the value of your donation depends on your blood type, and you may find that your local organization asks you to change your donation type (platelets, plasma, whole blood) if you have a blood type that is less convenient. I do acknowledge that this question is much more relevant for those of us who are type O-.
I think you are often right about the marginal utility of blood. However, it is worth noting that the Red Cross both pesters people to give blood (a lot, even if you have directly asked them, multiple times, not to) and offers rewards for it—usually a t-shirt or a hat, but recently I’ve been getting $5 gift cards. Obviously, this is not intended to directly indicate the worth of the blood, but these factors do indicate that bribery and coercion are alive and well.
EDIT: The FDA prohibits any gifts to blood donors in excess of $25 in cumulative value.
It is also worth noting that there is a thriving industry paying for blood plasma, which may indicate that certain types of blood donation are significantly more valuable than others (plasma is of limited use, but can be given regardless of blood type).
“Chalmers argues that since such zombies are conceivable to us, they must therefore be logically possible. Since they are logically possible, then qualia and sentience are not fully explained by physical properties alone.”
This is shorthand for “in the two decades that Chalmers has been working on this problem, he has been defending the argument that...” You might look at his arguments and find them lacking, but he has spent much longer than five minutes on the problem.
It is definitely a necessary question to ask. You need to have a prediction of how effective your solutions will be. You also need predictions of how practical they are, and it may be that something very effective is not practical—e.g. banning Islam. You could make a list of things you should ask: how efficient, effective, sustainable, scalable, etc. But effective certainly has a place on the list.
FWIW, I have been a long-time reader of SF, have long been a believer in strong AI, and am familiar with friendly and unfriendly AIs and the idea of the singularity, but I hadn’t heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely normal, but they arose from a context close to normal.
My thought process until I started reading LessWrong and related sites was basically split between “scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks—playing chess, obscure math problems, managing complicated data traffic” and “intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever, with lots of dead ends”. Most of what I had read in terms of spontaneous AI was fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI.
I also believe that science reporting downplays the AI aspects of computer advances. Siri, self-driving cars, etc. are no longer referred to as AI in the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything that we have now is framed as just an interesting program, not an ‘intelligence’ of any sort.
I suspect you already know this, but just in case, in philosophy, a zombie is an object that can pass the Turing test but does not have internal experiences or self-awareness. Traditionally, zombies are also physically indistinguishable from humans.
Logically possible just means that “it works in theory”—that there is no logical contradiction. It is possible to have an idea that is logically possible but not physically possible; e.g., a physicist might come up with an internally consistent theory of a universe in which the speed of light in a vacuum is 3 mph.
These are in contrast to logically impossible worlds, the classic example being a world that contains both an unstoppable force and an immovable object; these elements contradict each other, so they cannot both occur in the same universe.
I’d like a quick peer review of some low-hanging fruit in the area of effective altruism.
I see that donating blood is rarely talked about in effective altruism articles; in fact, I’ve only found one reference to it on Less Wrong.
I am also told by the organizations that want me to donate blood that each donation (one pint) will save “up to three lives”. For all I know, all of these sites are parroting information provided by the Red Cross, and of course the Red Cross is highly motivated to exaggerate the benefit of donating blood; “up to three” is probably closer to “one” in practice.
But even so, if you can save one life by donating blood, can donate at essentially no cost to yourself, and can donate up to 6.5 times per year...
...and if the expected cost of saving a life through monetary donation runs to thousands of dollars, then giving blood is a great deal (a rough sketch of the arithmetic is below).
Am I missing anything?
And as a corollary, should I move my charitable giving to bribing people to donate blood whenever there is a shortage?
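To make the comparison concrete, here is a rough back-of-the-envelope sketch in Python. The specific figures (roughly one life per donation, 6.5 whole-blood donations per year, and $3,000 to save a life via monetary donation) are illustrative assumptions made for the sake of the calculation, not established numbers.

```python
# Back-of-the-envelope comparison of blood donation vs. monetary donation.
# All figures below are illustrative assumptions, not established values.

lives_saved_per_donation = 1.0       # discounting "up to three lives" to roughly one
donations_per_year = 6.5             # whole blood, roughly once every 8 weeks
dollars_per_life_via_charity = 3000  # assumed cost per life saved via monetary donation

implied_annual_value = (lives_saved_per_donation
                        * donations_per_year
                        * dollars_per_life_via_charity)

print(f"Implied value of a year of blood donations: ${implied_annual_value:,.0f}")
# With these assumptions: about $19,500 per year, for a few hours of donation time.
```

Even if the real numbers are several times worse than these assumptions, the implied value per hour still looks competitive with an ordinary cash donation, which is why the question above seems worth asking.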
I would re-frame the issue slightly; the process that philosophy/ethics goes through is something more like this:
If given A, B, and C we get D, and if not-A is unacceptable and not-B is unacceptable and not-C is unacceptable and D is unacceptable, then we do not fully understand the question. So let’s play around with all the possibilities and see what interesting results pop up!
Playing around should involve frequent revisits to the differential and integral inspections of the argument; if you are doing just one type of inspection, you are doing it wrong.
But in the end you might come to a solution, and/or you might write a very convincing-sounding paper on one; still, the assumption isn’t that you have now solved everything because your integrating or differentiating is nicely explicable. It is highly debatable whether any argument in ethics is more than an intuition pump used to convince people of your own point of view. After all, you cannot prove even the basic claim that happy people are good; we simply happen to accept that as a warrant… or, in some cases, make it our definition of ‘good’.
It is worth considering the possibility that all these ethical arguments do is try to make us comfortable with the fact that the world does not work in our favor… and the correct solution is to accept as a working hypothesis that there is no absolutely correct solution, only solutions that we should avoid because they feel bad or lead to disaster. (To clarify how this would work in the case of the Repugnant Conclusion, the correct-enough solution might involve each population setting its own limits on population size and happiness ranges, and those who disagree having to make their own way to a better population; alternatively, we might define an acceptable range and stick to it, despite political pundits criticizing our decision at every turn, and many people maintaining roiling angst at our temerity.)
In the end, the primary and perhaps only reason we continue to engage in philosophy is because it is foolish to stop thinking about questions simply because we do not have a solution (or an experimental process) that applies.
Slightly off from what you asked, but the CFAR list looks suboptimal. I would add The Invisible Gorilla (And Other Ways Our Intuitions Deceive Us) by Christopher Chabris and Daniel Simons. It is more thorough and more generally applicable than Predictably Irrational.
If people have other recommendations for books that are better than (or highly complementary to) the books on the CFAR lists, I would be interested in hearing them.
That’s because the distinction doesn’t actually exist. In particular, to the extent gender refers to a real concept and not a pure XML tag, it refers to what is commonly called sex.
This is an interesting claim. Things that are often lumped into ‘gender’ include dress, pronouns, and bathrooms, and these things are very important to people. Maybe they shouldn’t be, but they are.
You are rather unclear about what you are suggesting. One obvious interpretation is that caring whether you wear a dress or a tie is the same as hallucinating, and that we should stop doing it. This is an interesting claim… and potentially a very useful one. I would support this. I would also support unisex bathrooms and gender-neutral pronouns. So we might not disagree at all.
But it is also possible to interpret your claim as saying that non-standard genders are less valid than standard ones, even when the standard ones are arbitrary or harmful. This is harder to defend—in fact, no one defends this as a general theory except extreme moral relativists (the claim that ‘women are not worthy to vote’ is generally not held to be true or false depending on the culture you are in; the arguments for and against it are held to rest on higher reasoning; in case it is not obvious, voting was assigned as a gender role, and a voting woman was outside of either standard gender for much of our history). Obviously this is an extreme example, but you can see that gender matters; if the only way gender matters in your life is whether or not your bathroom has a urinal, that is very good for you.
(It is also possible that you believe that we currently have developed the perfect gender roles, and any change would be a worsening of conditions. This is sufficiently different from my own view that I am not willing to spend my time debating it unless you give me some sort of compelling evidence up front.)
I think that this is sufficient for you to see why I claim that you are equivocating on gender and sex. I’m fairly certain that what you actually want to claim is something much less strong, simply that using made-up pronouns and complaining about what bathroom you are assigned is annoying at best, and perhaps a sign of a personality disorder. In this case, you might want a very different list than the one provided.
Others have already mentioned the benefits of commercial sites pandering to their clients, so I needn’t elaborate.
There are two things here, if we care to stick to the discussion of edge cases (which is theoretically the point of this thread...)
The first is sex, in which case we should be talking about things like Turner’s syndrome and XYY syndrome; sex is not binary. It is only usually binary.
The second would be coming up with a definition of gender, and seeing if it matches our definition of sex. It is safe to say that 1) the use of ‘gender’ to mean the same as ‘sex’ is within the usual range of common usage, and 2) completely wrong under certain ‘domains’ (sociology, anthropology, a number of personal vocabularies, etc.).
That’s because the distinction doesn’t actually exist.
This seems to be saying that those domains are making a mistake in making this distinction—something that is hard to defend without knowing something of those domains. This is particularly hard to defend without making very strong definitions, and it is very hard to get strong definitions that we will agree on.
Since you appear to be new here, let me explain the local social norms. Around here people are expected to provide arguments for their positions.
Okay… People who are seeking to change social norms are not, in general, considered insane in the same way as someone who claims that their sensory input is showing them something different from everyone else’s.
For example, social norms would not allow women to walk topless outside except in exceptional situations, even where it is legal. This is often a problem… for example, even breastfeeding mothers, an edge case, are marginalized; more generally, bare-chested women as a class are marginalized. Changing this social norm would require changing gender norms. Generally, advocating for this change is acceptable, although not always respectable.
However, I do not think that you really care about gender norms. I think that you are specifically worried that adding further gender categories is a form of pandering to people who want to increase static without increasing signal; that is, the information that someone does not identify as traditionally female does not appear to be useful information to you, and therefore you view this as needless static.
Let me know if I am wrong.
However, if we are looking at domain knowledge as something worth exploring to allow you to interact with other people, as in the examples given for coders, you can see how awareness of this could be very important. After all, many people of non-standard genders feel more strongly about those identities than they do about religion, so you might reasonably view a basic knowledge of these views as equally important as knowing the basics of major religions.
If you are uncomfortable with this, you might simply avoid people who identify with non-standard genders. However, I would suggest that you can more profitably communicate with people of non-standard genders than with people who are hallucinating… That said, my personal sample size of interactions with people who are hallucinating/delusional in a psychological sense is fairly small, so I could be wrong.
America should adopt the metric system.