Today is the thirty-fourth anniversary of the official certification that smallpox had been eradicated worldwide. From Wikipedia:
The global eradication of smallpox was certified, based on intense verification activities in countries, by a commission of eminent scientists on 9 December 1979 and subsequently endorsed by the World Health Assembly on 8 May 1980. The first two sentences of the resolution read:
Having considered the development and results of the global program on smallpox eradication initiated by WHO in 1958 and intensified since 1967 … Declares solemnly that the world and its peoples have won freedom from smallpox, which was a most devastating disease sweeping in epidemic form through many countries since earliest time, leaving death, blindness and disfigurement in its wake and which only a decade ago was rampant in Africa, Asia and South America.
Archaeological evidence shows signs of smallpox infection in the mummies of Egyptian pharaohs. There was a Hindu goddess of smallpox in ancient India. By the 16th century it was a pandemic throughout the Old World, and epidemics with mortality rates of 30% were common. When smallpox arrived in the New World, there were epidemics among Native Americans with mortality rates of 80-90%. By the 18th century it was pretty much everywhere except Australia and New Zealand, which successfully used intensive screening of travelers and cargo to avoid infection.
The smallpox vaccine was one of the first ever developed, by English physician Edward Jenner in 1798. Vaccination programs in the wealthy countries made a dent in the pandemic, so that by WWI the disease was mostly gone in North America and Europe. The Pan-American Health Organization had eradicated smallpox in the Western hemisphere by 1950, but there were still 50 million cases per year, of which 2 million were fatal, mostly in Africa and India.
In 1959, the World Health Assembly adopted a resolution to eradicate smallpox worldwide. They used ring vaccination to surround and contain outbreaks, and little by little the number of cases dropped. The last naturally occurring case of the deadlier Variola major was found in October 1975, in a two-year-old Bangladeshi girl named Rahima Banu, who recovered after medical attention by a WHO team; the last natural case of any smallpox, a milder Variola minor infection, followed in Somalia in 1977. For the next several years, the WHO searched for more cases (in vain) before declaring the eradication program successful.
Smallpox scarred, blinded, and killed countless millions of people, on five continents, for thousands of years, and now it is gone. It did not go away on its own. Doctors invented, then perfected, a vaccine; engineers found ways to manufacture it very cheaply; and lots of other serious, dedicated people resolved to vaccinate each vulnerable human being on the surface of the Earth, and then went out and did it.
Because Smallpox Eradication Day marks one of the most heroic events in the history of the human species, it is not surprising that it has become a major global holiday in the past few decades, instead of inexplicably being an obscure piece of trivia I had to look up on Wikipedia. I’m just worried that as time goes on it’s going to get too commercialized. If you’re going to a raucous SE Day party like I am, have fun and be safe.
Old King Plague is dead, the smallpox plague is dead; no more children dying hard, no more cripples living scarred with the marks of the devil’s kiss; we still may die of other things, but we will not die of this.
Raise your glasses high for all who will not die, to all the doctors, nurses too, to all the lab technicians who drove it into the ground; if the whole UN does nothing else, it cut this terror down.
But scarce the headlines said the ancient plague was dead, then they were filled with weapons new, toxic waste and herpes too, and the AIDS scare coming on; ten new plagues will take its place, but at least this one is gone.
Population soars, checked with monstrous wars; preachers rant at birth control, “Screw the body, save the soul”; bring new deaths off the shelves, and say to Nature, “Mother, please, we’d rather do it ourselves”.
Old King Plague is dead, the smallpox plague is dead; no more children dying hard, no more cripples living scarred with the marks of the devil’s kiss; we still may die of other things, but we will not die of this, oh no, we will not die of this.
The virus now exists only as samples in two freezers in two labs (at least, known to the scientific community). These days I think even that is overkill for research purposes, what with the genome sequenced and the ability to synthesize arbitrary sequences artificially. If you absolutely must have part of it for research, make that piece again from scratch. Consign the rest of the whole infectious, replication-competent particles to the furnace where they belong.
EDIT: I found a paper in which smallpox DNA was extracted, and virus particles observed via EM, from a 50-year-old fixed tissue sample from a pathology lab that was not from one of the aforementioned collections. No word in the paper on whether it was potentially infectious or just detectable levels of nucleic acids and particles. These things could be more complicated to destroy with 100% certainty than we thought...
At risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 for the MIRI 2013 Winter Fundraiser. Since I’m a “new large donor”, this donation will be matched 3:1, netting a cool $20,000 for MIRI.
I have decided to post this because of “Why Our Kind Can’t Cooperate”. I have been convinced that people donating should publicly brag about it to attract other donors, instead of remaining silent about their donation which leads to a false impression of the amount of support MIRI has.
I’d be interested, but only for the small sum of $100. Did anybody else take you up on that offer? Of course I’d like to verify the pool person’s identity before transferring money.
I have been convinced that people donating should publicly brag about it to attract other donors
It certainly seems to make sense, for the sake of the cause, for (especially large, well-informed) donors to make their donations public. The only downside seems to be a potentially conflicting signal sent by the giver.
instead of remaining silent about their donation which leads to a false impression of the amount of support MIRI has.
I’m not sure this is true. Doesn’t MIRI publish its total receipts? Don’t most organizations that ask for donations?
Growing up Evangelical, I was taught that we should give secretly to charities (including, mostly, the church).
I wonder why? The official Sunday School answer is so that you remain humble as the giver, etc. I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?
I’m not sure this is true. Doesn’t MIRI publish its total receipts? Don’t most organizations that ask for donations?
Total receipts may not be representative. There’s a difference between MIRI getting funding from one person with a lot of money and large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW do care about donating, rather than just a few very rich people like Peter Thiel or Jaan Tallinn.
Also I suspect scope neglect can be at play—it’s difficult, on an emotional level, to tell the difference between $1 million worth of donations, $10 million, or $100 million. Seeing each of the donations that add up to that amount may help.
Seeing each of the donations that add up to that amount may help.
Yes, because it would show how many people donated. Number of people = power, at least in our brains.
The difference between one person donating $100,000, or one person donating $50,000 and ten people donating $5,000 each, is that in the latter case your team has eleven people. It is the same amount of money, but emotionally it feels better. Probably it has other advantages (such as smaller dependence on the whims of a single person), but maybe I am just rationalizing here.
Hm. Possibly. Though it does still seem to be a rather popular convention in churches today to adopt an interpretation favoring secret offerings.
I would imagine popular interpretations of scriptures on giving would evolve based on the goals of the church (to get $$$), kept in check only by needing to be believable enough to the member congregations.
Tithing seems to work for the church, so lots of churches resurrect it from the OT with some really shaky exegesis and make it a part of the rules. If tithing didn’t work for the church, they could easily make it go away the same way they get rid of tons of outdated stuff from the OT (and the NT).
Secret offerings seem similar to me. I’d imagine they could make the commands for secret giving go away with some simple hermeneutical waves of the hand if it didn’t benefit them.
I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?
This gives the church an information advantage. Information is power. It gives them the opportunity to make it seem like everyone is donating less than their neighbors.
Ah. So the leaders can give the ongoing message to “give generously” to a group, and as long as the giving data is kept secret and no one ever speaks to anyone else about how much they gave, each member will feel compelled to continue to give more in an effort to (a) “please God” and (b) gain favor in the eyes of the leaders by keeping up with, or outgiving, the other members. Is this what you are saying? If not, can you elaborate?
Look at Mormons. They have a rule that you have to donate 10% of your income. If you don’t, then you aren’t pleasing god, and god might punish you.
In reality the average Mormon doesn’t donate 10% but might feel guilty for not doing so. If someone who donates 7% knew that they were donating above average, they would feel less guilty about not meeting the goal of donating 10%.
It is possible that they are setting the bar too low. You might have many people who would have given 30% had the command been not for 10% but for 30%.
It is possible that they are setting the bar too low.
Yes, it is. Choosing that particular number might not be optimal. But there is a cost to setting the number too high: if people don’t think they can reach that standard, they might not even try.
I’d guess 10% is not an arbitrary number, but rather is a sort of market equilibrium that happens to be supportable by a certain interpretation of OT scripture. It might have just as well been 3% or 7% or 12% as these numbers are all pretty significant in the OT, and could have been used by leadership to impose that % on laypeople.
In any case, in my experience within the church, there are tithes… AND then there are offerings, which include numerous different causes to give to on any given Sunday. It was often stated that these causes (building projects, missions outreaches, etc.) were in addition to your tithe.
It is funny to me… it is almost like the reverse of a compensation plan you’d build for a team of commissioned salespeople. Instead of optimizing the plan to incentivize sales performance by motivating your salespeople to sell, the church may have evolved its doctrines and practices on giving to optimize for collecting revenue by motivating its members to give. Ha.
It might have just as well been 3% or 7% or 12% as these numbers are all pretty significant in the OT
This is of course no argument against anything substantive you’re saying, but while the numbers 3, 7, and 12 are certainly all significant in the OT, the idea of percentage surely wasn’t. I can see 1/3, or 1/7, or 1/12, though.
Good point. Though, as I recall, there isn’t much basis in the OT for the modern-day concept of tithing at all, percentage or otherwise. Christianity points to verses about giving 1/10th of your crops to the priest as the basis.
If they really wanted to change the rules and up it to 1/7th, or 12%, or anything they want, they could come up with some new basis for that math using fancy hermeneutics.
This is sort of what is happening right now with homosexuality. Many churches are changing their views. They are justifying that by reinterpreting the verses they’ve used to condemn it in the past.
In fact, you can pretty much get the Bible to support any position or far-fetched belief you’d like. You only need a few verses… and it’s a big book.
We should encourage people to purchase status when that purchase involves doing things we want or giving money to causes we like. Unless you prefer traditional schemes for status assignment like height, handsomeness, ability to throw a ball, and mass murder.
See my comment on the “In Praise of Tribes that Pretend to Try” thread
If donating to purchase status is accepted and encouraged, it risks becoming the main motive behind donations. This in turn creates perverse incentives for the recipient of such donations.
It sounds to me like somebody is purchasing utilons, using themselves as an example to get other people to also purchase utilons, and incidentally deriving a small amount of well deserved status from the process.
PSA: If you want to get store-bought food (as opposed to eating out all the time or eating Soylent), but you don’t want to have to go shopping all the time, check to see if there is a grocery delivery service in your area. At least where I live, the delivery fee is far outbalanced by the benefit of almost no shopping time, slightly cheaper food, and decreased cognitive load (I can just copy my previous order, and tweak it as desired).
If you don’t have a car, study on the bus/train, or treat the commute as bicycling exercise if the distance is relatively short and you can take a shower.
Possibly cooking very large meals and saving the rest. If you want to save money by cooking from scratch rather than buying prepared food or eating out, it can help to prepare several meals’ worth at a time.
Dave Asprey claims that you can get by fine on five hours of sleep if you optimize it to spend as much time in REM and delta sleep as possible. This appeals to me more than polyphasic sleep does. Link
Also, I was intrigued when xkcd mentioned the 28-hour day, but I don’t know of anyone who has maintained that schedule.
Dave Asprey claims he can do well on 5 hours of sleep, and then makes the further claim that any other adult (he recommends not trying serious sleep reduction until you’re past 23) can also do well on 5 hours. To judge by a quick look at the comments, rather few of his readers are trying this, let alone succeeding at it.
Do you have any information about whether Asprey’s results generalize?
There are by now some quite extensive studies about the amount of required or healthy sleep.
Sleep duration is roughly normally distributed between 5 and 9 hours, and for some of those getting 5 or fewer hours of sleep this appears to be healthy:
Jane E. Ferrie, Martin J. Shipley, Francesco P. Cappuccio, Eric Brunner, Michelle A. Miller, Meena Kumari, Michael G. Marmot: A Prospective Study of Change in Sleep Duration: Associations with Mortality in the Whitehall II Cohort.
So Dave Asprey is probably one of the 1% for whom this is correct.
Some improvements (or changes) may be possible for most of us, though. You can get along with less sleep if you sleep at your optimum sleep time (which differs depending on your genes, esp. the Period 3 gene) and if you fall asleep quickly.
Polyphasic sleep may significantly reduce your sleep total, but nobody seems to be able to say what the health effects are. It might be that it risks your long-term health.
Another benefit for me is reduced mistakes in picking items from the list.
Some people don’t use online shopping because they worry pickers may make errors. My experience is that they do, but at a much lower rate than I do when I go myself. I frequently miss minor items off my list on the first circuit through the shop, and don’t go back for them because it’d take too long to find them. I am also influenced by in-store advertising, product arrangements, “special” offers and tiredness into purchasing items that I would rather not. It’s much easier to whip out a calculator to work out whether an offer really is better when you’re sat calmly at your laptop than when you’re exhausted towards the end of a long shopping trip.
You’d expect paid pickers to be better at it—they do it all their working hours, I only do it once or twice a month. Also, all the services I’ve used (in the UK) allow you to reject any mistaken items at your door for a full refund—which you can’t do for your own mistakes. The errors pickers make are different to the ones I would, which makes them more salient—but they are no more inconvenient in impact on average.
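The “whip out a calculator” check above is just a unit-price comparison, which is easy to script once instead of redoing at the shelf. A throwaway sketch; the labels, prices, and weights are invented for illustration:

```python
# Unit-price check for "special" offers: the lowest price per kilogram wins.
# All labels, prices, and weights below are invented for illustration.
offers = [
    ("single pack", 2.50, 500),    # (label, price, grams)
    ("3-for-2 deal", 5.00, 1500),
    ("jumbo pack", 6.80, 2000),
]

for label, price, grams in offers:
    print(f"{label}: {price / grams * 1000:.2f} per kg")

best = min(offers, key=lambda o: o[1] / o[2])
print("Best value:", best[0])
```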
My family does this and it’s not such a good idea. Old forgotten food will accumulate at the bottom and you’ll have less usable space at the top. Chucking out the old food is a) a trivial inconvenience and b) guilt-inducing.
Unless it’s one of those freezers with sliding trays.
I disagree with this. Having lived in the US my entire life (specifically MA and VA), I’ve been in very few homes that had chest freezers, and as far as I recall, none that only had chest freezers (as opposed to extra storage beyond a combination refrigerator/freezer).
I’m not willing to pay to resolve this difference of perception, but if one wanted to do so, the information is probably available here.
I am not sure we disagree. I’m not saying that people are using chest freezers instead of normal refrigerators. I’m saying that if a family buys a separate freezer in addition to a regular fridge, in the US that separate freezer is likely to be a chest freezer.
Here on the West Coast I’ve seen both standing and chest models, although combination refrigerator/freezers are far more common than either. I associate the chest style with hunters and older people, but that likely reflects my upbringing; I wouldn’t hazard a guess as to which is more common overall.
Most of the food that I eat doesn’t freeze or doesn’t freeze well (think fruits and vegetables). Frozen meat is OK for a stew but not at all OK for steaks.
I find—based on my personal experience—the texture, aromas, etc. of fresh food to be quite superior to those of frozen food.
I find—based on my personal experience—the texture, aromas, etc. of fresh food to be quite superior to those of frozen food.
I hear that if you stir-fry vegetables, then frozen is a better option. (I eat most of the vegetables I eat raw or dehydrated, neither of which seem to do well if you freeze them first.)
I hear that if you stir-fry vegetables, then frozen is a better option.
I think it depends on whether you can get your heat high enough.
The point of stir-frying frozen veggies is to brown the outside while not overcooking the inside. Normally this is done by cooking non-frozen veggies at very high heat but a regular house stove can’t do it properly—so a workaround is to use frozen.
The good kind of already-frozen vegetables are much tastier, have better texture, and have kept more of their nutrients. That is because an ordinary home freezer is nowhere near quick enough at freezing to preserve most vegetables well.
Regarding food in particular, I’m still wishing Romeo Stevens would commercialize his tasty and nutritious soylent alternative so I could buy it the same way I buy juice from the grocery store.
New work suggests that life could have arisen and survived a mere 15 million years after the Big Bang, when the microwave background radiation levels would have provided sufficient energy to keep almost all planets warm. Summary here, and actual article here. This is still very preliminary, but the possibility at some level is extremely frightening. It adds billions of years of time for intelligent life to have arisen that we don’t see, and if anything suggests that the Great Filter is even more extreme than we thought.
Now that is scary, although there are a few complications. Rocky bodies were probably extremely rare during that time since the metal enrichment of the Universe was extremely low. You can’t build life out of just hydrogen and helium.
Doesn’t the relevant number of opportunities for life to appear have units of mass-time?
Isn’t the question not how early was some Goldilocks zone, but how much mass was in a Goldilocks zone for how long? This says that the whole universe was a Goldilocks zone for just a few million years. The whole universe is big, but a few million years is small. And how much of the universe was metallic? The paper emphasizes that some of it was, but isn’t this a quantitative question?
I agree that a few million years is small, and that the low metal content would be a serious issue (which, in addition to being a problem for life forming, would also make planets rare, as pointed out by bramflakes in their reply). However, the real concern as I see it is that if everything was like this for a few million years, then if life did arise (and you have a whole universe for it to arise in), as the cooldown occurred, it seems highly plausible that some forms of life would have then adapted to the cooler environment. This makes panspermia more plausible and thus makes life in general more likely. Additionally, it gives life more of a chance to get lucky if it managed to get into one of the surviving safe zones (e.g. something like the Mars-Earth biotransfer hypothesis).
I think you may be correct that this isn’t a complete run-around-and-panic-level update, but it is still disturbing. My initial estimate of how bad this could be was likely overblown.
I’m nervous about the idea that life might adapt to conditions in which it cannot originate. Unless you mean spores, but they have to wait for the world to warm up.
As for panspermia, we have a few billion years of modern conditions before the Earth, which is itself already a problem. I think the natural comparison is the size of that Goldilocks zone to the very early one. But I don’t know which is bigger.
Here are three environments. Which is better for radiation of spores? (1) a few million years where every planet is wet (2) many billion years, all planets cold (3) a few billion years, a few good planets.
The first sounds just too short for anything to get anywhere, but the universe is smaller. If one source of life produces enough spores to hit everything, then greater time depth is better, but if they need to reproduce along the way, the modern era seems best.
I’m nervous about the idea that life might adapt to conditions in which it cannot originate.
Why? This happened on Earth. It is pretty likely, for example, that life couldn’t originate in an environment like the Sahara desert, but life can adapt and survive there.
I do agree that spores are one of the more plausible scenarios. I don’t know enough to really answer the question, and I’m not sure that anyone does, but your intuition sounds plausible.
There’s barely any life in the Sahara. It looks a lot like spores to me. I want a measure of life that includes speed. Some kind of energy use or maybe cell divisions. I expect the probability of life developing in a place to be proportional to the amount of life there after it arrives. Maybe that’s silly; there certainly are exponential effects of molecules arriving at the same place at the same time that aren’t relevant to the continuation of life. But if you can rule out this claim, I think your model of the origin of life is too detailed.
There’s barely any life in the Sahara. It looks a lot like spores to me.
I’m not sure what you mean by this.
I want a measure of life that includes speed.
Do you mean something like the idea that if an environment is too harsh even if life can survive the chance that it will evolve into anything beyond a simple organism is low?
We should have the data now to take a whack at the metallicity side of that question, if only by figuring out how many Population II stars show up in the various extrasolar planet surveys in proportion to Population I. Don’t think I’ve ever seen a rigorous approach to this, but I’d be surprised if someone hasn’t done it.
One sticking point is that the metallicity data would be skewed in various ways (small stars live longer and therefore are more likely to be Pop II), but that shouldn’t be a showstopper—the issues are fairly well understood.
The paper mentions a model. Maybe the calculation is even done in one of the references. The model does not sound related to the observations you mention.
I don’t think this is frightening. If you thought life couldn’t have arisen more than 3.6 billion years ago but then discover that it could have arisen 13.8 billion years ago, you should be at most 4 times as scared.
The number of habitable planets in the galaxy over the number of inhabited planets is a scary number.
The time span of earth civilization over the time span of earth life is a scary number.
If it were just a date, then, yes, a factor of 4 is lost in the noise. But switching to panspermia changes the calculation. Try Overcoming Bias [Added: maybe this is only a change under Robin Hanson’s hard steps model.]
It changes my epistemic position by a helluva lot more than a factor of 4. If an interstellar civilization arose somewhere in the universe that is now visible, somewhere in a uniform distribution over the last 3.6 billion years, there’s much smaller chance we’d currently (or ever) be within their light cone than if they’d developed 13.8 billion years ago.
It’s potentially scary not because of the time difference, but because of the quantity of habitable planets. It’s understood that current conditions in the Universe make it so that only relatively few planets are in the habitable zone. But if the Universe was warm, then almost all planets would be in the habitable zone, making the likelihood of life that much higher.
As I said in my reply to JoshuaZ though, the complication is that rocky planets were probably much rarer than they are now.
There weren’t any planets 15 million years after the Big Bang. The first stars formed 100 million years after the Big Bang, and you need another few million on top of that for the planets to form and cool down.
It seems to take a lot more than 15 million years to get from “life” to “intelligent life”. According to the article this period would only have lasted for a million years, so at most we would probably get a lot of monocellular life arising and then dying during the cooloff.
1 - Why should it be surprising that no intelligent life arose from a set of places that were likely habitable for only 5 million years (if they existed at all, which is doubtful)?
2 - I raise the possibility of outcomes for intelligent life that are not destruction or expansion through the universe.
Edit: Gah, that’s what I get for leaving this window open while about 8 other people commented
The paper implies that it only adds millions of years, not billions.
a new regime of habitability made possible for a few Myr by the uniform CMB radiation
Once the CMB cools down enough with the expansion of the Universe, the Goldilock conditions disappear. The CMB temperature is roughly inversely proportional to the age of the Universe, so 300K at 15 million years becomes just 150K 15 million years later.
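To make that arithmetic concrete, here is a minimal sketch under the comment’s stated rough scaling (T proportional to 1/t, anchored at 300 K at 15 Myr). Note that the scaling itself is the comment’s simplification; in a matter-dominated universe the CMB temperature falls more like t^(-2/3), so treat the outputs as illustrative only.

```python
# CMB temperature vs. cosmic age under the rough T ~ 1/t scaling stated
# above, anchored at 300 K when the universe was 15 Myr old.
T0_KELVIN, T0_AGE_MYR = 300.0, 15.0

def cmb_temp(age_myr: float) -> float:
    return T0_KELVIN * T0_AGE_MYR / age_myr

for age in (15, 30, 60):
    print(f"age = {age} Myr -> T ~ {cmb_temp(age):.0f} K")
# Liquid-water temperatures (~273-373 K) persist only a few Myr on this scaling.
```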
I decided I’d share the list of questions I try to ask myself every morning and evening. I usually spend about thirty seconds on each question, just thinking about them, though I sometimes write my answers down if I have a particularly good insight. I find they keep me pretty well-calibrated to my best self. Some are idiosyncratic, but hopefully these will be generally applicable.
A. Today, this week, this month:
What am I excited about?
What goals do I have?
What questions do I want to answer?
What specific ways do I want to be better?
B. Yesterday, last week, last month:
What did I accomplish that I am proud of?
In what instances did I behave in a way I am proud of?
What did I do wrong? How will I do better?
What do I want to remember? What adventures did I have?
C. Generally:
If I’m not doing exactly what I want to be doing, why?
For about a month and a half, though I forget about 25% of the time. I haven’t noticed any strong effects, though I feel as if I approach the day-to-day more conscientiously and often get more out of my time.
For a term in university I followed a similar method. Every day I would post ‘Today’s Greatest Achievement:’ in the relevant social media of the time. There was a noticeable improvement in happiness and extra-curricular productivity as I more actively sought out novel experiences, active community roles, and academic side projects. The daily reminder led to a far more conscientious use of my time.
The combination of being reminded that I had spent all weekend playing video games, and of broadcasting to my entire social circle that my greatest achievement in the past 48 hours was in a mindless video game, led to immediate behavior changes.
Are there any translation efforts in academia? It bothers me that there may be huge corpuses of knowledge that are inaccessible to most scientists or researchers simply because they don’t speak, say, Spanish, Mandarin, or Hindi. The current solution to this problem seems to be ‘everyone learn English’, which seems to do OK in the hard sciences. But I fear there may be a huge missed opportunity in the social sciences, especially because Americans are WEIRD and not necessarily psychologically or behaviorally representative of the world population. (Link is to an article; link to the cited paper here: pdf)
This was translated into / written in English and published in a peer-reviewed journal (Neural Regeneration Research). And it’s complete crap.
Of course there is very bad research published by the West on alternative medicine too, but as the links I provide show, Chinese research is systematically and generally of very low quality. If China cannot produce good research, what can we expect of other countries?
Some time ago someone linked a paper indicating that there are benefits to the fragmentation of academia by language barriers, as fewer people are exposed to some kind of dominant view, allowing them to come up with new ideas. One cited example was anthropology, which had both a Russian and an Anglosphere tradition.
I’d assume there aren’t any major translation efforts, as being a translator isn’t as effective as publishing something of your own by far.
being a translator isn’t as effective as publishing something of your own by far.
Publishing your own scientific paper brings you more rewards, but translating another person’s article requires less time and less scientific skill (just enough to understand the vocabulary and follow the arguments).
If someone paid me to do it, I would probably love to have a job translating scientific articles into my language. It would be much easier for me to translate a dozen articles than to create one. And if I only translated articles that passed some filter, for example those published in peer-reviewed journals, I could probably translate the output of twenty or fifty scientists.
It seems like there could definitely be money in ‘international’ journals for different fields, which would aggregate credible foreign papers and translate them. Interesting that they don’t seem to exist.
How effective would it be to use human expertise to translate just the contents pages of journals, with links to Google Translate for the bodies of the papers? Or perhaps use humans to also translate the abstracts?
Idea that popped into my head: it might be straightforward to make a frontend for the arXiv that adds a “Translate this into” drop-down list to every paper’s summary page. (Using the list could redirect the user to Google Translate, with the URL for the PDF automatically fed into the translator.) As far as I know no one has done this but I could be wrong.
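For what it’s worth, the link construction itself is nearly trivial. A minimal sketch, with the caveats that the translate.google.com query-parameter scheme below is an assumption based on how its webpage translator has historically worked (not a documented API), and the arXiv ID is an arbitrary example:

```python
from urllib.parse import urlencode

def translate_link(arxiv_id: str, target_lang: str) -> str:
    # arXiv serves the PDF of a paper at a predictable URL.
    pdf_url = f"https://arxiv.org/pdf/{arxiv_id}"
    # Assumed scheme: sl = source language (auto-detect), tl = target
    # language, u = URL of the document to translate.
    return "https://translate.google.com/translate?" + urlencode(
        {"sl": "auto", "tl": target_lang, "u": pdf_url}
    )

# One redirect target per entry in the drop-down list (example arXiv ID):
for lang in ("es", "zh-CN", "hi"):
    print(translate_link("1312.0613", lang))
```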
The Body Electric mentioned that the Soviets were ahead of the West in studying electrical fields in biology because (not sure of the date—sometime before the seventies) electricity sounded too much like élan vital to Westerners.
Possibly this Body Electric. It’s at least about the right subject, but I’d have sworn I’d read it much earlier than 1998, and my copy (buried somewhere) probably had a purple cover.
The cover on the hardcover looks more familiar, and at least it’s from 1985.
That’s interesting. I read your comment out of context and didn’t know you were making a point about the language. I agree that I don’t like thinking about electricity in animals (or, more strongly, any coordinated magnetic phenomena, etc.) because of this association. There is a similarity in the sounds (“electrical” and “élan vital”), but the concepts are also close in concept space… perhaps the Soviets lacked this ugh field altogether.
I was using “sounded like” metaphorically. I assume they knew the difference in meaning, but were affected by the similarity of concepts and worry about their reputations.
I guessed that the Soviets were more willing to do the research because Marxism was kind of like weird science, so they were willing to look into weird science in general. However, this is just a guess. A more general hypothesis is that new institutions are more willing to try new things.
I am not professionally involved in these fields, but I have read that among those who are there is a very jaundiced opinion of Chinese and Indian scientific research. Apparently, completely ignoring their publications is a good heuristic unless at least one of the following holds: at least one foreign co-author, an author who did their doctorate in the first world, or an institution or author with a significant reputation. Living in China and having some minimal experience with the Chinese attitude to plagiarism/copying/research makes this seem plausible. I doubt anyone’s missing anything by ignoring scientific articles published in Mandarin. I make no such claims for the social sciences.
I’m expecting China to have an increasing role in global affairs over the next century. With that in mind, there are a couple of things I’m curious about:
Does anyone have an idea of how prevalent existential risk type ideas are in China?
Has anyone tried to spread LW memes there?
Are the LW meetups in Shanghai, etc. mostly ex-pats or also locals?
Gregory Cochran has written something on aging. I’ll post some selected parts, but you should read the whole thing, which is pretty short.
Theoretical biology makes it quite clear that individuals ought to age. Every organism faces tradeoffs between reproduction and repair. In a world with hazards, such that every individual has a decreasing chance of survival over time, the force of natural selection decreases with increasing age. This means that perfect repair has a finite value, and organisms that skimp on repair and instead apply those resources to increased reproduction will have a greater reproductive rate – and so will win out. Creatures in which there is no distinction between soma and germ line, such as prokaryotes, cannot make such tradeoffs between repair and reproduction – and apparently do not age. Which should be a hint.
...
In practice, this means that animals that face low exogenous hazards tend to age more slowly. Turtles live a long time. Porcupines live a good deal longer than other rodents. [...] Organisms whose reproductive output increases strongly with time, like sturgeons or trees, tend to live longer. The third way of looking at things is thermodynamics. Is aging inevitable? Certainly not. As long as you have an external source of free energy, you can reduce entropy with enthalpy.
...
In principle there is no reason why people couldn’t live to be a billion years old, although that might entail some major modifications (and an extremely cautious lifestyle). The third way of looking at things trumps the other two. People age, and evolutionary theory indicates that natural selection won’t produce ageless organisms, at least if their germ cells and body are distinct - but we could make it happen.
This might take a lot of work. If so, don’t count on seeing effective immortality any time soon, because society doesn’t put much effort into it. In part, this is because the powers that be don’t understand the points I just made.
Nothing entirely new to me here, but it’s always good to see another scientist come out in favor of aging research. Also, note that the Latin text on the top of Cochran’s website is omnes vulnerant, ultima necat, which means approximately, “Each second wounds, the last kills.”
It just doesn’t matter very much—certainly not enough to keep wrangling over the exact definition of the boundary. As long as we understand what we mean by crystal, bacterium, RNA, etc., why should we care about the fuzzy dividing line? Are ribozymes going to become more or less precious to us according only to whether we count them as living or not, given that nothing changes about their actual manifested qualities? Should they?
Every science uses terms which are called universal terms, such as ‘energy’, ‘velocity’, ‘carbon’, ‘whiteness’, ‘evolution’, ‘justice’, ‘state’, ‘humanity’. These are distinct from the sort of terms which we call singular terms or individual concepts, like ‘Alexander the Great’, ‘Halley’s Comet’, ‘The First World War’. Such terms as these are proper names, labels attached by convention to the individual things denoted by them.
[...] The school of thinkers whom I propose to call methodological essentialists was founded by Aristotle, who taught that scientific research must penetrate to the essence of things in order to explain them. Methodological essentialists are inclined to formulate scientific questions in such terms as ‘what is matter?’ or ‘what is force?’ or ‘what is justice?’ and they believe that a penetrating answer to such questions, revealing the real or essential meaning of these terms and thereby the real or true nature of the essences denoted by them, is at least a necessary prerequisite of scientific research, if not its main task. Methodological nominalists, as opposed to this, would put their problems in such terms as ‘how does this piece of matter behave?’ or ‘how does it move in the presence of other bodies?’ For methodological nominalists hold that the task of science is only to describe how things behave, and suggest that this is to be done by freely introducing new terms wherever necessary, or by re-defining old terms wherever convenient while cheerfully neglecting their original meaning. For they regard words merely as useful instruments of description.
Most people will admit that methodological nominalism has been victorious in the natural sciences. Physics does not inquire, for instance, into the essence of atoms or of light, but it uses these terms with great freedom to explain and describe certain physical observations, and also as names of certain important and complicated physical structures. So it is with biology. Philosophers may demand from biologists the solution of such problems as ‘what is life?’ or ‘what is evolution?’ and at times some biologists may feel inclined to meet such demands. Nevertheless, scientific biology deals on the whole with different problems, and adopts explanatory and descriptive methods very similar to those used in physics.
The quote says that biologists don’t deal with questions such as “what is life?” because that’s essentialism and that’s Bad. Similarly, physicists certainly don’t study ideal systems like atoms or light. The disease is in the false dichotomy.
Oh, hmm, I thought what he was saying about atoms and light is not that physicists don’t study those things, but that they don’t study some abstract platonic version of light or atom derived from our intuitions, but instead use those words to describe phenomena in the real world and then go on to continue investigating those phenomena on their own terms.
So, for example, “Do radio waves really count as light?” is not a very interesting question from a physics perspective once you grant that both radio waves and visible light are on the same electromagnetic wave spectrum. Or with atoms we could ask, “Are atoms really atoms if they can be broken down into constituent parts?” These would just be questions about human definitions and intuitions rather than about the phenomena themselves. And so it is with the question, “What is life?”
That’s what it seemed like Popper was saying to me. Did you have a different interpretation? Also, I’m not sure I’ve understood your comment—which dichotomy are you saying is a false dichotomy?
Asking whether radio waves really count as light is just arguing a definition. That’s not interesting to anyone who understands the underlying physics.
Notice that the questions he gives for essentialists are actually interesting questions, they’re just imprecisely phrased, e.g. “what is matter?” These questions were asked before we’d decided matter was atoms. They were valid questions and serious scientists treated them. Now these questions are silly because we’ve already solved them and moved on to deeper questions, like “where do these masses come from?” and “how will the universe end?”
When a theorist comes up with a new theory they are usually trying to answer one of these essentialist questions. “What is it about antimatter that makes it so rare?” The theorist comes up with a guess, computes some results, spends a year processing LHC data, and realizes that their theory is wrong. At some point in here they switched from essentialist (considering an ideal model) to nominalist (experimental data), but the whole distinction is unnecessary.
… they don’t study some abstract platonic version of light or atom derived from our intuitions …
Yes, they most certainly do. QED is an extremely abstract idea, derived from intuition about how the light we interact with on a classical level behaves. This is called the correspondence principle.
String theorists come up with a theory based entirely on mathematical beauty, much like Plato.
I think you’re reading Popper uncharitably, and his view of what physicists do is about the same as yours. He really is arguing against arguing definitions. “What is matter?” is an ambiguous question: it can be understood as asking about a definition, “what do we understand by the word ‘matter’, exactly?”, and it can be understood as asking about the structure, “what are these things that we call matter really made of, how do they behave, what are their properties, etc.?”. The former, to Popper, is an essentialist question; the latter is not.
Your understanding of “essentialist questions” is not that of Popper; he wouldn’t agree with you, I’m sure, that “What is it about antimatter that makes it so rare?” is an essentialist question. “Essentialist” doesn’t mean, in his treatment, “having nothing to do with experimental data” (even though he was very concerned with the value of experimental data and would have disagreed with some of modern theoretical physics in that respect). A claim which turns out to be unfalsifiable is anathema to Popper, but it is not necessarily an “essentialist” claim.
Oh, hmm. I see now that we were interpreting Popper differently, and I may have been wrong.
Notice that the questions he gives for essentialists are actually interesting questions, they’re just imprecisely phrased, e.g. “what is matter?” These questions were asked before we’d decided matter was atoms. They were valid questions and serious scientists treated them. Now these questions are silly because we’ve already solved them and moved on to deeper questions …
If Popper did mean to exclude that kind of inquiry, then I agree with you that he was misguided.
In that case, it sounds like you would agree with the rest of Anatoly’s comment, just not the Popper quote. Is that right?
The precise definition of life will not be the thing that will determine our opinion about possible extraterrestrial life when we come across it. It will matter whether that hypothetical life is capable of growth, change, producing offspring, heredity, communication, intelligence, etc. etc. - all of these things will matter a lot. Having a very specific subset of these enshrined as “the definition of life” will not matter. This is what Popper’s quote is all about.
The precise definition of life will not be the thing that will determine our opinion about possible extraterrestrial life when we come across it.
It’s possible that extraterrestrial life will be nothing but a soup of RNA molecules. If we visit a planet while its life is still in the embryonic stages, we need to include that in our discourse of life in general. We need to have a word to represent what we are talking about when we talk about it. That’s the only purpose any definition ever serves. If you want to go down the route of ‘the definition of life is useless’, you might as well just say ‘all definitions are useless’.
What I meant is that stars are born, they procreate (by spewing out new seeds for further star formation), then grow old. Stars “evolved” to be mostly smaller and longer lived due to higher metallicity. They compete for food and they occasionally consume each other. They sometimes live in packs facilitating further star formation, for a time. Some ancient stars have whole galaxies spinning around them, occasionally feeding on their entourage and growing ever larger.
Don’t traits have to be heritable for evolution to count? I’m not an expert or anything, but I thought I’d know if stars’ descendants had similar properties to their parent stars.
Descendant stars might have proportions of elements related to what previous stars generated as novas. I don’t know whether there’s enough difference in the proportions to matter.
Can you give an example of a property a star might have because having that property made its ancestor stars better at producing descendant stars with that property?
Sorry, I’m not an expert in stellar physics. Possibly metallicity, or maybe something else relevant. My original point was to agree that there is no good definition of “life” which does not include some phenomena we normally don’t think of as living.
What’s wrong with ‘A self-sustaining (through an external energy source) chemical process characterized by the existence of far-from-equilibrium chemical species and reactions.’?
Suspect you would have a difficult time defining “external energy source” in a way that excludes fire but includes mitochondria.
True; what is meant is a simple external energy source such as radiation or a simple chemical source of energy. It’s true that this is a somewhat fuzzy line though.
Which equilibrium? Stars are far from the eventual equilibrium of the heat death, and also not at equilibrium with the surrounding vacuum.
I specifically said far-from-equilibrium chemical species and reactions. The chemistry that goes on inside a star is very much in equilibrium conditions.
Not clear whether viruses, prions, and crystals are included or excluded.
Viruses are not self-sustaining systems, so they are obviously excluded. You have to consider the system of virus+host (plus any other supporting processes). Same with prions. Crystals are excluded since they do not have any non-equilibrium chemistry.
what is meant is a simple external energy source such as radiation or a simple chemical source of energy.
I do not see how this answers the objection. All you did was add the qualification ‘simple’ to the existing ‘external’. Is this meant to exclude fire, or include it? If the former, how does it do so? Presumably plant matter is a sufficiently “simple” source of energy, since otherwise you would exclude human digestion; plant matter also burns.
The chemistry that goes on inside a star is very much in equilibrium conditions.
Again, which equilibrium? The star is nowhere near equilibrium with its surroundings.
Viruses are not self-sustaining systems,
Neither are humans… in a vacuum; but viruses are quite self-sustaining in the presence of a host. You are sneaking in environmental information that wasn’t there in the original “simple” definition.
Look at my reply to kalium. To reiterate, the problem is that people confuse objects with processes. The definition I gave explicitly refers to processes. This answers your final point.
All you did was add the qualification ‘simple’ to the existing ‘external’. Presumably plant matter is a sufficiently “simple” source of energy, since otherwise you would exclude human digestion; plant matter also burns.
I already conceded that it’s a fuzzy definition. As I said, you are correct that ‘simple’ is a subjective property. However, if you look at the incredibly complex reactions that occur inside human cells (gene expression, ribosomes, ATP production, etc.), then yes, amino acids and sugars are indeed extremely simple in comparison. If you pour some sugars and phosphates and amino acids into a blender you will not get much DNA; not nearly in the quantities in which it is found in cells. This is what is meant by ‘far from equilibrium’. There is much more DNA in cells than you would find if you took the sugars and fatty acids and vitamins and just mixed them together randomly.
Again, which equilibrium? The star is nowhere near equilibrium with its surroundings.
Ok, chemical equilibrium. This does not seem to me like a natural boundary; why single out this particular equilibrium and energy scale?
As I said, you are correct that ‘simple’ is a subjective property.
I think you’re missing my point, which is that I don’t see how your definition excludes fire as a living thing.
The definition I gave explicitly refers to processes. This answers your final point.
I don’t think it does. A human in vacuum is alive, for a short time. How do you distinguish between “virus in host cell” and “human in supporting environment”?
why single out this particular equilibrium and energy scale?
Because the domain of chemistry is broad enough to contain life as we know it, and also hypothesized forms of life on other planets, without being excessively inclusive.
I think you’re missing my point, which is that I don’t see how your definition excludes fire as a living thing.
I tried to answer it. The chemical species that are produced in fire are the result of equilibrium reactions (see http://en.wikipedia.org/wiki/Combustion). They are simple chemical species (with more complex species only being produced in small quantities, consistent with equilibrium). In particular, compared to the feedstock, they are nowhere near as complex as the products of living chemistry are.
I don’t think it does. A human in vacuum is alive, for a short time. How do you distinguish between “virus in host cell” and “human in supporting environment”?
They are both part of living processes. The timescale for ‘self-sustaining’ does not need to be forever. It only needs to be for some finite time that is larger than what would be expected of matter rolling down the energy hill towards equilibrium.
As I said, you have to consider the system of parasite+host (plus any other supporting processes).
I think a lot of the confusion arises from people confusing objects with processes that unfold over time. You can’t ask if an object is alive by itself; you have to specify the time-dynamics of the system. Statements like ‘a bacterium is alive’ are problematic because a frozen bacterium in a block of ice is definitely not alive. Similarly, a virus that is dormant is most definitely not alive. But that same virus inside a living host cell is participating in a living process i.e. it’s part of a self-sustaining chain of non-equilibrium chemical reactions. This is why I specifically used the words ‘chemical process’.
So this is a definition for “life” only, not “living organism,” and you would say that a parasite, virus, or prion is part of something alive, and that as soon as you remove the parasite from the host it is not alive. How many of its own life functions must a parasite be able to perform once removed from the host in order for it to be considered alive after removal from the host?
How many of its own life functions must a parasite be able to perform once removed from the host in order for it to be considered alive after removal from the host?
As the definition says. It must demonstrate non-equilibrium chemistry and must be self-sustaining. Again, ‘simple forms of energy’ is relative, so I agree that there’s some fuzziness here. However, if you look at the extreme complexity of the chemical processes of life (dna, ribosomes, proteins, etc.) and compare that to what most life consumes (sugars, minerals, etc.) there is no ambiguity. It’s quite clear that there’s a difference.
Are you sure that all life is chemical? There’s a common belief here that a sufficiently good computer simulation of a human being counts as being that person (and presumably, a sufficiently good computer simulation of an animal counts as being an animal, though I don’t think I’ve seen that discussed), and that’s more electrical than chemical, I think.
I have a notion that there could be life based on magnetic fields in stars, though I’m not sure how sound that is.
I guess it depends on your philosophical position on ‘simulations’. If you believe simulations “aren’t the real thing”, then a simulation of chemistry “isn’t actual chemistry”, and thus a simulation of life “isn’t actual life.” Anyways, the definition I gave doesn’t explicitly make any distinction here.
About exotic forms of life, it could be possible. A while ago I had some thoughts about life based on quark-gluon interactions inside a neutron star. Since neutron star matter is incredibly compact and quarks interact on timescales much faster than typical chemistry, you could have beings of human-level complexity existing in a space of less than a cubic micrometer and living out a human-lifespan-equivalent existence in a fraction of a second.
But these types of life are really really speculative at this point. We have no idea that they could exist, and pretty strong reasons for thinking they couldn’t. It doesn’t seem worth it to stretch a definition of life to contain types of life we can’t even fathom yet.
Any good advice on how to become kinder? This can really be classified as two related goals, 1) How can I get more enjoyment out of alleviating others suffering and giving others happiness? 2) How can I reliably do 1 without negative emotions getting in my way (ex. staying calm and making small nudges to persuade people rather than getting angry and trying to change people’s worldview rapidly)?
I’d recommend Nonviolent Communication for this. It contains specific techniques for how to frame interactions that I’ve found useful for creating mutual empathy. How To Win Friends And Influence People is also a good source, although IIRC it’s more focused on what to do than on how to do it. (And of course, if you read the books, you have to actually practice to get good at the techniques.)
Thanks! And out of curiosity, does the first book have much data backing it? The author’s credentials seem respectable so the book would be useful even if it relied on mostly anecdotal evidence, but if it has research backing it up then I would classify it as something I need (rather than ought) to read.
According to Wikipedia, there’s a little research and it’s been positive, but it’s not the sort of research I find persuasive. I do have mountains of anecdata from myself and several friends whose opinions I trust more than my own. PM me if you want a pdf of the book.
I would like to offer further anecdotal evidence that NVC techniques are useful for understanding your own and other people’s feelings and feeling empathy toward them.
Thirded. The most helpful part for me was internalising the idea that even annoying/angry/etc outbursts are the result of people trying to get their needs met. It may not be a need I agree with, but it gives me better intuition for what reaction may be most effective.
When it comes to research about paradigms like that, it’s hard to evaluate them. If you look at nonviolent communication and set up your experiment well enough, I think you will definitely find effects.
The real question isn’t whether the framework does something but whether it’s useful. That in turn depends on your goals.
Whether a framework helps you to successfully communicate depends a lot on cultural background of the people with whom you are interacting.
If you engage in NVC, some people with a strong sense of competition might see you as weak.
If you consistently engaged in NVC in your communication on LessWrong, you might be seen as a weird outsider.
You would need an awful lot of studies to be certain about the particular tradeoff in using NVC for a particular real world situation.
I don’t know of many studies that compare whether Windows is better than Linux, or whether Vim is better than Emacs. Communication paradigms are similar: they are complex and difficult to compare.
I find NVC very intuitively compelling, and I have personal anecdotal evidence that it works (though not independent of ESRogs; we go to the same class).
In addition to seconding nonviolent communication, cognitive behavior therapy techniques are pretty good—basically mindfulness exercises and introspection. If you want to change how you respond to certain situations (e.g. times when you get angry, or times when you have an opportunity to do something nice), you can start by practicing awareness of those situations, e.g. by keeping a pencil and piece of paper in your pocket and making a check mark when the situation occurs.
I also want to learn how to be kinder. The sticking point, for me, is better prediction about what makes people feel good.
I was very ill a year ago, and at that time learned a great deal about how comforting it is to be taken care of by someone who is compassionate and knowledgeable about my condition. But for me, unless I’m very familiar with that exact situation, I have trouble anticipating what will make someone feel better.
This is also true in everyday situations. I work on figuring out how to make guests feel better in my home and how to make a host feel better when I’m the guest. (I already know that my naturally overly-analytic, overly-accommodating manner is not most effective.) I observe other people carefully, but it all seems very complex and I consider myself learning and a ‘beginner’—far behind someone who is more natural at this.
I have trouble anticipating what will make someone feel better.
In this kind of situation, I usually just ask, outright, “What can I do to help you?” Then I can file away the answer for the next time the same thing happens.
However, this assumes that, like me, you are in a strongly Ask culture. If the people you know are strongly Guess, you might get answers such as “Oh, it’s all right, don’t inconvenience yourself on my account”, in which case the next best thing is probably to ask 1) people around them, or 2) the Internet.
You also need to keep your eyes out for both Ask cues and Guess cues of consent and nonconsent—some people don’t want help, some people don’t want your help, and some people won’t tell you if you’re giving them the wrong help because they don’t want to hurt your feelings. This is the part I get hung up on.
The “keep your eyes out for cues” works the other way around in what we’re calling a “Guess culture” as well.
That is, most natives of such a culture will be providing you with hints about what you can do to help them, while at the same time saying “Oh, it’s all right, don’t inconvenience yourself on my account.” Paying attention to those hints and creating opportunities for them to provide such hints is sometimes useful.
(I frequently observe that “Guess culture” is a very Ask-culture way of describing Hint culture.)
Yes, I would like to improve on all of this. I haven’t found the internet particularly helpful.
And I do find myself in a bewildering ‘guess’ culture. Asking others (though not too close to the particular situation) would probably yield the most information.
I find myself happier when I act more kindly to others. In addition, lowering suffering/increasing happiness are pretty close to terminal values for me.
It mostly boils down to simply concentrating on feeling nice towards everyone. There is some technical advice on how to turn the vague goal of ‘feeling nice’ into more concrete mental actions (through visualization, repeating specific phrases, focusing on positive qualities of people) and how to structure the practice by having a progression of people toward whom you generate warm fuzzy feelings, of increasing levels of difficulty (like starting with yourself and eventually moving on to someone you consider an enemy). Most of this can be found in the Wiki article or easily googled.
What are the community norms here about sexism (and related passive-aggressive “jokes” and comments about free speech) at the LW co-working chat? Is LW going for Wheaton’s Law or free speech, and to what extent should I be attempting to make people who engage in such activities feel unwelcome, if I should be at all?
I have hesitated to bring this up because I am aware it’s a mind-killer, but I figured that if Facebook can contain a civil discussion about vaccines, then LW should be able to talk about this.
There are no official community norms on the topic.
For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it’s acceptable to differentially characterize women negatively as long as we do so in the proper linguistic register (e.g., adopting an academic and objective-sounding tone, avoiding personal characterizations, staying cool and detached).
The people who believe this ought to be unacceptable are either less common or less visible about it. The majority is generally silent on such matters, though it will generally join in condemning blatant register violations.
The usual result is something closer to Wheaton’s Law at the surface level, but closer to “say what you think is true” at the structural level. (Which is not quite free speech, but a close enough cousin in context.) That is, it’s often considered OK to say things, as long as they are properly hedged and constructed, that if said more vulgarly or directly would be condemned for violating Wheaton’s Law, and which in other communities would be condemned for a variety of reasons.
I think there’s a general awareness that this pattern-matches to sexism, though I expect that many folks here consider that to be mistaken pattern-matching (the “I’m not sexist; I can’t help it if you feminists choose to interpret my words and actions that way” stance).
So my guess is that if you attempt to make people who engage in sexism (and related defenses) feel unwelcome you will most likely trigger net-negative reactions unless you’re very careful with your framing.
It does answer my question. Also thanks for suggestion to focus on the behaviour rather than the person. I didn’t even realize I was thinking like that till you two pointed it out.
That is, it’s often considered OK to say things, as long as they are properly hedged and constructed, that if said more vulgarly or directly would be condemned for violating Wheaton’s Law, and which in other communities would be condemned for a variety of reasons.
Yes, and this is best, is it not? I enjoy reading what people have to say, even if their views are directly in contradiction to mine. I’ve changed my views more than once because it was correctly pointed out to me why my views were wrong. http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind
And about being vulgar, it’s just a matter of human psychology. People in general—even on LW—are more receptive to arguments that are phrased politely and intelligently. We’d all like to think that we are immune to this, but we are not.
Disclaimer: this is not meant as a defence of the behaviour in question, since I don’t exactly know what we’re talking about.
For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it’s acceptable to differentially characterize women negatively
LessWrong characterizes outgroups negatively all the time. I cautiously suggest the whole premise of LW characterizes most people negatively, and that it’s easier to talk about any outgroup’s irrationality, in this case women’s, statistically, than to look at our own flaws. If we talked about what men are like on average, we might not have many flattering things to say either.
Should negative characterizations of people be avoided in general, irrespective of how accurately we think they describe the average of the groups in question?
If you see characterizations that are wrong, you should obviously confront them.
I agree that there are also other groups of people who are differentially negatively characterized; I restricted myself to discussions of women because the original question was about sexism.
I cautiously suggest you could say the whole premise of LW characterizes most people negatively,
I would cautiously agree. There’s a reason I used the word “differentially.”
Should negative characterizations of people be avoided in general, irrespective of how accurately we think they describe the average of the groups in question?
Personally, I’m very cautious about characterizing groups by their averages, as I find I’m not very good at avoiding the temptation to then characterize individuals in that group by the group’s average. This is particularly problematic since I can assign each individual to a vast number of groups and then end up characterizing that individual differently based on the group I select, even though I haven’t actually gathered any new evidence. I find it’s a failure mode my mind is prone to, so I watch out for it.
If your mind isn’t as prone to that failure mode as mine, your mileage will of course vary.
I’m not sure how not being differential is supposed to work though. Different groups have different kinds of failure modes.
Suppose it’s actually true in the world that all people are irrational, that blue-eyed people (BEPs) are irrational in a blue way, green-eyed-people (GEPs) are irrational in a green way, and green and blue irrationality can be clearly and meaningfully distinguished from one another.
Now consider two groups, G1 and G2. G1 often discusses both blue and green irrationality. G2 often discusses blue irrationality and rarely discuss green irrationality. The groups are otherwise indistinguishable.
How would you talk about the difference between G1 and G2? (Or would you talk about it at all?)
For my own part, I’m comfortable saying that G2 differentially negatively characterizes BEPs more than G1 does. That said, I acknowledge that one could certainly argue that in fact G1 differentially negatively characterizes BEPs just as much as G2 does, because it discusses blue and green irrationality differently, so if you have a better suggestion for how to talk about it I’m listening.
What if G1=BEP and G2=GEP and discussing outgroup irrationality is much easier than discussing ingroup irrationality? Now suppose G1 is significantly larger than G2, and perhaps even that discussing G1 is more relevant to G2 winning and discussing G2 is more relevant to G1 winning. What is the situation going to look like for a member of G2 who’s visiting G1? How about if you mix the groups a bit? Is it wrong?
if you have a better suggestion for how to talk about it I’m listening.
You connotationally implied the behaviour you described to be wrong. Can you denotationally do that?
What is the situation going to look like for a member of G2 who’s visiting G1?
I expect a typical G2/GEP visiting a G1/BEP community in the scenario you describe, listening to the BEPs differentially characterizing GEPs as irrational in negative-value-laden ways, will feel excluded and unwelcome and quite possibly end up considering the BEP majority a threat to their ongoing wellbeing.
How about if you mix the groups a bit?
I assume you mean, what if G1 is mostly BEPs but has some GEPs as well? I expect most of G1’s GEP minority to react like the G2/GEP visitors above, though it depends on how self-selecting they are. I also expect them to develop a more accurate understanding of the real differences between BEPs and GEPs than they obtained from a simple visit. I also expect some of G1’s BEP majority to develop a similarly more-accurate understanding.
Is it wrong?
I would prefer a scenario that causes less exclusion and hostility than the above. How about you?
You connotationally implied the behaviour you described to be wrong. Can you denotationally do that?
I’m not sure.
As I said, I’m cautious about characterizing groups by their averages, because it leads me to characterize individuals differently based on the groups I tend to think of them as part of, rather than based on actual evidence, which often leads me to false conclusions.
I suspect this is true of most people, so I endorse others being cautious about it as well.
I would prefer a scenario that causes less exclusion and hostility than the above. How about you?
I definitely want less exclusion and hostility, but I’m not sure the above scenario causes them for all values of GEP and BEP, nor for all kinds of examples of their irrationality. Perhaps we’re assuming different values for the moving parts in the scenario, although we’re pretending to be objective.
Many articles here are based on real-life examples, and this makes them more interesting. This often means picking an outgroup and demonstrating how they’re irrational. To make things personal, I’d say health care has gotten its fair share, especially in the OB days. I never thought the problem was that my ingroup was disproportionately targeted; I was more concerned about strawmen and the fact that I couldn’t do much to correct them.
Would it have been better if I had not seen those articles? I don’t think so, since they contained important information about the authors’ biases. They also told me that perhaps characterizations of other groups here are relatively inaccurate too. Secret opinions cannot be intentionally changed. Had their opinions been muted, I would have received information only through inexplicable downvotes when talking about certain topics.
I’m not sure the above scenario causes them for all values of GEP and BEP
I’m not exactly sure what reference class you’re referring to, but I certainly agree that there exist groups in the above scenario for whom negligible amounts of exclusion and hostility are being created.
Perhaps we’re assuming different values for the moving parts in the scenario, although we’re pretending to be objective.
I don’t know what you intend for this sentence to mean.
Would it have been better if I had not seen those articles? I don’t think so, [..] Had their opinions been muted, I would have received information only through inexplicable downvotes when talking about certain topics.
I share your preferences among the choices you lay out here.
Specific ones? Not especially. But it’s hard to know how to respond when someone concludes that C1 is superior to C2 and I agree, but I have no idea what makes the set (C1, C2) interesting compared to (C3, C4, .., Cn).
I mean, I suppose I could have asked you why you chose those two options to discuss, but to be honest, this whole thread has started to feel like I’m trying to nail Jell-O to a tree, and I don’t feel like doing the additional work to do it effectively.
So I settled for agreeing with the claim, which I do in fact agree with.
I have no idea what makes the set (C1, C2) interesting
I find that difficult to believe.
I’m trying to nail Jell-O to a tree,
I suggest this is because all we had was Jell-O and nails in the first place, but of course there are also explanations (E1, E2, .., En) you might find more plausible :)
If your mind isn’t prone to that failure mode, your mileage will of course vary.
Perhaps any such characterizations should be explicitly hedged against this failure mode, instead of being tabooed. I also think people should confront ambiguous statements, instead of just assuming they’re malicious.
Ideally, I’d want the people to feel that the behavior is unwelcome rather than that they themselves are unwelcome, but people are apt to have their preferred behaviors entangled with their sense of self, so the ideal might not be feasible. Still, it’s probably worth giving a little thought to discouraging behaviors rather than getting rid of people.
Depends on how you define sexism. Some people consider admitting that men and women are different to be sexism, never mind acting on that belief :-/
TheOtherDave’s answer is basically correct. Crass and condescending people don’t get far, but it’s possible to have a discussion of the issues which cost Larry Summers so dearly.
Since this comment is framed in part as endorsing mine, I should probably say explicitly that while I agree denotationally with every piece of this comment taken individually, I don’t endorse the comment as a whole connotationally.
I connotationally interpret your question as: “what are the community norms about bad things?”
You’re not giving us enough information to know what you’re talking about, and you’re asking for our blind permission to condemn behaviour you disagree with.
Fair critique. Despite the lack of clarity on my part the comments have more than satisfactorily answered the question about community norms here. I suppose the responders can thank g-factor for that :)
I don’t have an answer here, just a note that this question actually contains two questions, and it would be good to answer both of them together. It would also be a good example of using rationalist taboo.
A: What are the community norms for defining sexism?
B: What are the community norms for dealing with sexism (as defined above)?
Answering B without answering A can later easily lead to motivated discussions about sexism, where people would be saying: “I think that X is [not] an example of sexism” when what they really wanted to say would be: “I think that it is [not] appropriate to use the community norm B for X”.
If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you’re free to call them out on it and shame them for it if you want.
I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of “what you said is sexist” as opposed to “you are sexist”.
To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.
A norm of “don’t be a dick” isn’t inherently a violation of free speech. The question is, does LW co-working chat have a norm of not being a dick? Would being a dick likely lead to unfavorable reactions, or would objecting to dickish behavior be frowned on instead?
I’d like to see some evidence that such stuff is going on before pointing fingers and making rules that could possibly alienate a large fraction of people.
I’ve been attending the co-working chat for about a week, on and off (I take the handle of ‘fist’) and so far everyone seems friendly and more than willing to accommodate the girls in the chat. Have you personally encountered any problems?
I did encounter this problem (once) and I was experiencing resistance to going back even though I had a lot of success with the chat. I figured having a game plan for next time would be my solution.
I have been musing over the results of Rindermann, Coyle and Becker’s survey of intelligence experts presented at the ISIR conference. Since you may well be reading a newspaper this Sunday, I thought it might interest you to show what the experts think of the coverage of intelligence in the public media. By way of explanation, the authors cast their net widely, but did some extra sampling of the German media. Readers might like to suggest their own likes and dislikes in terms of the accuracy of coverage. I will be adding more details on other issues later. In yellow is the original survey 30 years ago, in blue the current 2013 survey.
According to the survey of experts, Steve Sailer outperforms everyone else.
That there are differences between identical twins is known, but the article goes into detail about the types of difference, including effects which are in play before birth.
Wirth’s law is a computing adage made popular by Niklaus Wirth in 1995. It states that “software is getting slower more rapidly than hardware becomes faster.”
Is Wirth’s Law still in effect? Most of the examples I’ve read about are several years old.
ETA: I find it interesting that Wirth’s Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I’m no expert though, I just wonder what changed.
It was my impression that Wirth’s law was mostly intended to be tongue-in-cheek, and refer to how programs with user interfaces are getting bloated (which may be true depending on your point of view).
In terms of software that actually needs speed (numerical simulations, science and tech software, games, etc.) the reverse has always been true. New algorithms are usually faster than old ones. Case in point is the trusty old BLAS library which is the workhorse of scientific computing. Modern BLAS implementations are extremely super-optimized, far more optimized than older implementations (for current computing hardware, of course).
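To make the gap concrete, here is a minimal sketch in Python (assuming numpy, which hands matrix multiplication off to whatever BLAS it was built against; the sizes and timings are illustrative, not a benchmark):

    # Naive triple-loop matrix multiply vs. numpy's BLAS-backed multiply.
    import time
    import numpy as np

    def naive_matmul(a, b):
        # Textbook O(n^3) multiply, no blocking or vectorization.
        n, p, m = len(a), len(b), len(b[0])
        c = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                s = 0.0
                for k in range(p):
                    s += a[i][k] * b[k][j]
                c[i][j] = s
        return c

    n = 200
    a, b = np.random.rand(n, n), np.random.rand(n, n)

    t0 = time.time()
    naive_matmul(a.tolist(), b.tolist())
    t1 = time.time()
    np.dot(a, b)  # dispatches to the BLAS gemm routine under the hood
    t2 = time.time()
    print("naive: %.3fs  BLAS: %.5fs" % (t1 - t0, t2 - t1))

On a typical machine the BLAS call wins by several orders of magnitude: the same algorithm class, just decades of implementation work apart.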
It wasn’t even true in 1995, I don’t think. The first way of evaluating it that comes to mind is the startup times of “equivalent” programs, like MS Windows, Macintosh OS, various Corels, etc.
Startup times for desktop operating systems seem to have trended up, then down, between the ’80s and today; with the worst performance being in the late ’90s to 2000 or so when rebooting on any of the major systems could be a several-minutes affair. Today, typical boot times for Mac, Windows, or GNU/Linux systems can be in a handful of seconds if no boot-time repairs (that’s “fsck” to us Unix nerds) are required.
I know that a few years back, there was a big effort in the Linux space to improve startup times, in particular by switching from serial startup routines (with only one subsystem starting at once) to parallel ones where multiple independent subsystems could be starting at the same time. I expect the same was true on the other major systems as well.
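As a toy model of why that switch matters (this is not how real init systems are structured; the four “subsystems” here just sleep to stand in for I/O-bound initialization):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def start_subsystem(name, seconds):
        time.sleep(seconds)  # stand-in for waiting on disk, network, etc.
        return name

    subsystems = [("disk", 1.0), ("network", 1.5), ("sound", 0.5), ("display", 1.0)]

    # Serial startup: total time is the sum of the parts (~4s here).
    t0 = time.time()
    for name, secs in subsystems:
        start_subsystem(name, secs)
    print("serial: %.1fs" % (time.time() - t0))

    # Parallel startup: independent subsystems overlap, so the total
    # approaches the slowest single subsystem (~1.5s here).
    t0 = time.time()
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(lambda s: start_subsystem(*s), subsystems))
    print("parallel: %.1fs" % (time.time() - t0))

Real init systems additionally have to respect dependency ordering between subsystems, which is why the gains took actual engineering rather than a one-line change.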
My experience is that boot time was worst in Windows Vista (released 2007) and improved a great deal in Windows 7 and 8. MS Office was probably at its worst in bloatiness in the 2007 edition as well.
It would be interesting to plot the time sequence of major chip upgrades from intel on the same page as the time sequence of major upgrades of MS Word and/or MS Excel. My vague sense is the mid/early 90s had Word releases that I avoided for a year or two until faster machines came along that made them more usable from my point of view. But it seems the rate of new Word releases has come way down compared to the rate of new chip releases. That is, perhaps hardware is creeping up faster than features are in the current epoch?
I find it interesting that Wirth’s Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I’m no expert though, I just wonder what changed.
I think both software and hardware got further out on the learning curve, which means their real rates of innovative development have both slowed down, which means the performance of software has sped up.
I don’t get how I get to the last part of that sentence from the first part either, but it almost makes sense.
I mean, this formulation is wrong (software isn’t getting slower), except for the tongue-in-cheek original interpretation, I guess. On the other hand, software is getting faster at a slower rate than hardware is, and that is still an important observation.
This insight also leads to a helpful lesson about just what “having an open mind to a different culture” really means. At bottom, it means having faith in the people who subscribe to the culture: faith that these people are motivated by the same forces as we are, that they are not stupid, irrational, or innately predisposed to a certain temperament, and that whatever they are doing will make sense once we understand the entire circumstance.
There are a couple of commercially available home EEG sets now. Has anyone tried them? Are they useful tools for self-monitoring mental states?
[Reposted from the last thread because I think I was too late to be seen much.]
I think the studies at the beginning of the book provide pretty compelling evidence that it’s at least worth looking into more.
“Just five years after Kamiya’s discovery, Barry Sterman published his landmark experiment (Wyrwicka & Sterman, 1968). Cats were trained to increase sensorimotor rhythm (SMR), or 12–15 Hz. This frequency bandwidth usually increases when motor activity decreases. Thus, the cats were rewarded each time that SMR increased, which likely accompanied a decrease in physical movements. Unrelated to his study, NASA requested that Sterman study the effects of human exposure to hydrazine (rocket fuel) and its relationship to seizure disorder. Sterman started his research with 50 cats. Ten out of the 50 had been trained to elevate SMR. All 50 were injected with hydrazine. Much to Sterman’s surprise, the 10 specially trained cats were seizure resistant. The other 40 developed seizures 1 hour after being injected (Budzynski, 1999, p. 72; Robbins, 2000, pp. 41–42). Sterman had serendipitously discovered a medical application for this new technology.”
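For what it’s worth, the signal-processing side of this is not exotic. Here is a minimal sketch of how you could track SMR-band power from one EEG channel (Python with numpy and scipy; the signal below is synthetic and purely illustrative, and real headset data would need artifact rejection first):

    import numpy as np
    from scipy.signal import welch

    fs = 256  # sample rate in Hz, typical for consumer EEG hardware
    t = np.arange(0, 10, 1.0 / fs)
    # Fake trace: noise plus a 13 Hz component standing in for SMR.
    eeg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 13 * t)

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band = (freqs >= 12) & (freqs <= 15)
    smr = np.trapz(psd[band], freqs[band])  # power in the 12-15 Hz band
    total = np.trapz(psd, freqs)            # total power
    print("relative SMR power: %.1f%%" % (100 * smr / total))

A neurofeedback loop is essentially this computation run on a sliding window, with a reward cue whenever the band power crosses a threshold.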
Why does a human care if a monkey cares about whether a human can crush leaves? For things like us primates, sometimes these things are their own reward.
Do the monkeys ever crush leaves like that for themselves? Otherwise I think it is more likely that the monkey is giving him a gift, hoping that he will reciprocate by giving it a treat, or maybe just petting it. The leaves just happen to be what the monkey has most easily available at the time.
Yes. What I was thinking was that people had previously given the monkeys treats by putting something in the monkey’s hand and closing its fingers, so the monkey is more or less imitating something that it wants the human to do.
It is not that teaching is too complex for a monkey, it is that I don’t see what exactly it’s teaching, but I feel that I recognize what the monkey is doing as the “you keep this” gesture.
I’ve heard it said that, when cats present a kill to their owners, it’s a form of trying to teach the owner to hunt. I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.
If so, I’d predict this happens more often in more social animals. So yes to lions and monkeys, no to bears and hamsters. This would suggest we’d see similar behavior from dogs, though, and I can’t think of examples of dogs trying to teach humans any skills. This is particularly damning for my hypothesis, since dogs are known for their cooperation with humans.
I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.
It’s hard for me to imagine how this wouldn’t be the case. It is a highly non-trivial sensory/processing problem for a cat to look at another cat and think “This creature is a cat, just like I am a cat, therefore we should take care of each other” but, at the same time, to look at a human and think “This creature is a human, it is not like me, therefore it does not share my interests.”
This problem is especially acute for cats, compared to dogs, because cats don’t really form tight-knit packs, and they have less available processing power.
I’d like to see some more research on the psychology of pack behavior and how/why animals cooperate with each other though.
Many of the leaders in the field of AI are no longer writing programs themselves: They don’t waste their time debugging miles of code; they just sit around thinking about this and that with the aid of the new [CS-specific] concepts. They’ve become… philosophers! The topics they work on are strangely familiar (to a philosopher) but recast in novel terms.
The Red Queen hypothesis implies that humans are probably just the latest step in a long sequence of fast (on an evolutionary timescale) value changes. So does Coherent Extrapolated Volition (CEV) intend to
1) extrapolate all the future co-evolutionary battles humans would have and predict the values of the terminal species as our CEV, or is it intended somehow to
2) freeze the values humans have at the point in time we develop FAI and build a cocoon around humanity which will let it keep this (nearly) arbitrarily picked point in its evolution forever?
If it is 1), it seems the AI doesn’t have much of a job to do. Presumably interfere against existential risks to humanity and its successor species, and perhaps keep extremely reliable stocks for repopulating if humanity or its successor still manages to kill itself. Maybe even, in a less extreme interpretation, FAI does what is required to keep humanity and its successors as the pinnacle species, stealing adaptations from unrelated species that actually manage to threaten us and our successors, so we sort of have 1’), which is: extrapolate to a future where the pinnacle species is always a descendant of ours.
If 2), it would seem FAI could simply build a sim that freezes in place the evolutionary pressures that brought us to this point, as well as freezing into place our own current state, and then run that sim forever; the sim simply removes genetic mutation and perhaps actively rebalances against any natural selection which is currently going on.
We could have BOTH futures: those who prefer 2) go live in the sim that they have always thought was indistinguishable from reality anyway, and those who prefer 1) stay here in the real world and play out their part in evolving whatever comes next. Indeed, the sim of 2) might serve as a form of storage/insurance against existential threats, a source from which human history can be restarted from its state at year 0 of FAI whenever needed.
Does CEV crash into the Red Queen hypothesis in interesting ways? Could a human value be to roll the dice on our own values in hopes of developing an even more effective species?
Neither. CEV is supposed to look at what humanity would want if they were smarter, faster, and more the people they wished they were. It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.
It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.
Well, considering that we, at the point we create the FAI, are completely a product of the blind idiot god, and that our CEV is therefore some extrapolation of where that blind idiot had gotten us by the time we finally got the FAI going, it seems very difficult to me to say that the blind idiot god has been taken out of the picture at all.
I guess the idea is that by US being smart and the FAI being even smarter, we are able to whittle down our values until we get rid of the froth (dopey things like being a virgin when you marry and never telling a lie) and move through the 6 stages of morality to the top one; then the FAI discovers the next 6 or 12 stages and runs sims or something to cut even more foam and crust until there are only one or two really essential things left.
Of course those one or two things were still placed there by the blind idiot god. And if something other than them had been placed by the blind idiot, CEV would have come up with something else. It does not seem there is any escaping this blind idiot. So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot, if the appearance is false?
We are not escaping the blind idiot god in the sense of it not having any control. We are escaping in the sense that we have full control. To some extent they overlap, but that doesn’t matter. I only care about being in control, not about everything else not being in control.
So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot, if the appearance is false?
The value is in escaping the parts that harm us. Evolution made me enjoy chocolate, and evolution also made me grow old and die. I would love to have an eternal happy life. I don’t see any good reason to get rid of the chocolate, although I would accept trading it for something better.
CEV is supposed to refer to the values of current humans. However, this does not necessarily imply that an FAI would prevent the creation of non-human entities. I’d expect that many humans (including me) would assign some value to the existence of interesting entities with somewhat different (though not drastically different) values than ours, and the satisfaction of those values. Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.
Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.
An interesting question: is the CEV dynamic? As we spend decades or millennia in the walled gardens built for us by the FAI, would the FAI be allowed to drift its own values through some dynamic process of checking with the humans within its walls to see how their values might be drifting? I had been under the impression that it would not, but that might have been my own mistake.
No. CEV is the coherent extrapolation of what we-now value.
Edit: Dynamic value systems likely aren’t feasible for recursively self-improving AIs, since an agent with a dynamic goal system has incentive to modify into an agent with a static goal system, as that is what would best fulfill its current goals.
It’s not dynamic. It isn’t our values in the sense of what we’d prefer right now. It’s what we’d prefer if we were smarter, faster, and more the people that we wished we were. In short, it’s what we’d end up with if it was dynamic.
It’s not dynamic. It isn’t our values in the sense of what we’d prefer right now. It’s what we’d prefer if we were smarter, faster, and more the people that we wished we were. In short, it’s what we’d end up with if it was dynamic.
Unless the FAI freezes our current evolutionary state, at least as regards our values, the result we would wind up with if CEV derivation were dynamic would be different from what we would end up with if it is just some extrapolation from what current humans want now.
Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.
Of course being effectively kept in a really really nice zoo by the FAI, we would not be experiencing any kind of NATURAL selection anymore, and evidence certainly suggests that our volition is to be taller, smarter, have bigger dicks and boobs, be blonder, tanner, and happier, all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures. Certainly CEV keeps us from wanting defective, crippled, and genetically diseased children, so this seems a fairly safe prediction.
It would seem, as defined, that CEV would have to be fixed at the value it was set at when the FAI was created. That no matter how smart, how tall, how blond, how curvaceous or how pudendous we became, we would still be constantly pruned back to the CEV of 2045 humans.
As to our values not even being optimal for our current environment, fuhgedaboud our future environment: it is pretty widely recognized that we are evolved for the hunter-gatherer world of 10,000 years ago, with familial groups of a few hundred, the necessity for survival of hostile reactions against outsiders, and systems which allow fear to distort our rational estimations of things in extreme ways.
I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us? Maybe it will push a few other species to become intelligent and social, and let them duke it out and have natural selection run with them. As long as they’re species that our CEV doesn’t feel too overly warm and fuzzy about, this shouldn’t be a problem. And certainly, as a human in the walled garden, I would LOVE to be studying what evolution does beyond what it has done to us, so this would seem like a fine and fun thing for the FAI to do to keep at least my part of the CEV entertained.
Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.
Type error. You can evaluate the optimality of actions in an environment with respect to values. Values being optimal with respect to an environment is not a thing that makes sense. Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that’s not very relevant to CEV.
all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures.
An FAI can be far more direct than that. Think something more along the lines of “doing surgery to make our bodies work the way we want them to” than “eugenics”.
Type error. … Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that’s not very relevant to CEV.
You are right about the assumptions I made and I tend to agree it is erroneous.
Your post helps me refine my concern about CEV. It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative, in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution and further self-modification.
Probably what drives my fear of CEV not reflecting MY values is dopey and low-probability. In my case it is an aspect of “Everything that comes from organized religion is automatically stupid.” To me, CEV and FAI are the modern dogma: man, discovering his natural god does not exist, decides he can build his own. An all-loving (Friendly), all-powerful (self-modifying AI after FOOM) father figure to take care of us (totally bound by our CEV).
Of course there could be real reasons that CEV will not work. Is there any kind of existence proof for a non-trivial CEV? For the most part, values such as “lying is wrong,” “stealing is wrong,” and “help your neighbors” all seem like simplifying abstractions that are abandoned by the more intelligent because they are simply not flexible enough. The essence of nation-to-nation conflict is covert, illegal competition between powerful government organizations that takes place in the virtual absence of all values other than “we must prevail.” I would presume a nation which refused to fight dirty at any level would be less likely to prevail, so such high-mindedness would have no place in the future, and therefore no place in the CEV. That is, if I, with normal-ish intelligence, can see that most values are a simple map for how humanity should interoperate to survive, and that the map is not the territory, then an extrapolation to a MUCH smarter version of us would likely remove all the simple landmarks on maps suited to our current distribution of IQ.
Then consider the value much of humanity places on accomplishment, and the understanding that coddling, keeping as pets, keeping safe, and protecting are at odds with accomplishment; get really, really smart about that, and a CEV is likely to not have much in it about protecting us, even from ourselves.
So perhaps the CEV is a very sparse thing indeed, requiring only that humanity, its successors or assigns, survive. Perhaps the FAI sits there not doing a whole hell of a lot that seems useful to us at our level of understanding, with its designers kicking it and wondering where they went wrong.
I guess what I’m really getting at is that perhaps our CEV, when you use as much intelligence as you can to extrapolate where our values go in the long, long run, gets to the same place the blind idiot was going all along: survival. I understand many here will say no, you are missing out on the bad vs. good things in our current life, how we can cheat death but keep our taste for chocolate. Their hypothesis is that CEV has them still cheating death and keeping their taste for chocolate. I am hypothesizing that CEV might well have the juggernaut of the evolution of intelligence, and not any of the individuals or even species that are parts of that evolution, as its central value. I am not saying I know it will; what I am saying is I don’t know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn’t crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.
Evolution may be run by a blind idiot, but it has gotten us this far. It is rare that something as obviously expensive as death would be kept in place for trivial reasons. Certainly the good news for those who hate death is that the evidence suggests lifespans are more valuable in smart species; I think we live about twice as long as trends across other species would suggest we should, so maybe the optimum continues to move in that direction. But considering how increased intelligence and understanding are usually the enemy of hatred, it seems at least a possibility that needs to be considered that CEV doesn’t even stop us from dying.
It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative, in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution and further self-modification.
CEV is supposed to value the same thing that humanity values, not value humanity itself. Since you and other humans value future slightly-nonhuman entities living worthwhile lives, CEV would assign value to them by extension.
Is there any kind of existence proof for a non-trivial CEV?
That’s kind of a tricky question. Humans don’t actually have utility functions, which is why the “coherent extrapolated” part is important. We don’t really know of a way to extract an underlying utility function from non-utility-maximizing agents, so I guess you could say that the answer is no. On the other hand, humans are often capable of noticing when it is pointed out to them that their choices contradict each other, and, even if they don’t actually change their behavior, can at least endorse some more consistent strategy, so it seems reasonable that a human, given enough intelligence, working memory, time to think, and something to point out inconsistencies, could come up with a consistent utility function that fits human preferences about as well as a utility function can. As far as I understand, that’s basically what CEV is.
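To illustrate just the noticing-inconsistency step (this is emphatically not the CEV algorithm, merely the kind of consistency check being described; the options and the one-preference-per-option structure are made up for the example):

    # Given stated pairwise preferences, flag an intransitive cycle
    # (A preferred to B, B to C, C to A), which no utility function can fit.
    def find_cycle(prefs):
        # prefs maps each option to the single option it is preferred over.
        for start in prefs:
            seen = [start]
            cur = start
            while cur in prefs:
                cur = prefs[cur]
                if cur in seen:
                    return seen[seen.index(cur):] + [cur]
                seen.append(cur)
        return None

    stated = {"chocolate": "vanilla", "vanilla": "mango", "mango": "chocolate"}
    print(find_cycle(stated))
    # -> ['chocolate', 'vanilla', 'mango', 'chocolate']

Extrapolation would be everything beyond that check: deciding which of the conflicting preferences the person would endorse on reflection.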
CEV is likely to not have much in it about protecting us, even from ourselves.
Do you want to die? No? Then humanity’s CEV would assign negative utility to you dying, so an AI maximizing it would protect you from dying.
I am not saying I know it will; what I am saying is I don’t know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn’t crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.
If some attempt to extract a CEV has a result that is horrible for us, that means that our method for computing the CEV was incorrect, not that CEV would be horrible to us. In the “what would a smarter version of me decide?” formulation, that smarter version of you is supposed to have the same values you do. That might be poorly defined since humans don’t have coherent values, but CEV is defined as that which it would be awesome from our perspective for a strong AI to maximize, and using the utility function that a smarter version of ourselves would come up with is a proposed method for determining it.
Criticisms of the form “an AI maximizing our CEV would do bad thing X” involve a misunderstanding of the CEV concept. Criticisms of the form “no one has unambiguously specified a method of computing our CEV that would be sure to work, or even gotten close to doing so” I agree with.
My thought on CEV not actually including much individual protection went something like this: I don’t want to die. I don’t want to live in a walled garden, taken care of as though I were a favored pet. Apply intelligence to that and my FAI does what for me? Mostly lets me be, since it is smart enough to realize that a policy of protecting my life winds up turning me into a favored pet. This is sort of the distinction: ask someone what they want and you might get stories of candy and leisure; look at them when they are happiest and you might see them doing meaningful and difficult work and living in a healthy manner. Apply high intelligence and you are unlikely to promote candy and leisure. Ultimately, I think humanity careening along on its very own planet as the peak species, creating intelligence in the universe where previously there was none, is very possibly as good as it can get for humanity, and I think it plausible the FAI would be smart enough to realize that; we might be surprised how little it seemed to interfere. I also think it is pretty hard, working part time, to predict what something 1000X smarter than I am will conclude about human values, so I hardly imagine what I am saying is powerfully convincing to anybody who doesn’t lean that way. I’m just explaining why or how an FAI could wind up doing almost nothing, i.e. how CEV could wind up being trivially empty in a way.
The other aspect of CEV being empty I had in mind was not our own internal contradictions, although that is a good point, but disagreement across humanity. Certainly we have seen broad ranges of valuations on human life and equality, and broadly different ideas about what respect should look like and what punishment should look like. These indicate to me that a human CEV, as opposed to a French CEV or even a Paris CEV, might well be quite sparse when designed to keep only what is reasonably common to all humanity and all potential humanity. If morality turns out to be more culturally determined than genetic, we could still have a CEV, but we would have to stop claiming it was human and admit it was just us, and that when we said FAI we meant friendly to us but unfriendly to you. The baby-eaters might turn out to be the Indonesians or the Inuit in this case.
I know how hard it is to reach consensus in a group of humans exceeding about 20; I’m just wondering how much a more rigorous process applied across billions is going to come up with.
we would still be constantly pruned back to the CEV of 2045 humans
Two connotational objections: 1) I don’t think that “constantly pruned back” is an appropriate metaphor for “getting everything you have ever desired”. The only thing that would prevent us from doing X would be the fact that, after reflection, we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are from the MINUS 2045 humans.
I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us?
Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such an estimate would most likely be a probability distribution over many different possibilities, not one specific outcome.
I’m dubious about the extrapolation—the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.
I am not sure how obvious the part about multiple possible futures is. Most likely, the AI would not be able to model all of them. However, without the AI, most of them wouldn’t happen anyway.
It’s like saying “if I don’t roll a die, I lose the chance of rolling a 6”, to which I add “and if you do roll the die, you still have a 5/6 probability of not rolling a 6”. Just to make it clear that by avoiding the “spontaneous” future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.
Just because the AI can only model something imperfectly, it does not mean that without the AI the future would be perfect, or even better on average than with the AI.
‘Unmediated’ may not have been quite the word to convey what I meant.
My impression is that CEV is permanently established very early in the AI’s history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.
What I mean is that if you looked at what people valued, and gave them the ability to self-modify, and somehow kept them from messing up and accidentally doing something that they didn’t want to do, you’d have something like CEV but dynamic. CEV is the end result of this.
With random mutations and natural selection, old values can disappear and new values can appear in a population. The success of the new values depends only on their differential ability to keep their carriers producing children, not on their “friendliness” to the old values of the parents, which is what FAI respecting CEV is meant to accomplish.
The Red Queen hypothesis is (my paraphrase for purposes of this post) that a lot of the evolution that takes place is not adaptation to the unliving environment but to the living, and most importantly also evolving, environment in which we live, on which we feed, and which does its damnedest to feed on us. Imagine a set of smart primates who have already done pretty well against dumber animals by evolving more complex vocal and gestural signalling, and larger neocortices, so that complex plans worthy of being communicated can be formulated and understood when communicated. But they lack the concept of handing off something they have with the expectation that they might get something they want even more in trade. THIS is essentially one of the hypotheses of Matt Ridley’s book “The Rational Optimist”: that homo sapiens is a born trader, while the other primates are not. Without trading, economies of scale and specialization do almost no good. With trading and economies of scale and specialization, a large energy investment in a super-hot brain and some wicked communication gear and skills really pays off.
Subspecies with the right mix of generosity, hypocrisy, selfishness, lust, power hunger, and self-righteousness will ultimately eat the lunch of their brethren and sistern who are too generous, or too greedy to cooperate, or too lustful to raise their children, or too complacent to seek out powerful mates. This is value drift, brought to you by the Red Queen.
I’ve noticed something: the MIRI blog RSS feed doesn’t update as a new article appears on the blog, but rather at certain times (two or three times a month?) it updates with the articles that have been published since the last update.
Stuff that a rational person would be better off not knowing. For example, if I live among people of religion X, and I find out something disgusting that the religion’s founder did, and whenever someone discusses the founder my face betrays my feelings of disgust, then knowledge of the founder’s misdeeds could harm me.
Stuff that a rational person would be better off not knowing.
Interesting. So, living in Soviet Russia a rational person would treat knowledge about GULAG, etc. as a basilisk? Or a rational person in Nazi Germany would actively avoid information about the Holocaust?
It depends on one’s own risk factors. It’s REALLY important to know about the Holocaust if you’re Jewish or have Jewish ancestry, but arguably safer, or at least more pleasant, not to if you don’t.
I think the moral question (as opposed to the practical safety question) of “is it better to know a dark truth or not” will come down to whether or not you can effectively influence the world after knowing it. You can categorize bad things into avoidable/changeable and unavoidable/unchangeable, and (depending on how much you value truth in general) knowing about an unavoidable bad thing will only make you less happy without making the world a better place.
Unfortunately, it’s pretty hard to tell whether you can do anything about a bad thing without learning what it is.
It’s REALLY important to know about the Holocaust if you’re Jewish or have Jewish ancestry, but arguably safer, or at least more pleasant, not to if you don’t.
If anything, my impression is that knowing about the Holocaust has made my mother significantly less realistic with respect to assessing potential threats faced by Jews in the present day.
On the other hand, to the extent that it represents a general lesson about human behavior, that understanding might end up being valuable for anyone. Being non-Jewish may actually make it easier to properly generalize the principles rather than thinking of it in terms of unique identity politics.
It’s worth knowing that societies can just start targeting people for no reason. It can be hard to have a sense of proportion about risks.
I suspect the best strategy is to become such a distinguished person that more than one country will welcome you, but the details are left as an exercise for the student.
This is possible, but I meant knowing about the Holocaust as it’s ongoing, like Lumifer’s example of knowing about gulags while living in Soviet Russia.
If anything, my impression is that knowing about the Holocaust has made my mother significantly less realistic with respect to assessing potential threats faced by Jews in the present day.
First they came for the communists, and I did not speak out— because I was not a communist; Then they came for the socialists, and I did not speak out— because I was not a socialist; Then they came for the trade unionists, and I did not speak out— because I was not a trade unionist; Then they came for the Jews, and I did not speak out— because I was not a Jew; Then they came for me— and there was no one left to speak out for me.
It’s REALLY important to know about the Holocaust if you’re Jewish or have Jewish ancestry, but arguably safer, or at least more pleasant, not to if you don’t.
This person, a German Protestant minister, followed your advice, did he not?
Good point. I totally covered every base with that one line of advice, and meant it to apply to all people in all situations.
More seriously, my advice very clearly was a subset of the more general advice: Be fucking wary of angering powerful entities. He clearly did NOT follow that advice.
It is unclear what the consequences and side effects of not knowing the specific evidence will be. And on the meta level: what will be the consequences of modifying your cognitive algorithms to avoid the paths that seem to lead to such evidence?
Depending on all these specific details, it may be good or bad. Human imperfection makes it impossible to evaluate. And actually not knowing the specific evidence makes it impossible again. So… the question is analogous to: “If I am too stupid to understand the question, should I answer ‘yes’, or should I answer ‘no’?” (Meaning: yes = avoid the evidence, no = don’t avoid the evidence.)
Scientists recently discovered, and I am not making this up, that consuming a drink containing grain alcohol (like Tucker Max’s “Tucker Death Mix”) raised both free and total testosterone for five hours post workout, whereas those who did not consume the frat boy rapist punch had their test levels fall below baseline. Happily, the alcohol had no effect on cortisol or estradiol levels, so the dudes in the study were just floating in a sea of dying brain cells and testosterone-fueled awesomeness (Vingren).
How much is enough to get the nearly 100% boost in testosterone post-workout that science has recorded? It depends on your bodyweight. For matters of convenience and exigency, I decided to make a little chart for you guys to give you the proper dosage to spike your test levels properly, using the study’s 1.09 g/kg bodyweight ratio, organized by weight class, as this is after all an article aimed at serious lifters. For the Oly guys and IPF/USAPL (/sadfaceissad) among you, these are the weight classes that existed before the IOC decided that you guys couldn’t hang with the old school lifters.
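Since the chart itself didn’t survive into this text, here is a sketch of the arithmetic it would encode (the grams-per-kilogram reading of the study’s ratio is an inference from the half-liter-of-vodka description below, the bodyweights are just examples, and none of this is dosing advice):

    # Ethanol dose at 1.09 g/kg bodyweight, plus the equivalent volume
    # of 40% ABV vodka (ethanol density is about 0.789 g/mL).
    ETHANOL_DENSITY = 0.789  # g/mL

    def ethanol_dose(bodyweight_kg, ratio=1.09, abv=0.40):
        grams = ratio * bodyweight_kg
        ml_vodka = grams / ETHANOL_DENSITY / abv
        return grams, ml_vodka

    for kg in (56, 77, 94, 105, 125):  # example bodyweights in kg
        g, ml = ethanol_dose(kg)
        print("%d kg: %.0f g ethanol, roughly %.0f mL of vodka" % (kg, g, ml))

For a 100 kg lifter this works out to about 109 g of ethanol, or roughly 345 mL of vodka, which squares with the half-liter-in-ten-minutes description below.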
How the fucking guys in the study made it home is a mystery- they sure as hell didn’t drive, and if they did, they didn’t live, because they slammed that shit in 10 minutes. I can drink with the best of them, but I’ve never faced half a liter of vodka in ten minutes- that’s some Decline of Western Civilization style drinking, and I’m not sure I can hang with the likes of 1980s hair metal bands.
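If you want to sanity-check the arithmetic behind that chart (not to actually drink this much), here’s a minimal back-of-the-envelope sketch. The 1.09 g/kg dose is from the study as quoted; the 40% ABV, the 0.789 g/mL ethanol density, and the weight classes are my own assumptions and placeholders, not the article’s actual chart:

```python
# Back-of-the-envelope: how much vodka delivers the quoted ethanol dose?
# Assumptions (mine, not the study's): 40% ABV vodka, ethanol density 0.789 g/mL.
DOSE_G_PER_KG = 1.09            # ethanol dose from the study, grams per kg bodyweight
ABV = 0.40                      # alcohol by volume of typical vodka
ETHANOL_DENSITY_G_PER_ML = 0.789

def vodka_ml(bodyweight_kg: float) -> float:
    """Millilitres of vodka containing DOSE_G_PER_KG grams of ethanol per kg."""
    ethanol_g = DOSE_G_PER_KG * bodyweight_kg
    return ethanol_g / (ABV * ETHANOL_DENSITY_G_PER_ML)

# Illustrative weight classes (kg); placeholders, not the article's chart.
for kg in (56, 69, 85, 105, 145):
    print(f"{kg:>3} kg lifter: ~{vodka_ml(kg):.0f} mL of vodka")
```

A 145 kg lifter comes out to almost exactly half a litre, which is consistent with the half-a-liter-of-vodka figure above (and is also why the dose has to be grams, not milligrams, per kilogram).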
I’m still not going to drink copious amounts of alcohol after a workout...
A glass of wine (or two (or three)) or a beer after a workout has noticeably improved how I feel the next day. I didn’t believe this post either, but it appears to have panned out.
Object-level response: To the Stars. Meta level: check the monthly media thread archives and/or HPMOR’s author notes. They have lots of good suggestions and in-depth reviews.
I’m also one-third of the way into Amends, or Truth and Reconciliation, which is a decent look at how Harry Potter characters would logically react to the end of the Second Wizarding War. So far no idiot balls and pretty good characterization.
Rationalising Death may be better if you haven’t read Death Note; it’s pretty good about explaining everything. As someone familiar with Death Note my feeling so far has been that Rationalising Death hasn’t diverged enough; it sometimes feels like just rehashing the original. Not always, certainly, and I’m overall enjoying it, but that’s seemed like the biggest flaw to me so far (admittedly, the author says divergence will increase as it goes along, and there are signs of that pattern).
I took this recommendation, and hated it. Got as far as the thing with Jayne’s mother before I accepted that it wasn’t going to get any better.
If you’re some random person, wondering whether you should listen to me or Alsadius, I recommend the following test: read the first chapter. If you like chapter one you’ll probably like the rest of it, and if you don’t, you won’t.
I agree with this test. True of many stories, really. I’m a fan of the plot, which only really comes together 2/3 of the way through, but if you’re not a fan of the banter, it’s not worth it.
I started reading it. Harry isn’t Harry. He’s constantly spouting “Charming” and “Snarky” lines at every character, and is inexplicably an expert at piloting and knows everything about the Firefly-verse after a time-skip of 2 years. If you hadn’t told me he was Harry Potter I would’ve guessed he was Pham Nuwen. There are also tons of call-backs to past Firefly events and lines of dialogue, which shows pretty weak imagination on the part of the author. A reference is one thing, but you don’t make it by characters constantly going “Hey, remember that one time when we did X?” “Hey, remember your wife?”.
The request was for a HPMOR substitute. I figured that a Harry-like Harry wasn’t exactly a necessity. As I said in an above comment, this author uses canon as a loose suggestion.
Not really. You can get by without Potter knowledge (as usual, this author mangles it a fair bit anyway), but the plot is heavily tied into that of Firefly/Serenity, and the Firefly characters are more prominent. That said, feel free to read his Potter-only stuff instead; I haven’t gone through his whole oeuvre, but everything I’ve read has been hilarious and well-written.
I think I want to buy a new laptop computer. Can anyone here provide advice, or suggestions on where to look?
The laptop I want to replace is a Dell Latitude D620. Its main issues are weight, heat production, slowness (though probably in part from software issues), inability to sleep or hibernate (buying and installing a new copy of XP might fix this), lack of an HDMI port, and deteriorated battery life. I briefly tried an Inspiron i14z-4000sLV, but it was still kind of slow, and trying to use Windows 8 without a touchscreen was annoying.
I remember reading that it’s unsafe to move or jostle a laptop with a magnetic hard drive while it’s running, because of the moving parts. Based on that, it seems like it’s best to get one with only a solid-state drive and no magnetic drive. Is that accurate?
I’m somewhat ambivalent about how to trade off power against heat and weight, or against cost of replacement if it’s lost or damaged.
Not counting external storage, I’m using about 25 GB of the D620’s 38 GB, plus 25 GB (not counting software) on the family desktop PC.
(After ordering the XPS, I realized that it doesn’t have a removable battery, which seems like a longevity issue; but it seems likely that that’s standard for devices of its weight class.)
Based on that, it seems like it’s best to get one with only a solid-state drive and no magnetic drive. Is that accurate?
Not necessarily. Most laptops nowadays are equipped with anti-shock hard drive mounts, and the hard drives are specially designed to be resistant to shock. The advantage of an SSD is speed, not reliability.
This reliability report (with this caveat) indicates that Samsung is the most reliable brand on the market for now. I’ve always considered Lenovo and ASUS to be high quality, with ASUS generally having cheaper and more powerful computers (and a trade-off in actually figuring out which one you want; that website is terrible).
The advantage of an SSD is speed, not reliability.
I would expect an SSD to be MUCH more reliable than a hard drive.
SSDs are solid-state devices with no moving parts. Hard drives are mechanical devices with platters rapidly rotating at microscopic tolerances.
So now that I’ve declared my prior let’s see if there’s data… :-)
“From the data I’ve seen, client SSD annual failure rates under warranty tend to be around 1.5%, while HDDs are near 5%,” Chien said. (where Chien is “an SSD and storage analyst with IHS’s Electronics & Media division”) Source
Reliability for SSDs is better than for HDDs. However, they aren’t so much more reliable that it alters best practices for important data keeping: at least two backups, and one off-site.
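To put those annual rates in perspective, a quick sketch (assuming a constant, independent failure rate from year to year, which is a simplification):

```python
# Chance of at least one drive failure over several years, given a constant
# annual failure rate (AFR). Rates are the ones quoted above.
def p_any_failure(afr: float, years: int) -> float:
    return 1 - (1 - afr) ** years

for label, afr in (("SSD", 0.015), ("HDD", 0.05)):
    print(f"{label}: {p_any_failure(afr, 5):.0%} chance of at least one failure in 5 years")
```

That comes out to roughly 7% for the SSD and 23% for the HDD over five years: either number is high enough that the backups, not the choice of drive, do the real work of protecting your data.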
they aren’t so much more reliable that it alters best practices for important data keeping
Oh, certainly.
Safety of your data involves considerably more than the reliability of your storage devices. SSDs won’t help you if your laptop gets stolen or if, say, your power supply goes berserk and fries everything within reach.
Thanks for replying. I haven’t looked at your link yet, but it seems like there’d be limits to how much shock protection could be fit in an ultrathin laptop, and it’d be hard to find out how good it is for specific models. (And the speed advantage seems like enough reason to want an SSD in any case.)
General comments: SSDs are generally faster than magnetic drives, but often fail much sooner.
If you’re not positive you want to replace it altogether: You might be able to fix your heat/slowness issues just by taking a can of compressed air to it. And you could probably buy a new battery. Replacing it might still be a better proposition overall, though...
Source on SSDs failing sooner? I thought (or assumed) it was the opposite. A quick Google search turns up the headline “SSD Annual Failure Rates Around 1.5%, HDDs About 5%”.
Looking further, though, I also see: “An SSD failure typically goes like this: One minute it’s working, the next second it’s bricked.”. The page goes on to say that there’s a service that can reliably recover the data from a dead drive, but that seems like a privacy concern (if everything on the drive weren’t logged by the NSA to begin with).
On the pro-SSD side, though, I try to keep anything important online or on an external drive anyway (for easier moving between devices). And I really like the idea of a laptop I can casually carry around without worrying about platters and heads.
Thanks for the suggestions; I may try the Reddit link later. (Edit: posted a thread here.)
If you are backing up your data responsibly, the SSD failure isn’t as much of an issue. And if you aren’t backing up your data, then you need to take care of that before worrying about storage failure.
This story, where they treated and apparently cured someone’s cancer, by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.
How it was done: removing T cells (the cells which kill virus-infected body cells directly, unlike B cells, which secrete antibody proteins) and using replication-incapable viruses to insert a chimeric gene composed of part of a mouse antibody against human B-cell antigens, part of the human T-cell receptor that activates the T cell when it binds to something, and an extra activation domain to make the T-cell activation and proliferation particularly strong. The cells were reinjected, proliferated over 1000-fold, killed off all the cancerous leukemia cells that could be detected in most patients, and are sticking around as a permanent part of the patients’ immune systems. Relapse rates have been pretty low (but not zero).
This type of cancer (B-cell-originating leukemia) is extraordinarily well suited to this kind of intervention, for two reasons. One, there is an antigen on B cells and B-cell-derived cancers that can be targeted without destroying anything else important in the body other than normal B cells. Two, since the modded T cells destroy both the normal B cells carrying this antigen and the cancerous ones, the patients have a permanent lack of antibodies after treatment, which makes sure their immune system has a hard time reacting against the modified receptors on the modded T cells (something that has been a problem in other studies). Fortunately, people can live without B cells if they are careful; it’s living without T cells that you cannot do. They also suspect that pre-treating with chemotherapy majorly helped these immune cells go after the weakened cancer cell population.
You can repeat this with T-cells tuned against any protein you want, but you had better watch out for autoimmune effects or the patient’s immune system going after the chimeric protein you add and eliminating the modded population. And watch out ten years down the line for any T-cell originating lymphomas derived from wonky viral insertion sites in the modded cells—though these days there are ‘gentler’ viral agents than in the old days with a far lower rate of such problems, and CRISPR might make modding cells in a dish even more reliable soon.
Another thing in the toolkit. No silver bullets. Still pretty darn cool.
Loaded Language is a term coined by Dr. Robert Jay Lifton, a psychiatrist who did extensive studies on the thought reform techniques used by the communists on Chinese prisoners. Of all the cults in existence today, Scientology has one of the most complex systems of loaded language. If an outsider were to hear two Scientologists conversing, they probably wouldn’t be able to understand what was being said. Loaded language is words or catch phrases that short-circuit a person’s ability to think. For instance, all information that is opposed to Scientology, such as what I am writing here, is labelled by Scientologists as “entheta” (enturbulated theta; “enturbulated” meaning chaotic, confused, and “theta” being the Scientology term for spirit). Thus, if a Scientologist is confronted with some information that opposes Scientology, the word “entheta” immediately comes to mind, and he or she will not examine the information and think critically about it, because the word “entheta” has short-circuited the ability to do so. This is just one example of many, many Scientology terms.
Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.
The next step is TR-0 “bullbaiting” where the partner says things to the indoctrinee to get them to react. This is called finding a person’s “buttons”. When the person does react, he is told “flunk” and what he did to flunk and then the phrase that got him to react is repeated until the person no longer reacts. This is very effective as a behavior control method to get the person to blank out when someone starts saying negative things about Scientology.
Hm, this actually sounds like it could be useful...
I wonder if it would be valuable to get partway in to Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.
I wonder if it would be valuable to get partway in to Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.
Part of the Scientology program involves sharing personal secrets. If you quit, they can use those against you. Scientology is set up in a way that makes it hard to quit.
A lot of people still do, though. Last time I looked into this, the retention rate (reckoned between the first serious [i.e. paid] Scientology courses and active participation a couple years later) was about 10%.
It’s not a question of whether they do leave, but whether they do come out ahead.
Scientology courses aren’t cheap. If you are going to invest money into training, I would prefer to buy training from an organisation that makes leaving easy instead of making it painful.
Oh, I’m pretty confident they don’t. But if you had strong reasons for joining and leaving Scientology other than what Scientologists euphemistically call “tech”, then in the face of those base rates it seems unlikely to me that they’d manage to suck you in for real.
There are probably safer places to see groupthink in action, though.
Part of the Scientology program involves sharing personal secrets.
More precisely, sharing personal secrets while connected to an amateur lie detector. And the secrets are documented on paper and stored in archives of the organization. It’s optimized for blackmailing former members.
Motivated cognition is pretty much the only kind of cognition people do. It seems epistemically healthy to sample cognition stemming from diverse motivations.
Observation: game theory is not uniquely human, and does not inherently cater to important human values.
Immediate consequence: game theory, taken to extremes already found in human history, is inhuman.
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
Conjecture: if you attempt to optimize by taking your own use of game theory and similar arts to similar extremes, you will become a monster of a similar type.
Observation: refusing to use game theory at all results in a strictly worse life than otherwise, and using it more often, more intensely, and with less puny human mercy may result in a better life for you alone.
Conjecture: this really, really looks like the scary and horrifying spawn of a Red Queen race, defecting on PD, and being a jerk in the style of Cthulhu.
Sorry, how did you go from “non-human agents use X” (a statement about commonality) to “X is inhuman” (a value judgement) to “if you use X you become a monster” (an even stronger value judgement), to “being a jerk in the style of Cthulhu” (!!!???).
Does this then mean you think using eyesight is monstrous because cephalopods also have eyes they evolved independently?
Or that maximizing functions is a bad idea because ants have a different function than humans?
Nonhuman agents use X → X does not necessarily (and pretty likely does not) preserve human values → your overuse of X will cause you not to preserve human values. I use “being a jerk in the style of Cthulhu” to mean being a jerk incidentally. Eyesight is not a means of interacting with people, and maximization is not a bad thing if you maximize for the right things, which game theory does not necessarily do.
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
I suspect all economics is inhuman. I suspect that any complex economy that connects millions or billions of people is going to be incomprehensible and inhuman. By far the best explanation I’ve heard of this thought is by Cosma Shalizi.
The key bit here is the conclusion:
There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.
But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go.
A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market.
This is a great way to express it. I was thinking about something similar, but could not express it like this.
The essence of the problem is, all “systems of human interaction” are not humans. A market is not a human. An election is not a human. An organization is not a human. Etc. Complaining that we are governed by non-humans is essentially complaining that there is more than one human, and that the interaction between humans is not itself a human. Yes, it is true. Yes, it can (and probably will) have horrible consequences. It just does not depend on any specific school of economics, or anything like this.
Not uniquely human does not imply inhuman. Lungs are not uniquely human, yet hardly inhuman.
Generally, using loaded, non-factual words like “inhuman” and “monster” and “Cthulhu” and “horrifying” and “puny” in a pseudo-logical format is worthy of a preacher exhorting illiterates. But is it helpful here? I’d like to think it isn’t, and yet I’d rather discuss game theory in a visible thread than downvote your post.
“Inhuman” has strong connotations of inimical to human values—your argument looks different if it starts with something like “game theory is a non-human—it’s a simplified version of some aspects of human behavior”. In that case, altruism is non-human in the same sense.
I guess I’m mostly reacting to RAND and its ilk, having read the article about Schelling’s book (which I intend to buy), and am thinking of market failures, as well.
OK Mr Bayeslisk, I am one-boxing you. I am upvoting this post now, knowing that you predicted I would upvote it and intended all along to include or add some links to the above post, so I don’t have to do a lot of extra work to figure out what RAND is and what book you are talking about.
What you’re referring to is a problem I’ve been thinking about and chipping away at for some time; I’ve even had some discussions about it here and people have generally been receptive. Maybe the reason you’re being downvoted is that you’re using the word ‘human’ to mean ‘good’.
The core issue is that humans have empathy, and by this we mean that other people’s utility functions matter to us. More precisely, our perception of other people’s utility forms a part of our utility which is conditionally independent of the direct benefits to us.
Our empathy not only extends to other humans, but also animals and perhaps even robots.
So what are examples of human beings who lack empathy? Lacking empathy is basically the definition of psychopathy. And, indeed, some psychopaths (not all, but some) have been violent criminals who e.g. killed babies for money, tortured people for amusement, etc. etc.
So you’re essentially right that a game theory where the players do not have models of each other’s utility functions shows aspects of psychopathy and ‘inhumanity’.
But that doesn’t mean game theory is wrong or ‘inhuman’! All it means is that you’re missing the ‘empathy’ ingredient. It also means that it would not be a good idea to build an AI without empathy. That’s exactly what CEV attempts to solve. CEV is basically a crude attempt at trying to instill empathy in a machine.
Yes, that was what I was getting at. Like I said elsewhere—game theory is not evil. It’s just horrifyingly neutral. I am not using inhuman as bad; I am using inhuman as unfriendly.
Game theory is about strategies, not about values. It tells you which strategy you should use if your goal is to maximize X. It does not tell you what X is. (Although some X’s, such as survival, are instrumental goals for many different terminal goals, so they will be supported by many strategies.)
OK, I think I was misunderstood and also tired and phrased things poorly. Game theory itself is not a bad thing; it is somewhat like a knife, or a nuke. It has no intrinsic morality, but the things it seems to tend to be used for, for several reasons, wind up being things that eject negative externalities like crazy.
Yes, but this seems to be most egregious when you advocate letting millions of people starve because the precious Market might be upset.
Besides the fact that maximizing a non-Friendly function leads to horrible results (whether the system being maximized is the Market, the Party, the Church, or… whatever), what exactly are you trying to say? Do you think that markets create more horrible results than those other options? Do you have any specific evidence for that? In that case it would be probably better to discuss the specific thing, before moving to a wide generalization.
I have no idea how the Holodomor is germane to this discussion.
The observation being made, I believe, is that the most prominent examples in the 20th century of mass death due to famine were caused by economic and political systems very far from the Austrian school economics. There’s a longish list of mass starvation due to Communist governments.
Is there an example of Austrian economists giving advice that led to a major famine, or that would have led to famine? I cannot offhand think of an example of anybody advocating “letting millions of people starve because the precious Market might be upset.”
Game theory is not like calculus or evolutionary theory—something any alien race smart enough to develop space travel is likely to formulate. It does represent human values.
You solve games by having solution criteria. Unfortunately, for any reasonable list of solution criteria you will always be able to find games where the result doesn’t seem to make sense. Also, there is no set of obviously correct and complete solution concepts. Consider the following game:
Two rational people simultaneously and secretly write down a real number in [0,100]. The person who writes down the highest number gets a payoff of zero, and the person who writes down the lowest number gets that number as his payoff. If there is a tie they each get zero. What happens?
The only “Nash equilibrium” (the most important solution concept in all of game theory) is for both players to write down 0, but this is a crazy result because picking 0 is weakly dominated by picking any other number (except 100).
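If you want to check that weak-dominance claim mechanically, here is a brute-force sketch on an integer-discretized version of the game. The discretization is my simplification; the uniqueness of the Nash equilibrium is a statement about the continuous game, but the dominance pattern survives on the grid:

```python
# Verify, on the integers 0..100, that writing 0 is weakly dominated by
# every other number except 100 in the game described above.
N = 100

def payoff(a: int, b: int) -> int:
    """Player 1's payoff when the players write a and b."""
    if a < b:
        return a   # strictly lowest number is paid its own value
    return 0       # strictly highest number, or a tie, pays zero

def weakly_dominates(x: int, y: int) -> bool:
    """x is never worse than y against any opponent play, and sometimes strictly better."""
    opponents = range(N + 1)
    never_worse = all(payoff(x, b) >= payoff(y, b) for b in opponents)
    sometimes_better = any(payoff(x, b) > payoff(y, b) for b in opponents)
    return never_worse and sometimes_better

dominators = [x for x in range(N + 1) if weakly_dominates(x, 0)]
print(dominators == list(range(1, N)))  # True: exactly 1..99 dominate 0; 100 does not
```

(100 fails to dominate because it is guaranteed a payoff of zero against everything, exactly like 0 itself.)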
Game theory also has trouble solving many games where (a) Player Two only gets to move if Player One does a certain thing, (b) Player One’s strategy is determined by what he expects Player Two would do if Player Two gets to move, and (c) in equilibrium Player Two never moves.
Are you agreeing or disagreeing with “the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did”?
It depends on what you mean by “might” and “discover” (as opposed to invent). I predict that smart aliens’ theories of physics, chemistry, and evolution would be much more similar to ours than their theories of how rational people play games would be.
How so? Game theory basically studies interactions between two (or more) agents which can make choices the outcome of which depends on what the other agent does. You can use game theory to model interaction between two pieces of software, for example.
I still don’t see what all this has to do with human values.
I am talking about game theory as a field of inquiry. You’re talking about the current state of the art in this field and pointing out that it has unsolved issues. So? Physics has unsolved issues, too.
I still don’t see what all this has to do with human values.
I also don’t understand what it means for game theory to “be solved”. If you mean that in certain specific situations you don’t get an answer, that’s true for physics as well.
Game theory would be solved if there were a set of reasonable criteria which, if applied to every possible game of rational players, would tell you what the players would do.
Game theory would be solved if there were a set of reasonable criteria which, if applied to every possible game of rational players, would tell you what the players would do.
To continue with physics: physics would be solved if there were a set of reasonable criteria which, if applied to every possible interaction of particles, would tell you what the particles would do.
Consider a situation in which, using physics, you could prove that (1) X won’t happen, and (2) X will happen. If this situation existed, physics couldn’t be solved; but my understanding of science is that such a situation is unlikely to exist. Alas, this kind of situation does come up in game theory.
Whether you get an answer depends on the criteria you choose, but these criteria must have arbitrariness in them even for rational people. Consider the solution concept “never play a weakly dominated strategy.” This is neither right nor wrong but an arbitrary criterion that reflects human values.
Saying “the game theory solution is A,Y” is closer to “this picture is pretty” than “the electron will...”
Also, assuming someone is rational and wants to maximize his payoff isn’t enough to fully specify him, and consequently you need to bring in human values to figure out how this person will behave.
You seem to be talking about forecasting human behavior and giving advice to humans about how to behave.
That, of course, depends on human values. But that is related to game theory in the same way engineering is related to mathematics. If you are building a bridge you need to know the properties of materials you’re building it out of. Doesn’t change the equations, though.
You know that a race of aliens is rational. Do you need to know more about their values to predict how they will build bridges? Yes. Do you need to know more about their values to predict how they will play games? Yes.
Game theory is (basically) the study of how rational people behave. Unfortunately, there will always exist relatively simple games for which you cannot use the tools of game theory to determine how players will behave.
Game theory is (basically) the study of how rational people behave.
Ah. We have a terminology difference. I defined my understanding of game theory a bit upthread and it’s not about people at all. For example, consider software agents operating in a network with distributed resources and untrusted counterparties.
I do not feel up to defending myself against multiple relatively hostile people. My apologies for having a belief that does not correspond to the prevailing LW memeplex. Kindly leave me alone to be wrong.
Smallpox Eradication Day deserves some music:
-- Leslie Fish, The Ballad of Smallpox Gone
The virus now exists only as samples in two freezers in two labs (known to the scientific community). These days I think even that is overkill for research purposes for this pathogen, what with the genome sequenced and the ability to synthesize arbitrary sequences artificially. If you absolutely must have part of it for research, make that piece again from scratch. Consign the remaining whole, infectious, replication-competent particles to the furnace where they belong.
EDIT: I found a paper in which smallpox DNA was extracted, and virus particles observed via EM, from a 50-year-old fixed tissue sample in a pathology lab that was not part of one of the aforementioned collections. No word in the paper on whether it was potentially infectious or just detectable levels of nucleic acids and particles. These things could be more complicated to destroy with 100% certainty than we thought...
With any luck, polio will be next.
At risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 for the MIRI 2013 Winter Fundraiser. Since I’m a “new large donor”, this donation will be matched 3:1, netting a cool $20,000 for MIRI.
I have decided to post this because of “Why our Kind Cannot Cooperate”. I have been convinced that people donating should publicly brag about it to attract other donors, instead of remaining silent about their donations, which leads to a false impression of the amount of support MIRI has.
This post and reading “why our kind cannot cooperate” kicked me off my ass to donate. Thanks Tuxedage for posting.
Would anyone else be interested in pooling donations to take advantage of the 3:1 deal?
I’d be interested, but only for the small sum of $100. Did anybody else take you up on that offer? Of course I’d like to verify the pool person’s identity before transferring money.
You sir, are awesome.
Interesting.
It certainly seems to make sense for the sake of the cause for (especially large, well-informed) donors to make their donations public. The only downside seems to be a potentially conflicting signal on behalf of the giver.
I’m not sure this is true. Doesn’t MIRI publish its total receipts? Don’t most organizations that ask for donations?
Growing up Evangelical, I was taught that we should give secretly to charities (including, mostly, the church).
I wonder why? The official Sunday School answer is so that you remain humble as the giver, etc. I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?
Total receipts may not be representative. There’s a difference between MIRI getting funding from one person with a lot of money and large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW do care about donating, rather than just a few rich people like Peter Thiel or Jaan Tallinn.
Also, I suspect scope neglect is at play: it’s difficult, on an emotional level, to tell the difference between $1 million worth of donations, ten million, or a hundred million. Seeing each of the donations that add up to that amount may help.
Yes, because it would show how many people donated. Number of people = power, at least in our brains.
The difference between one person donating $100,000, versus one person donating $50,000 and ten people donating $5,000 each, is that in the latter case your team has eleven people. It is the same amount of money, but emotionally it feels better. It probably has other advantages too (such as less dependence on the whims of a single person), but maybe I am just rationalizing here.
There may not be anything to explain: the early Christian church grew very slowly. Perhaps secret almsgiving simply isn’t a good idea.
Hm. Possibly. Though secret offerings do still seem to be a rather popular convention in churches today.
I would imagine popular interpretations of scriptures on giving would evolve based on the goals of the church (to get $$$), kept in check only by needing to be believable enough to the member congregations.
Tithing seems to work for the church, so lots of churches resurrect it from the OT and really shaky exegesis and make it a part of the rules. If tithing didn’t work for the church, they could easily make it go away in the same way they get rid of tons of outdated stuff from the OT (and the NT).
Secret offerings seems similar to me. I’d imagine they could make the commands for secret giving go away with some simple hermeneutical waves of the hand if it didn’t benefit them.
This gives the church an information advantage. Information is power. It gives them the opportunity to make it seem like everyone is donating less than their neighbors.
or that “Christians” donate a lot when it’s really just a few of them.
Ah. So the leaders can give the ongoing message to “give generously” to a group and, as long as the giving data is kept in secret and no one ever speaks to anyone else about how much they gave, then each member will feel compelled to continue to give more in an effort to (a) “please God” and (b) gain favor in the eyes of the leaders by keeping up with, or outgiving, the other members. Is this what you are saying? If not can you elaborate?
Look at Mormons. They have a rule that you have to donate 10% of your income. If you don’t, then you aren’t pleasing God, and God might punish you.
In reality the average Mormon doesn’t donate 10%, but might feel guilty for not doing so. If someone who donates 7% knew that they were donating above average, they would feel less guilty about not meeting the goal of donating 10%.
Sure, but why 10%? Why not 15%? Or 20%?
It is possible that they are setting the bar too low. You might have many people who would have given 30% had the command been for 30% rather than 10%.
Yes, it is. Choosing that particular number might not be optimal. But there is a cost to setting the number too high: if people don’t think they can reach the standard, they might not even try.
Right.
I’d guess 10% is not an arbitrary number, but rather a sort of market equilibrium that happens to be supportable by a certain interpretation of OT scripture. It might just as well have been 3% or 7% or 12%, as these numbers are all pretty significant in the OT, and could have been used by leadership to impose that percentage on laypeople.
In any case, in my experience within the church, there are tithes… AND then there are offerings, which include numerous different causes to give to on any given Sunday. It was often stated that these causes (building projects, missions outreaches, etc.) were in addition to your tithe.
It is funny to me… it is almost like the reverse of a compensation plan you’d build for a team of commissioned salespeople. Instead of optimizing the plan to incentivize sales performance by motivating your salespeople to sell, the church may have evolved its doctrines and practices on giving to optimize for collecting revenue by motivating its members to give. Ha.
This is of course no argument against anything substantive you’re saying, but while the numbers 3, 7, and 12 are certainly all significant in the OT, the idea of percentage surely wasn’t. I can see 1/3, or 1/7, or 1/12, though.
Good point. Though, from my recall, there isn’t much basis in the OT for the modern-day concept of tithing at all, percentage or otherwise. Christianity points to verses about giving 1/10th of your crops to the priest as the basis.
If they really wanted to change the rules and up it to 1/7th, or 12%, or anything they want, they could come up with some new basis for the math using fancy hermeneutics.
This is sort of what is happening right now with homosexuality. Many churches are changing their views. They are justifying that by reinterpreting the verses they’ve used to condemn it in the past.
In fact, you can pretty much get the Bible to support any position or far-fetched belief you’d like. You only need a few verses… and it’s a big book.
This is one of my favorites.
http://en.wikipedia.org/wiki/Tithe
Sounds like somebody is trying to purchase status...
We should encourage people to purchase status when that purchase involves doing things we want or giving money to causes we like. Unless you prefer traditional schemes for status assignment like height, handsomeness, ability to throw a ball, and mass murder.
See my comment on the “In Praise of Tribes that Pretend to Try” thread
If donating to purchase status is accepted and encouraged, it risks becoming the main motive behind donations. This in turn creates perverse incentives for the recipient of such donations.
I think it’s already the main psychological motivation behind most donations. I think it’s better to harness that than not to.
It sounds to me like somebody is purchasing utilons, using themselves as an example to get other people to also purchase utilons, and incidentally deriving a small amount of well deserved status from the process.
This isn’t the most parsimonious explanation for that behaviour.
PSA: If you want to get store-bought food (as opposed to eating out all the time or eating Soylent), but you don’t want to have to go shopping all the time, check to see if there is a grocery delivery service in your area. At least where I live, the delivery fee is far outbalanced by the benefit of almost no shopping time, slightly cheaper food, and decreased cognitive load (I can just copy my previous order, and tweak it as desired).
This makes me wonder: What are some simple ways to save quite some time that the average person does not think of?
Stop watching TV.
Sleep enough.
Look at the boring advice repository.
Move close to where you work (even if it means you have to live in a smaller place).
If you don’t have a car, study in the bus/train or take the commute as a bicycling exercise if the distance is relatively short and you can take a shower.
Possibly cooking very large meals and saving the rest. If you want to save money by cooking from scratch rather than buying prepared food or eating out, it can help to prepare several meals’ worth at a time.
Pay for an online assistant. It makes you feel awkward but I hear it’s quite effective.
Dave Asprey claims that you can get by fine on five hours of sleep if you optimize it to spend as much time in REM and delta sleep as possible. This appeals to me more than polyphasic sleep does. Link
Also, I was intrigued when xkcd mentioned the 28-hour day, but I don’t know of anyone who has maintained that schedule.
Dave Asprey claims he can do well on 5 hours of sleep, and then makes the further claim that any other adult (he recommends not trying serious sleep reduction until you’re past 23) can also do well on 5 hours. To judge by a quick look at the comments, rather few of his readers are trying this, let alone succeeding at it.
Do you have any information about whether Asprey’s results generalize?
I am under the impression that nearly anybody who talks about sleep is guilty of Generalizing from One Example.
Not really.
There are by now some quite extensive studies of the amount of required or healthy sleep. Sleep is roughly normally distributed between 5 and 9 hours, and for some of those getting 5 or fewer hours of sleep this appears to be healthy:
Jane E. Ferrie, Martin J. Shipley, Francesco P. Cappuccio, Eric Brunner, Michelle A. Miller, Meena Kumari, Michael G. Marmot: A Prospective Study of Change in Sleep Duration: Associations with Mortality in the Whitehall II Cohort.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2276139/pdf/aasm.30.12.1659.pdf
So Dave Asprey is probably one of the few percent for whom this is correct.
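As a rough illustration of where that “few percent” lands (reading the 5-9 hour range as roughly plus or minus two standard deviations around a 7-hour mean; those parameters are my assumption, not the study’s):

```python
# Fraction of adults whose sleep need is 5 hours or less, if nightly sleep
# need is normal with mean 7 h and SD 1 h (assumed parameters).
from statistics import NormalDist

sleep_need = NormalDist(mu=7.0, sigma=1.0)
print(f"{sleep_need.cdf(5.0):.1%}")  # about 2.3%
```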
Some improvements (or changes) may be possible for most of us, though. You can get along with less sleep if you sleep at your optimal time (which differs depending on your genes, esp. the Period 3 gene) and if you fall asleep quickly.
Polyphasic sleep may significantly reduce your sleep total, but nobody seems to be able to say what the health effects are. It might be that it risks your long-term health.
Another benefit for me is reduced mistakes in picking items from the list.
Some people don’t use online shopping because they worry pickers may make errors. My experience is that they do, but at a much lower rate than I do when I go myself. I frequently miss minor items off my list on the first circuit through the shop, and don’t go back because it’d take too long to find them. I am also influenced by in-store advertising, product arrangements, “special” offers and tiredness into purchasing items that I would rather not. It’s much easier to whip out a calculator to work out whether an offer really is better when you’re sat calmly at your laptop than when you’re exhausted towards the end of a long shopping trip.
You’d expect paid pickers to be better at it—they do it all their working hours, I only do it once or twice a month. Also, all the services I’ve used (in the UK) allow you to reject any mistaken items at your door for a full refund—which you can’t do for your own mistakes. The errors pickers make are different to the ones I would, which makes them more salient—but they are no more inconvenient in impact on average.
Alternative: buy a freezer and buy your food in bulk.
My family does this and it’s not such a good idea. Old forgotten food will accumulate at the bottom and you’ll have less usable space at the top. Chucking out the old food is a) a trivial inconvenience and b) guilt-inducing.
Unless it’s one of those freezers with sliding trays.
I have one of those. I thought chest models were a thing of the past.
They are standard in the US. It’s like washers: top-loaders dominate in the US and front-loaders dominate in Europe.
I disagree with this. Having lived in the US my entire life (specifically MA and VA), I’ve been in very few homes that had chest freezers, and as far as I recall, none that only had chest freezers (as opposed to extra storage beyond a combination refrigerator/freezer).
I’m not willing to pay to resolve this difference of perception, but if one wanted to do so, the information is probably available here.
I am not sure we disagree. I’m not saying that people are using chest freezers instead of normal refrigerators. I’m saying that if a family buys a separate freezer in addition to a regular fridge, in the US that separate freezer is likely to be a chest freezer.
Here on the West Coast I’ve seen both standing and chest models, although combination refrigerator/freezers are far more common than either. I associate the chest style with hunters and older people, but that likely reflects my upbringing; I wouldn’t hazard a guess as to which is more common overall.
Assuming you are largely indifferent between fresh and frozen food (a data point: I’m not).
I find this a false dichotomy. Care to muster a rebuke?
Empiricism! :-)
Most of the food that I eat doesn’t freeze or doesn’t freeze well (think fruits and vegetables). Frozen meat is OK for a stew but not at all OK for steaks.
I find—based on my personal experience—the texture, aromas, etc. of fresh food to be quite superior to those of frozen food.
Ah, it’s funny how easily I forget food isn’t just about fueling your cells.
I was expecting some sort of a nutrition based argument.
I would point out that it’s unwise to ignore one of the major sources of pleasure in this world :-)
Must… resist… mentioning a particular stereotype about northern Europe.
I hear that if you stir-fry vegetables, then frozen is a better option. (I eat most of the vegetables I eat raw or dehydrated, neither of which seem to do well if you freeze them first.)
I think it depends on whether you can get your heat high enough.
The point of stir-frying frozen veggies is to brown the outside while not overcooking the inside. Normally this is done by cooking non-frozen veggies at very high heat but a regular house stove can’t do it properly—so a workaround is to use frozen.
How does freeze-them-yourself compare to buying vegetables which are already frozen?
The good kind of already frozen vegetables are much tastier, have better texture and have kept more of their nutrients. That is because an ordinary freezer is not nearly quick enough to preserve most vegetables.
Industrially-frozen food is frozen much faster which is good. A house freezer is not powerful (or cold) enough to freeze food sufficiently fast.
I hear that buying them already frozen is cheaper, more sanitary, and less work, but I haven’t looked into it myself.
re: steaks, that’s just not accurate. Frozen steaks are great! I say this as someone who filled his freezer with a quarter of a cow.
Maybe I just don’t know how to deal with frozen steaks, but for me fresh-meat steaks are much, much juicier.
For those in the community living in the south Bay Area: https://www.google.com/shopping/express/
Regarding food in particular, I’m still wishing Romeo Stevens would commercialize his tasty and nutritious soylent alternative so I could buy it the same way I buy juice from the grocery store.
New work suggests that life could have arisen and survived a mere 15 million years after the Big Bang, when the microwave background radiation levels would have provided sufficient energy to keep almost all planets warm. Summary here, and actual article here. This is still very preliminary, but the possibility at some level is extremely frightening. It adds billions of years of time for intelligent life to have arisen that we don’t see, and if anything suggests that the Great Filter is even more extreme than we thought.
Now that is scary, although there are a few complications. Rocky bodies were probably extremely rare during that time since the metal enrichment of the Universe was extremely low. You can’t build life out of just hydrogen and helium.
Is that a relevant number?
Doesn’t the relevant number of opportunities for life to appear have units of mass-time?
Isn’t the question not how early was some Goldilocks zone, but how much mass was in a Goldilocks zone for how long? This says that the whole universe was a Goldilocks zone for just a few million years. The whole universe is big, but a few million years is small. And how much of the universe was metallic? The paper emphasizes that some of it was, but isn’t this a quantitative question?
I agree that a few million years is small, and that the low metal content would be a serious issue (which in addition to being a problem for life forming would also make planets rare, as pointed out by bramflakes in their reply). However, the real concern as I see it is that if everything was like this for a few million years, then if life did arise (and you have a whole universe for it to arise in), it seems highly plausible that some forms of life would have then adapted to the cooler environment as the cooldown occurred. This makes panspermia more plausible and thus makes life in general more likely. Additionally, it gives life more of a chance to get lucky if it managed to get into one of the surviving safe zones (e.g. something like the Mars-Earth biotransfer hypothesis).
I think you may be correct that this isn’t a complete run-around-and-panic level update, but it is still disturbing. My initial estimate of how bad this could be was likely overblown.
I’m nervous about the idea that life might adapt to conditions in which it cannot originate. Unless you mean spores, but they have to wait for the world to warm up.
As for panspermia, we have a few billion years of modern conditions before the Earth, which is itself already a problem. I think the natural comparison is the size of that Goldilocks zone to the very early one. But I don’t know which is bigger.
Here are three environments. Which is better for radiation of spores?
(1) a few million years where every planet is wet
(2) many billion years, all planets cold
(3) a few billion years, a few good planets.
The first sounds just too short for anything to get anywhere, but the universe was also much smaller then. If one source of life produces enough spores to hit everything, then greater time depth is better, but if they need to reproduce along the way, the modern era seems best.
Why? This happened on Earth: it is pretty likely, for example, that life couldn’t originate in an environment like the Sahara desert, but life can adapt to it and survive there.
I do agree that spores are one of the more plausible scenarios. I don’t know enough to really answer the question, and I’m not sure that anyone does, but your intuition sounds plausible.
There’s barely any life in the Sahara. It looks a lot like spores to me. I want a measure of life that includes speed: some kind of energy use, or maybe cell divisions. I expect the probability of life developing in a place to be proportional to the amount of life there after it arrives. Maybe that’s silly; there certainly are exponential effects of molecules arriving at the same place at the same time that aren’t relevant to the continuation of life. But if you can rule out this claim, I think your model of the origin of life is too detailed.
I’m not sure what you mean by this.
Do you mean something like the idea that if an environment is too harsh even if life can survive the chance that it will evolve into anything beyond a simple organism is low?
We should have the data now to take a whack at the metallicity side of that question, if only by figuring out how many Population 2 stars show up in the various extrasolar planet surveys in proportion with Pop 1. Don’t think I’ve ever seen a rigorous approach to this, but I’d be surprised if someone hasn’t done it.
One sticking point is that the metallicity data would be skewed in various ways (small stars live longer and therefore are more likely to be Pop 2), but that shouldn’t be a showstopper—the issues are fairly well understood.
The paper mentions a model. Maybe the calculation is even done in one of the references. The model does not sound related to the observations you mention.
I don’t think this is frightening. If you thought life couldn’t have arisen more than 3.6 billion years ago but then discover that it could have arisen 13.8 billion years ago, you should be at most 4 times as scared.
The number of habitable planets in the galaxy over the number of inhabited planets is a scary number.
The time span of earth civilization over the time span of earth life is a scary number.
4 is not a scary number.
If it were just a date, then, yes, a factor of 4 is lost in the noise. But switching to panspermia changes the calculation. Try Overcoming Bias [Added: maybe this is only a change under Robin Hanson’s hard steps model.]
It changes my epistemic position by a helluva lot more than a factor of 4. If an interstellar civilization arose somewhere in the universe that is now visible, somewhere in a uniform distribution over the last 3.6 billion years, there’s much smaller chance we’d currently (or ever) be within their light cone than if they’d developed 13.8 billion years ago.
It’s potentially scary not because of the time difference, but because of the quantity of habitable planets. It’s understood that current conditions in the Universe make it so that only relatively few planets are in the habitable zone. But if the Universe was warm, then almost all planets would be in the habitable zone, making the likelihood of life that much higher.
As I said in my reply to JoshuaZ though, the complication is that rocky planets were probably much rarer than they are now.
It’s the scariest number.
There weren’t any planets 15 million years after the Big Bang. The first stars formed 100 million years after the Big Bang, and you need another few million on top of that for the planets to form and cool down.
It seems to take a lot more than 15 million years to get from “life” to “intelligent life”. According to the article this period would only have lasted for a million years, so at most we would probably get a lot of monocellular life arising and then dying during the cooloff.
1 - why should it be surprising that no intelligent life arose from a set of places that were likely habitable for only 5 million years (if they existed at all, which is doubtful)?
2 - I raise the possibility of outcomes for intelligent life that are not destruction or expansion through the universe.
Edit: Gah, that’s what I get for leaving this window open while about 8 other people commented
See the conversation with Doug upthread.
Does it add billions of years? That’s not saying that life could have arisen and survived since 15 million years after the Big Bang.
The paper implies that it only adds millions of years, not billions.
Once the CMB cools down with the expansion of the Universe, the Goldilocks conditions disappear. The CMB temperature scales with (1+z), which in the matter-dominated era falls roughly as t^(-2/3), so the 300K at 15 million years drops below freezing within just a couple of million years, and is down to about 190K by 30 million years.
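A quick sketch of that cooldown (the anchor point of roughly 300 K at roughly 15 million years is from the paper’s summary; matter domination, and hence the 2/3 exponent, is my simplifying assumption):

```python
# CMB temperature vs. cosmic time in the matter-dominated era:
# T scales as (1+z) and t as (1+z)^(-3/2), so T ~ t^(-2/3).
T0_K, T0_MYR = 300.0, 15.0  # anchor: ~300 K at ~15 Myr (from the paper)

def cmb_temp_k(t_myr: float) -> float:
    return T0_K * (t_myr / T0_MYR) ** (-2.0 / 3.0)

for t in (15, 17, 20, 30):
    print(f"t = {t:>2} Myr: T ~ {cmb_temp_k(t):.0f} K")
# ~300 K at 15 Myr, ~276 K at 17 Myr, ~248 K at 20 Myr, ~189 K at 30 Myr:
# water stays liquid (above 273 K) for only a few million years.
```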
I decided I’d share the list of questions I try to ask myself every morning and evening. I usually spend about thirty seconds on each question, just thinking about them, though I sometimes write my answers down if I have a particularly good insight. I find they keep me pretty well-calibrated to my best self. Some are idiosyncratic, but hopefully these will be generally applicable.
A. Today, this week, this month:
What am I excited about?
What goals do I have?
What questions do I want to answer?
What specific ways do I want to be better?
B. Yesterday, last week, last month:
What did I accomplish that I am proud of?
In what instances did I behave in a way I am proud of?
What did I do wrong? How will I do better?
What do I want to remember? What adventures did I have?
C. Generally: If I’m not doing exactly what I want to be doing, why?
How long have you been doing this, and have you noticed any effects?
For about a month and a half, though I forget about 25% of the time. I haven’t noticed any strong effects, though I feel as if I approach the day-to-day more conscientiously and often get more out of my time.
For a term in university I followed a similar method. Every day I would post ‘Today’s Greatest Achievement:’ in the relevant social media of the time. There was a noticeable improvement in happiness and extra-curricular productivity as I more actively sought out novel experiences, active community roles, and academic side projects. The daily reminder led to a far more conscientious use of my time.
The combination of being reminded that I had spent all weekend playing video games, and of having broadcast to my entire social circle that my greatest achievement in the past 48 hours was in a mindless video game, led to immediate behavior changes.
That’s the hardest of them all, still searching for answers.
What does it mean for “you” to not be doing exactly what you “want”? Do you downplay or ignore your not-conscious thought processes?
Are there any translation efforts in academia? It bothers me that there may be huge corpuses of knowledge that are inaccessible to most scientists or researchers simply because they don’t speak, say, Spanish, Mandarin, or Hindi. The current solution to this problem seems to be ‘everyone learn English’, which seems to do ok in the hard sciences. But I fear there may be a huge missed opportunity in the social sciences, especially because Americans are WEIRD and not necessarily psychologically or behaviorally representative of the world population. (Link is to an article, link to the cited paper here: pdf)
The plural of “corpus” is “corpora”. I don’t say this to be pedantic, but because the word is quite lovely, and deserves to be used more.
If a hypothetical bothers you, maybe you should hold off proposing solutions and instead investigate whether it is a real problem.
I’m not sure losing the non-English literature is a big problem. A lot of foreign research is really bad. A little demonstration from 5 days ago: I criticized a Chinese study on moxibustion https://plus.google.com/103530621949492999968/posts/TisYM64ckLM
This was translated into / written in English and published in a peer-reviewed journal (Neural Regeneration Research). And it’s complete crap.
Of course there is very bad research published by the West on alternative medicine too, but as the links I provide show, Chinese research is systematically and generally of very low quality. If China cannot produce good research, what can we expect of other countries?
The language that I think most plausibly contains a disconnected scientific literature is Japanese.
Some time ago someone linked a paper indicating that there are benefits to the fragmentation of academia by language barriers: fewer people are exposed to a single dominant view, which frees them to come up with new ideas. One cited example was anthropology, which had a Russian and an Anglosphere tradition.
I’d assume there aren’t any major translation efforts, as being a translator is by far less rewarding than publishing something of your own.
Publishing your own scientific paper brings you more rewards, but translating another person’s article requires less time and less scientific skill (just enough to understand the vocabulary and follow the arguments).
If someone paid me for doing it, I would probably love to have a job translating scientific articles into my language. It would be much easier for me to translate a dozen articles than to create one. And if I only translated articles that passed some filter, for example those published in peer-reviewed journals, I could probably translate the output of twenty or fifty scientists.
It seems like there could definitely be money in ‘international’ journals for different fields, which would aggregate credible foreign papers and translate them. Interesting that they don’t seem to exist.
How effective would it be to use human expertise to translate just the contents pages of journals, with links to Google Translate for the bodies of the papers? Or perhaps use humans to also translate the abstracts?
Does anything like this exist already?
Idea that popped into my head: it might be straightforward to make a frontend for the arXiv that adds a “Translate this into” drop-down list to every paper’s summary page. (Using the list could redirect the user to Google Translate, with the URL for the PDF automatically fed into the translator.) As far as I know no one has done this but I could be wrong.
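The glue code for such a frontend would be tiny. A minimal sketch (the arXiv ID below is a placeholder, and I’m assuming Google Translate’s URL-translation endpoint will accept a link to the PDF):

    from urllib.parse import quote

    def translate_link(arxiv_id, target_lang):
        """Build a Google Translate URL for an arXiv paper's PDF (sketch)."""
        pdf_url = f"http://arxiv.org/pdf/{arxiv_id}"  # placeholder ID below
        return ("https://translate.google.com/translate"
                f"?sl=auto&tl={target_lang}&u={quote(pdf_url, safe='')}")

    # One such link per entry in the drop-down list:
    print(translate_link("1234.5678", "es"))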
This chain is so interesting. As a grad student I could translate some papers and make some decent money in such a hypothetical regime.
The Body Electric mentioned that the Soviets were ahead of the West in studying electrical fields in biology because (not sure of the date—sometime before the seventies) electricity sounded too much like elan vital to the Westerners.
Which Body Electric? I don’t see it in Becker and Selden, but maybe I don’t know what to look for.
Possibly this Body Electric. It’s at least about the right subject, but I’d have sworn I’d read it much earlier than 1998, and my copy (buried somewhere) probably had a purple cover.
The cover on the hardcover looks more familiar, and at least it’s from 1985.
Wikipedia makes it sound like the right book.
Where were you searching? You had the authors right.
I looked at that book on google books. I searched for “Soviet,” “elan,” etc, and did not see the story you mentioned.
Added: Amazon says that the book uses these words a lot more than google says, but I didn’t look at many hits.
That’s interesting. I read your comment out of context and didn’t know you were making a comment about the language. I agreed because I don’t like thinking about electricity in animals (or more strongly, any coordinated magnetic phenomena, etc.) because of this association. There is a similarity in the sounds (“electrical” and “elan vital”), but the concepts are also close in concept space … perhaps the Soviets lacked this ugh field altogether.
I was using “sounded like” metaphorically. I assume they knew the difference in meaning, but were affected by the similarity of concepts and worry about their reputations.
I guessed that the Soviets were more willing to do the research because Marxism was kind of like weird science, so they were willing to look into weird science in general. However, this is just a guess. A more general hypothesis is that new institutions are more willing to try new things.
If you know English and Mandarin, you might make an academic career out of writing meta-analyses of topics discussed in Mandarin research papers.
I am not professionally involved in these fields, but I have read that among those who are, there is a very jaundiced opinion of Chinese and Indian scientific research. Apparently, completely ignoring their publications is a good heuristic unless at least one of the following holds: a foreign co-author, an author who did their doctorate in the first world, or an institution or author with a significant reputation. Living in China and having some minimal experience with the Chinese attitude to plagiarism/copying/research makes this seem plausible. I doubt anyone’s missing anything by ignoring scientific articles published in Mandarin. I make no such claims for the social sciences.
I’m expecting China to have an increasing role in global affairs over the next century. With that in mind, there are a couple of things I’m curious about:
Does anyone have an idea of how prevalent existential risk type ideas are in China?
Has anyone tried to spread LW memes there?
Are the LW meetups in Shanghai, etc. mostly ex-pats or also locals?
Thanks!
Gregory Cochran has written something on aging. I’ll post some selected parts, but you should read the whole thing, which is pretty short.
...
...
Nothing entirely new to me here, but it’s always good to see another scientist come out in favor of aging research. Also, note that the Latin text on the top of Cochran’s website is omnes vulnerant, ultima necat, which means approximately, “Each second wounds, the last kills.”
Life is a concept we invented
Discussion of why it plausibly does not make sense to look for a firm dividing line between life and non-life.
Just because a boundary is fuzzy doesn’t mean it’s meaningless.
It just doesn’t matter very much—certainly not enough to keep wrangling over the exact definition of the boundary. As long as we understand what we mean by crystal, bacterium, RNA, etc., why should we care about the fuzzy dividing line? Are ribozymes going to become more or less precious to us according only to whether we count them as living or not, given that nothing changes about their actual manifested qualities? Should they?
-- Karl Popper, from The Poverty of Historicism
Why did you post this quote? It seems like a good example of diseased thinking, but I’m not sure if that was your point.
Are you saying you think the quote exhibits diseased thinking or just that it was about diseased thinking?
To me, the quote seemed to clearly make the same point that Anatoly’s first paragraph did, so it seems straightforward why he would include it.
The quote says that biologists don’t deal with questions such as “what is life?” because that’s essentialism and that’s Bad. Similarly, physicists certainly don’t study ideal systems like atoms or light. The disease is in the false dichotomy.
Oh, hmm, I thought what he was saying about atoms and light is not that physicists don’t study those things, but that they don’t study some abstract platonic version of light or atom derived from our intuitions, but instead use those words to describe phenomena in the real world and then go on to continue investigating those phenomena on their own terms.
So, for example, “Do radio waves really count as light?” is not a very interesting question from a physics perspective once you grant that both radio waves and visible light are on the same electromagnetic wave spectrum. Or with atoms we could ask, “Are atoms really atoms if they can be broken down into constituent parts?” These would just be questions about human definitions and intuitions rather than about the phenomena themselves. And so it is with the question, “What is life?”
That’s what it seemed like Popper was saying to me. Did you have a different interpretation? Also, I’m not sure I’ve understood your comment—which dichotomy are you saying is a false dichotomy?
Asking whether radio waves really count as light is just arguing a definition. That’s not interesting to anyone who understands the underlying physics.
Notice that the questions he gives for essentialists are actually interesting questions, they’re just imprecisely phrased, e.g. “what is matter?” These questions were asked before we’d decided matter was atoms. They were valid questions and serious scientists treated them. Now these questions are silly because we’ve already solved them and moved on to deeper questions, like “where do these masses come from?” and “how will the universe end?”
When a theorist comes up with a new theory they are usually trying to answer one of these essentialist questions. “What is it about antimatter that makes it so rare?” The theorist comes up with a guess, computes some results, spends a year processing LHC data, and realizes that their theory is wrong. At some point in here they switched from essentialist (considering an ideal model) to nominalist (experimental data), but the whole distinction is unnecessary.
Yes, they most certainly do. QED is an extremely abstract idea, derived from intuition about how the light we interact with on a classical level behaves. This is called the correspondence principle.
String theorists come up with a theory based entirely on mathematical beauty, much like Plato.
I think you’re reading Popper uncharitably, and his view of what physicists do is about the same as yours. He really is arguing against arguing definitions. “What is matter?” is an ambiguous question: it can be understood as asking about a definition, “what do we understand by the word ‘matter’, exactly?”, and it can be understood as asking about the structure, “what are these things that we call matter really made of, how do they behave, what are their properties, etc.?”. The former, to Popper, is an essentialist question; the latter is not.
Your understanding of “essentialist questions” is not that of Popper; he wouldn’t agree with you, I’m sure, that “What is it about antimatter that makes it so rare?” is an essentialist question. “Essentialist” doesn’t mean, in his treatment, “having nothing to do with experimental data” (even though he was very concerned with the value of experimental data and would have disagreed with some of modern theoretical physics in that respect). A claim which turns out to be unfalsifiable is anathema to Popper, but it is not necessarily an “essentialist” claim.
Oh, hmm. I see now that we were interpreting Popper differently, and I may have been wrong.
If Popper did mean to exclude that kind of inquiry, then I agree with you that he was misguided.
In that case, it sounds like you would agree with the rest of Anatoly’s comment, just not the Popper quote. Is that right?
That’s right, more or less.
Gotcha, thanks!
Which disease are you referring to?
“Diseased thinking” here is probably jargon; see Yvain’s 2010 post “Diseased thinking: dissolving questions about disease”.
The definition of life matters because we want to be able to talk about extraterrestrial life as well.
The precise definition of life will not be the thing that will determine our opinion about possible extraterrestrial life when we come across it. It will matter whether that hypothetical life is capable of growth, change, producing offspring, heredity, communication, intelligence, etc. etc. - all of these things will matter a lot. Having a very specific subset of these enshrined as “the definition of life” will not matter. This is what Popper’s quote is all about.
It’s possible that extraterrestrial life will be nothing but a soup of RNA molecules. If we visit a planet while its life is still in the embryonic stages, we need to include that in our discourse of life in general. We need to have a word to represent what we are talking about when we talk about it. That’s the only purpose any definition ever serves. If you want to go down the route of ‘the definition of life is useless’, you might as well just say ‘all definitions are useless’.
My favorite example is challenging people to show that stars (in space) are any less alive than stars (in Hollywood).
What’s the Darwinian evolution involved in stars? (Are you thinking of the hypothesis that universes evolve to create black holes?)
What I meant is that stars are born, they procreate (by spewing out new seeds for further star formation), then grow old. Stars “evolved” to be mostly smaller and longer lived due to higher metallicity. They compete for food and they occasionally consume each other. They sometimes live in packs facilitating further star formation, for a time. Some ancient stars have whole galaxies spinning around them, occasionally feeding on their entourage and growing ever larger.
Don’t traits have to be heritable for evolution to count? I’m not an expert or anything, but I thought I’d know if stars’ descendants had similar properties to their parent stars.
Descendant stars might have proportions of elements related to what previous stars generated as novas. I don’t know whether there’s enough difference in the proportions to matter.
Can you give an example of a property a star might have because having that property made its ancestor stars better at producing descendant stars with that property?
Sorry, I’m not an expert in stellar physics. Possibly metallicity, or maybe something else relevant. My original point was to agree that there is no good definition of “life” which does not include some phenomena we normally don’t think of as living.
See here.
Do stars exhibit teleological behavior?
Why do you ask?
Isn’t teleology fundamental to some conceptions of life?
Feel free to elaborate.
What’s wrong with ‘A self-sustaining (through an external energy source) chemical process characterized by the existence of far-from-equilibrium chemical species and reactions.’?
Suspect you would have a difficult time defining “external energy source” in a way that excludes fire but includes mitochondria.
Which equilibrium? Stars are far from the eventual equilibrium of the heat death, and also not at equilibrium with the surrounding vacuum.
Not clear whether viruses, prions, and crystals are included or excluded.
True; what is meant is a simple external energy source such as radiation or a simple chemical source of energy. It’s true that this is a somewhat fuzzy line though.
I specifically said far-from-equilibrium chemical species and reactions. The chemistry that goes on inside a star is very much in equilibrium conditions.
Viruses are not self-sustaining systems, so they are obviously excluded. You have to consider the system of virus+host (plus any other supporting processes). Same with prions. Crystals are excluded since they do not have any non-equilibrium chemistry.
I do not see how this answers the objection. All you did was add the qualification ‘simple’ to the existing ‘external’. Is this meant to exclude fire, or include it? If the former, how does it do so? Presumably plant matter is a sufficiently “simple” source of energy, since otherwise you would exclude human digestion; plant matter also burns.
Again, which equilibrium? The star is nowhere near equilibrium with its surroundings.
Neither are humans… in a vacuum; but viruses are quite self-sustaining in the presence of a host. You are sneaking in environmental information that wasn’t there in the original “simple” definition.
Look at my reply to kalium. To reiterate, the problem is that people confuse objects with processes. The definition I gave explicitly refers to processes. This answers your final point.
I already conceded that it’s a fuzzy definition. As I said, you are correct that ‘simple’ is a subjective property. However, if you look at the incredibly complex reactions that occur inside human cells (gene expression, ribosomes, ATP production, etc.), then yes, amino acids and sugars are indeed extremely simple in comparison. If you pour some sugars and phosphates and amino acids into a blender you will not get much DNA; not nearly in the quantities in which it is found in cells. This is what is meant by ‘far from equilibrium’. There is much more DNA in cells than you would find if you took the sugars and fatty acids and vitamins and just mixed them together randomly.
I feel like we’re talking past each other here. I explicitly (and not once, but twice in the definition) referred to chemical processes: http://en.wikipedia.org/wiki/Chemical_equilibrium
Ok, chemical equilibrium. This does not seem to me like a natural boundary; why single out this particular equilibrium and energy scale?
I think you’re missing my point, which is that I don’t see how your definition excludes fire as a living thing.
I don’t think it does. A human in vacuum is alive, for a short time. How do you distinguish between “virus in host cell” and “human in supporting environment”?
Because the domain of chemistry is broad enough to contain life as we know it, and also hypothesized forms of life on other planets, without being excessively inclusive.
I tried to answer it. The chemical species that are produced in fire are the result of equilibrium reactions (http://en.wikipedia.org/wiki/Combustion). They are simple chemical species (with more complex species only being produced in small quantities, consistent with equilibrium). In particular, they are nowhere near as complex, relative to the feedstock, as the products of living chemistry are.
They are both part of living processes. The timescale for ‘self-sustaining’ does not need to be forever. It only needs to be for some finite time that is larger than what would be expected of matter rolling down the energy hill towards equilibrium.
In what sense are parasitic bacteria that depend on the host for many important functions self-sustaining while viruses are not?
As I said, you have to consider the system of parasite+host (plus any other supporting processes).
I think a lot of the confusion arises from people confusing objects with processes that unfold over time. You can’t ask if an object is alive by itself; you have to specify the time-dynamics of the system. Statements like ‘a bacterium is alive’ are problematic because a frozen bacterium in a block of ice is definitely not alive. Similarly, a virus that is dormant is most definitely not alive. But that same virus inside a living host cell is participating in a living process i.e. it’s part of a self-sustaining chain of non-equilibrium chemical reactions. This is why I specifically used the words ‘chemical process’.
So this is a definition for “life” only, not “living organism,” and you would say that a parasite, virus, or prion is part of something alive, and that as soon as you remove the parasite from the host it is not alive. How many of its own life functions must a parasite be able to perform once removed from the host in order for it to be considered alive after removal from the host?
Precisely.
As the definition says. It must demonstrate non-equilibrium chemistry and must be self-sustaining. Again, ‘simple forms of energy’ is relative, so I agree that there’s some fuzziness here. However, if you look at the extreme complexity of the chemical processes of life (dna, ribosomes, proteins, etc.) and compare that to what most life consumes (sugars, minerals, etc.) there is no ambiguity. It’s quite clear that there’s a difference.
Are you sure that all life is chemical? There’s a common belief here that a sufficiently good computer simulation of a human being counts as being that person (and presumably, a sufficiently good computer simulation of an animal counts as being an animal, though I don’t think I’ve seen that discussed), and that’s more electrical than chemical, I think.
I have a notion that there could be life based on magnetic fields in stars, though I’m not sure how sound that is.
I guess it depends on your philosophical position on ‘simulations’. If you believe simulations “aren’t the real thing”, then a simulation of chemistry “isn’t actual chemistry”, and thus a simulation of life “isn’t actual life.” Anyways, the definition I gave doesn’t explicitly make any distinction here.
About exotic forms of life, it could be possible. A while ago I had some thoughts about life based on quark-gluon interactions inside a neutron star. Since neutron star matter is incredibly compact and quarks interact on timescales much faster than typical chemistry, you could have beings of human-level complexity existing in a space of less than a cubic micrometer and living out a human-lifespan-equivalent existence in a fraction of a second.
But these types of life are really really speculative at this point. We have no idea that they could exist, and pretty strong reasons for thinking they couldn’t. It doesn’t seem worth it to stretch a definition of life to contain types of life we can’t even fathom yet.
Any good advice on how to become kinder? This can really be classified as two related goals: 1) How can I get more enjoyment out of alleviating others’ suffering and giving others happiness? 2) How can I reliably do 1 without negative emotions getting in my way (e.g. staying calm and making small nudges to persuade people rather than getting angry and trying to change people’s worldview rapidly)?
I’d recommend Nonviolent Communication for this. It contains specific techniques for how to frame interactions that I’ve found useful for creating mutual empathy. How To Win Friends And Influence People is also a good source, although IIRC it’s more focused on what to do than on how to do it. (And of course, if you read the books, you have to actually practice to get good at the techniques.)
Thanks! And out of curiosity, does the first book have much data backing it? The author’s credentials seem respectable so the book would be useful even if it relied on mostly anecdotal evidence, but if it has research backing it up then I would classify it as something I need (rather than ought) to read.
According to wikipedia, there’s a little research and it’s been positive, but it’s not the sort of research I find persuasive. I do have mountains of anecdata from myself and several friends whose opinions I trust more than my own. PM me if you want a pdf of the book.
I would like to offer further anecdotal evidence that NVC techniques are useful for understanding your own and other people’s feelings and feeling empathy toward them.
Thirded. The most helpful part for me was internalising the idea that even annoying/angry/etc outbursts are the result of people trying to get their needs met. It may not be a need I agree with, but it gives me better intuition for what reaction may be most effective.
When it comes to research about paradigms like that, it’s hard to evaluate them. If you look at nonviolent communication and set up your experiment well enough, I think you will definitely find effects.
The real question isn’t whether the framework does something but whether it’s useful. That in turn depends on your goals.
Whether a framework helps you to communicate successfully depends a lot on the cultural background of the people with whom you are interacting.
If you engage in NVC, some people with a strong sense of competition might see you as weak. If you consistently engaged in NVC in your communication on LessWrong, you might be seen as a weird outsider.
You would need an awful lot of studies to be certain about the particular tradeoff in using NVC for a particular real-world situation.
I don’t know of many studies that compare whether Windows is better than Linux or whether Vim is better than Emacs. Communication paradigms are similar: they are complex and difficult to compare.
I find NVC very intuitively compelling, and I have personal anecdotal evidence that it works (though not independent of ESRogs; we go to the same class).
In addition to seconding nonviolent communication, cognitive behavior therapy techniques are pretty good—basically mindfulness exercises and introspection. If you want to change how you respond to certain situations (e.g. times when you get angry, or times when you have an opportunity to do something nice), you can start by practicing awareness of those situations, e.g. by keeping a pencil and piece of paper in your pocket and making a check mark when the situation occurs.
I also want to learn how to be kinder. The sticking point, for me, is better prediction about what makes people feel good.
I was very ill a year ago, and at that time learned a great deal about how comforting it is to be taken care of by someone who is compassionate and knowledgeable about my condition. But for me, unless I’m very familiar with that exact situation, I have trouble anticipating what will make someone feel better.
This is also true in everyday situations. I work on figuring out how to make guests feel better in my home and how to make a host feel better when I’m the guest. (I already know that my naturally overly-analytic, overly-accommodating manner is not most effective.) I observe other people carefully, but it all seems very complex and I consider myself learning and a ‘beginner’—far behind someone who is more natural at this.
In this kind of situation, I usually just ask, outright, “What can I do to help you?” Then I can file away the answer for the next time the same thing happens.
However, this assumes that, like me, you are in a strongly Ask culture. If the people you know are strongly Guess, you might get answers such as “Oh, it’s all right, don’t inconvenience yourself on my account”, in which case the next best thing is probably to ask 1) people around them, or 2) the Internet.
You also need to keep your eyes out for both Ask cues and Guess cues of consent and nonconsent—some people don’t want help, some people don’t want your help, and some people won’t tell you if you’re giving them the wrong help because they don’t want to hurt your feelings. This is the part I get hung up on.
The “keep your eyes out for cues” works the other way around in what we’re calling a “Guess culture” as well.
That is, most natives of such a culture will be providing you with hints about what you can do to help them, while at the same time saying “Oh, it’s all right, don’t inconvenience yourself on my account.” Paying attention to those hints and creating opportunities for them to provide such hints is sometimes useful.
(I frequently observe that “Guess culture” is a very Ask-culture way of describing Hint culture.)
Yes, I would like to improve on all of this. I haven’t found the internet particularly helpful.
And I do find myself in a bewildering ‘guess’ culture. Asking others (though not too close to the particular situation) would probably yield the most information.
What is your reason for wanting to?
I find myself happier when I act more kindly to others. In addition, lowering suffering/increasing happiness are pretty close to terminal values for me.
You say
Yet you said earlier that
Does this mean that you feel that you do enjoy it but not “enough” in some sense and you want to enjoy it even more?
Correct, it is enjoyable but I wish to make it more so. Hence the “more”.
I recommend trying loving-kindness meditation.
Could you elaborate? I’m relatively familiar with and practice mindfulness meditation, but I’ve never heard of loving-kindness meditation.
This here Wikipedia page is a good summary.
It mostly boils down to simply concentrating on feeling nice towards everyone. There is some technical advice on how to turn the vague goal of ‘feeling nice’ to more concrete mental actions (through visualization, repeating specific phrases, focusing on positive qualities of people) and how to structure the practice by having a progression of people towards which you generate warm fuzzy feelings, of increasing level of difficulty (like starting with yourself and eventually moving on to someone you consider an enemy). Most of this can be found in the Wiki article or easily googled.
See here for an introduction.
What are the community norms here about sexism (and related passive-aggressive “jokes” and comments about free speech) at the LW co-working chat? Is LW going for Wheaton’s Law or free speech, and to what extent should I be attempting to make people who engage in such activities feel unwelcome, if I should be at all?
I have hesitated to bring this up because I am aware it’s a mind-killer, but I figured if Facebook can contain a civil discussion about vaccines then LW should be able to talk about this?
There are no official community norms on the topic.
For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it’s acceptable to differentially characterize women negatively as long as we do so in the proper linguistic register (e.g, adopting an academic and objective-sounding tone, avoiding personal characterizations, staying cool and detached).
The people who believe this ought to be unacceptable are either less common or less visible about it. The majority is generally silent on such matters, though will generally join in condemning blatant register-violations.
The usual result is something closer to Wheaton’s Law at the surface level, but closer to “say what you think is true” at the structural level. (Which is not quite free speech, but a close enough cousin in context.) That is, it’s often considered OK to say things, as long as they are properly hedged and constructed, that if said more vulgarly or directly would be condemned for violating Wheaton’s Law, and which in other communities would be condemned for a variety of reasons.
I think there’s a general awareness that this pattern-matches to sexism, though I expect that many folks here consider that to be mistaken pattern-matching (the “I’m not sexist; I can’t help it if you feminists choose to interpret my words and actions that way” stance).
So my guess is that if you attempt to make people who engage in sexism (and related defenses) feel unwelcome you will most likely trigger net-negative reactions unless you’re very careful with your framing.
Does that answer your question?
It does answer my question. Also thanks for suggestion to focus on the behaviour rather than the person. I didn’t even realize I was thinking like that till you two pointed it out.
Yes, and this is best, is it not? I enjoy reading what people have to say, even if their views are directly in contradiction to mine. I’ve changed my views more than once because it was correctly pointed out to me why my views were wrong. http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind
And about being vulgar, it’s just a matter of human psychology. People in general—even on LW—are more receptive to arguments that are phrased politely and intelligently. We’d all like to think that we are immune to this, but we are not.
It’s certainly better than nobody ever getting to express views that contradict anyone else’s views; agreed.
Yes, that’s true.
Disclaimer: this is not meant as a defence of the behaviour in question, since I don’t exactly know what we’re talking about.
LessWrong characterizes outgroups negatively all the time. I cautiously suggest the whole premise of LW characterizes most people negatively, and it’s easier to talk about any outgroup irrationality, in this case women statistically, than look at our own flaws. If we talked about what men are like on average, we might not have many flattering things to say either.
Should negative characterizations of people be avoided in general, irrespective of how accurately we think they describe the average of the groups in question?
If you see characterizations that are wrong, you should obviously confront them.
I agree that there are also other groups of people who are differentially negatively characterized; I restricted myself to discussions of women because the original question was about sexism.
I would cautiously agree. There’s a reason I used the word “differentially.”
Personally, I’m very cautious about characterizing groups by their averages, as I find I’m not very good at avoiding the temptation to then characterize individuals in that group by the group’s average, which is particularly problematic since I can assign each individual to a vast number of groups and then end up characterizing that individual differently based on the group I select, even though I haven’t actually gathered any new evidence. I find it’s a failure mode my mind is prone to, so I watch out for it.
If your mind isn’t as prone to that failure mode as mine, your mileage will of course vary.
I don’t understand how not being differential is supposed to work though. Different groups are irrational in different ways.
I think the failure mode you mention is common enough that we should be concerned about it. I’m just not sure about the right way to handle it.
Suppose it’s actually true in the world that all people are irrational, that blue-eyed people (BEPs) are irrational in a blue way, green-eyed-people (GEPs) are irrational in a green way, and green and blue irrationality can be clearly and meaningfully distinguished from one another.
Now consider two groups, G1 and G2. G1 often discusses both blue and green irrationality. G2 often discusses blue irrationality and rarely discuss green irrationality. The groups are otherwise indistinguishable.
How would you talk about the difference between G1 and G2? (Or would you talk about it at all?)
For my own part, I’m comfortable saying that G2 differentially negatively characterizes BEPs more than G1 does. That said, I acknowledge that one could certainly argue that in fact G1 differentially negatively characterizes BEPs just as much as G2 does, because it discusses blue and green irrationality differently, so if you have a better suggestion for how to talk about it I’m listening.
What if G1=BEP and G2=GEP, and discussing outgroup irrationality is much easier than discussing ingroup irrationality? Now suppose G1 is significantly larger than G2, and perhaps even that discussing G1 is more relevant to G2 winning* and discussing G2 is more relevant to G1 winning. What is the situation going to look like for a member of G2 who’s visiting G1? How about if you mix the groups a bit? Is it wrong?
You connotationally implied the behaviour you described to be wrong. Can you denotationally do that?
*rationality is winning
I expect a typical G2/GEP visiting a G1/BEP community in the scenario you describe, listening to the BEPs differentially characterizing GEPs as irrational in negative-value-laden ways, will feel excluded and unwelcome and quite possibly end up considering the BEP majority a threat to their ongoing wellbeing.
I assume you mean, what if G1 is mostly BEPs but has some GEPs as well? I expect most of G1′s GEP minority to react like the G2/GEP visitors above, though it depends on how self-selecting they are. I also expect them to develop a more accurate understanding of the real differences between BEPs and GEPs than they obtained from a simple visit. I also expect some of G1′s BEP majority to develop a similarly more-accurate understanding.
I would prefer a scenario that causes less exclusion and hostility than the above.
How about you?
I’m not sure.
As I said, I’m cautious about characterizing groups by their averages, because it leads me to characterize individuals differently based on the groups I tend to think of them as part of, rather than based on actual evidence, which often leads me to false conclusions.
I suspect this is true of most people, so I endorse others being cautious about it as well.
I definitely want less exclusion and hostility, but I’m not sure the above scenario causes them for all values like GEP and BEP, nor for all kinds of examples of their irrationality. Perhaps we’re assuming different values for the moving parts in the scenario, although we’re pretending to be objective.
Many articles here are based on real life examples and this makes them more interesting. This often means picking an outgroup and demonstrating how they’re irrational. To make things personal, I’d say health care has gotten its fair share, especially in the OB days. I never thought the problem was that my ingroup was disproportionately targeted, but I was more concerned about strawmen and the fact I couldn’t do much to correct them.
Would it have been better if I had not seen those articles? I don’t think so, since they contained important information about the authors’ biases. They also told me that perhaps characterizations of other groups here are relatively inaccurate too. Secret opinions cannot be intentionally changed. Had their opinions been muted, I would have received information only through inexplicable downvotes when talking about certain topics.
I’m not exactly sure what reference class you’re referring to, but I certainly agree that there exist groups in the above scenario for whom negligible amounts of exclusion and hostility are being created.
I don’t know what you intend for this sentence to mean.
I share your preferences among the choices you lay out here.
You understood me correctly.
I meant it’s tempting to replace “eye colour” with something less neutral and “irrationality” with something more or less reliably insulting.
I bet you have other choices in mind.
Specific ones? Not especially. But it’s hard to know how to respond when someone concludes that C1 is superior to C2 and I agree, but I have no idea what makes the set (C1, C2) interesting compared to (C3, C4, .., Cn).
I mean, I suppose I could have asked you why you chose those two options to discuss, but to be honest, this whole thread has started to feel like I’m trying to nail Jell-O to a tree, and I don’t feel like doing the additional work to do it effectively.
So I settled for agreeing with the claim, which I do in fact agree with.
I find that difficult to believe.
I suggest this is because all we had was Jell-O and nails in the first place, but of course there are also explanations (E1, E2, .., En) you might find more plausible :)
Perhaps any such characterizations should be explicitly hedged against this failure mode, instead of being tabooed. I also think people should confront ambiguous statements, instead of just assuming they’re malicious.
Ideally, I’d want the people to feel that the behavior is unwelcome rather than that they themselves are unwelcome, but people are apt to have their preferred behaviors entangled with their sense of self, so the ideal might not be feasible. Still, it’s probably worth giving a little thought to discouraging behaviors rather than getting rid of people.
Depends on how you define sexism. Some people consider admitting that men and women are different to be sexism, never mind acting on that belief :-/
TheOtherDave’s answer is basically correct. Crass and condescending people don’t get far, but it’s possible to have a discussion of the issues which cost Larry Summers so dearly.
Since this comment is framed in part as endorsing mine, I should probably say explicitly that while I agree denotationally with every piece of this comment taken individually, I don’t endorse the comment as a whole connotationally.
:-D
I connotationally interpret your question as: “what are the community norms about bad things?”
You’re not giving us enough information so that we could know what you’re talking about, and you’re asking our blind permission to condemn behaviour you disagree with.
Fair critique. Despite the lack of clarity on my part the comments have more than satisfactorily answered the question about community norms here. I suppose the responders can thank g-factor for that :)
Well played.
I don’t have an answer here, just a note that this question actually contains two questions, and it would be good to answer both of them together. It would also be a good example of using rationalist taboo.
A: What are the community norms for defining sexism?
B: What are the community norms for dealing with sexism (as defined above)?
Answering B without answering A can later easily lead to motivated discussions about sexism, where people would be saying: “I think that X is [not] an example of sexism” when what they really wanted to say would be: “I think that it is [not] appropriate to use the community norm B for X”.
(I haven’t seen the LW co-working chat)
If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you’re free to call them out on it and shame them for it if you want.
I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of “what you said is sexist” as opposed to “you are sexist”.
To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.
A norm of “don’t be a dick” isn’t inherently a violation of free speech. The question is, does LW co-working chat have a norm of not being a dick? Would being a dick likely lead to unfavorable reactions, or would objecting to dickish behavior be frowned on instead?
The problem with having “don’t be a dick” as a norm is that people have very different ideas about what constitutes “being a dick”.
Don’t be a dick is code for “Act according to our unspoken social codes”
I’d like to see some evidence that such stuff is going on before pointing fingers and making rules that could possibly alienate a large fraction of people.
I’ve been attending the co-working chat for about a week, on and off (I take the handle of ‘fist’), and so far everyone seems friendly and more than willing to accommodate the girls in the chat. Have you personally encountered any problems?
I did encounter this problem (once) and I was experiencing resistance to going back even though I had a lot of success with the chat. I figured having a game plan for next time would be my solution.
Friendship is Optimal just received a quite positive review from One Man’s Pony Ramblings.
So is this person a big actor in the pony fanfic culture?
His site’s not going to drive a giant surge of views, but he’s highly respected among fanfic writers as a thoughtful critic.
The quality of intelligence journalism
According to the survey of experts Steve Sailer outperforms everyone else.
What we actually know about mirror neurons.
Wow. I did not expect my background understanding of what is known about mirror neurons to have been so heavily influenced by hype.
Identical twins aren’t perfectly identical
That there are differences between identical twins is known, but the article goes into detail about the types of difference, including effects which are in play before birth.
Wirth’s Law:
Is Wirth’s Law still in effect? Most of the examples I’ve read about are several years old.
ETA: I find it interesting that Wirth’s Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I’m no expert, though; I just wonder what changed.
It was my impression that Wirth’s law was mostly intended to be tongue-in-cheek, and refer to how programs with user interfaces are getting bloated (which may be true depending on your point of view).
In terms of software that actually needs speed (numerical simulations, science and tech software, games, etc.) the reverse has always been true. New algorithms are usually faster than old ones. Case in point is the trusty old BLAS library which is the workhorse of scientific computing. Modern BLAS implementations are extremely super-optimized, far more optimized than older implementations (for current computing hardware, of course).
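To make that concrete, here is a minimal timing sketch (my own illustration; it assumes numpy, which dispatches matrix products to whatever BLAS it was built against, and the exact numbers will vary by machine):

    import time
    import numpy as np

    n = 200
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    # Naive triple-loop matrix multiply in pure Python.
    t0 = time.perf_counter()
    C = [[sum(A[i, k] * B[k, j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    t_naive = time.perf_counter() - t0

    # The same product dispatched to an optimized BLAS routine (dgemm).
    t0 = time.perf_counter()
    D = A.dot(B)
    t_blas = time.perf_counter() - t0

    print(f"naive: {t_naive:.2f}s, BLAS: {t_blas:.5f}s, "
          f"speedup ~{t_naive / t_blas:.0f}x")

On typical hardware the BLAS call wins by several orders of magnitude, which is the point: the performance-critical path has been getting faster, not slower.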
It wasn’t even true in 1995, I don’t think. The first way of evaluating it that comes to mind is the startup times of “equivalent” programs, like MS Windows, Macintosh OS, various Corels, etc.
Startup times for desktop operating systems seem to have trended up, then down, between the ’80s and today; with the worst performance being in the late ’90s to 2000 or so when rebooting on any of the major systems could be a several-minutes affair. Today, typical boot times for Mac, Windows, or GNU/Linux systems can be in a handful of seconds if no boot-time repairs (that’s “fsck” to us Unix nerds) are required.
I know that a few years back, there was a big effort in the Linux space to improve startup times, in particular by switching from serial startup routines (with only one subsystem starting at once) to parallel ones where multiple independent subsystems could be starting at the same time. I expect the same was true on the other major systems as well.
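As a toy illustration of why that helps (a sketch of the idea, not how any real init system is implemented): when subsystems are independent, total startup time drops from the sum of the individual times to roughly the slowest one.

    import threading
    import time

    def start(name, seconds):
        """Stand-in for bringing up one subsystem."""
        time.sleep(seconds)
        print(f"{name} up")

    services = [("network", 0.3), ("sound", 0.2), ("display", 0.4)]

    # Serial startup: total time is the sum (~0.9s here).
    t0 = time.perf_counter()
    for name, s in services:
        start(name, s)
    print(f"serial: {time.perf_counter() - t0:.2f}s")

    # Parallel startup: independent subsystems come up concurrently,
    # so the total is roughly the slowest one (~0.4s here).
    t0 = time.perf_counter()
    threads = [threading.Thread(target=start, args=svc) for svc in services]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    print(f"parallel: {time.perf_counter() - t0:.2f}s")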
My experience is that boot time was worst in Windows Vista (released 2007) and improved a great deal in Windows 7 and 8. MS Office was probably at its worst in bloatiness in the 2007 edition as well.
It would be interesting to plot the time sequence of major chip upgrades from intel on the same page as the time sequence of major upgrades of MS Word and/or MS Excel. My vague sense is the mid/early 90s had Word releases that I avoided for a year or two until faster machines came along that made them more usable from my point of view. But it seems the rate of new Word releases has come way down compared to the rate of new chip releases. That is, perhaps hardware is creeping up faster than features are in the current epoch?
This seems to be true for video game consoles. Possibly because good graphics make better ads than short loading times.
I think both software and hardware got further out on the learning curve, which means their real rates of innovative development have both slowed down, which means the performance of software has sped up.
I don’t get how I get to the last part of that sentence from the first part either, but it almost makes sense.
I mean, this formulation is wrong (software isn’t getting slower), except for the tongue-in-cheek original interpretation, I guess. On the other hand, software is getting faster at a slower rate than hardware is, and that is still an important observation.
Applying probability to fingerprint matches
Finding food in foreign grocery stores, or finding out that reality has fewer joints than you might think.
From the comments:
Making sense of unfamiliar legal systems
I have a strong desire to practice speaking in Lojban, and I imagine that this is the second-best place to ask. Any takers?
.i’enai
There are a couple of commercially available home EEG sets now; has anyone tried them? Are they useful tools for self-monitoring mental states?
[Reposted from the last thread because I think I was too late to be seen much]
Researching EEG biofeedback has been in my “someday maybe” folder of GTD for a while now.
The book Getting Started with Neurofeedback has a chapter on purchasing an EEG set.
I think the studies at the beginning of the book provide pretty compelling evidence that it’s at least worth looking into more.
“Just five years after Kamiya’s discovery, Barry Sterman published his landmark experiment (Wyrwicka & Sterman, 1968). Cats were trained to increase sensorimotor rhythm (SMR) of 12–15 Hz. This frequency bandwidth usually increases when motor activity decreases. Thus, the cats were rewarded each time that SMR increased, which likely accompanied a decrease in physical movements. Unrelated to his study, NASA requested that Sterman study the effects of human exposure to hydrazine (rocket fuel) and its relationship to seizure disorder. Sterman started his research with 50 cats. Ten out of the 50 had been trained to elevate SMR. All 50 were injected with hydrazine. Much to Sterman’s surprise, the 10 specially trained cats were seizure resistant. The other 40 developed seizures 1 hour after being injected (Budzynski, 1999, p. 72; Robbins, 2000, pp. 41–42). Sterman had serendipitously discovered a medical application for this new technology.”
I’ve been taking notes on the book in workflowy should that be of interest.
A monkey teaching a human how to crush leaves
Mirror neurons? Why does the monkey care about whether a human can crush leaves?
Because enjoying teaching useful stuff to people you get along with is a trait that got selected for?
Why does a human care whether a monkey cares whether a human can crush leaves? For things like us primates, sometimes these things are their own reward.
It might simply be an interesting activity to teach a human how to crush leaves.
Do the monkeys ever crush leaves like that for themselves? Otherwise I think it is more likely that the monkey is giving him a gift, hoping that he will reciprocate by giving the monkey a treat, or maybe just petting it. The leaves just happen to be what the monkey has most easily available at the time.
The monkey was folding the man’s fingers, not just handing him leaves.
If the monkey is giving a gift to incur a sense of obligation, it might be even more complex behavior than teaching.
Yes. What I was thinking was that people had previously given the monkeys treats by putting something in the monkey’s hand and closing its fingers, so that the monkey is more or less imitating something that it wants the human to do.
It is not that teaching is too complex for a monkey, it is that I don’t see what exactly it’s teaching, but I feel that I recognize what the monkey is doing as the “you keep this” gesture.
I’ve heard it said that, when cats present a kill to their owners, it’s a form of trying to teach the owner to hunt. I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.
If so, I’d predict this happens more often in more social animals. So yes to lions and monkeys, no to bears and hamsters. This would suggest we’d see similar behavior from dogs, though, and I can’t think of examples of dogs trying to teach humans any skills. This is particularly damning for my hypothesis, since dogs are known for their cooperation with humans.
Sheep-herding rabbit—included because it’s an amazing video and who could resist, and because it’s at least an example of learning from dogs.
As for your generalization, maybe the important thing is to look at species which have to teach their young. I’m not sure how much dogs teach puppies.
Dog teaches puppy to use stairs
Your rabbit link is broken.
Fixed now.
It’s hard for me to imagine how this wouldn’t be the case. It is a highly non-trivial sensory/processing problem for a cat to look at another cat and think “This creature is a cat, just like I am a cat, therefore we should take care of each other” but, at the same time, to look at a human and think “This creature is a human, it is not like me, therefore it does not share my interests.”
This problem is even more acute for cats than for dogs, because cats don’t really form tight-knit packs, and they have less available processing power.
I’d like to see some more research on the psychology of pack behavior and how/why animals cooperate with each other though.
Dennett (1982)
The Red Queen hypothesis means that humans are probably the latest step in a long sequence of fast (on an evolutionary time scale) value changes. So does Coherent Extrapolated Volition (CEV) intend to
1) extrapolate all the future co-evolutionary battles humans would have and predict the values of the terminal species as our CEV, or is it intended somehow to
2) freeze the values humans have at the point in time we develop FAI and build a cocoon around humanity which will let it keep this (nearly) arbitrarily picked point in its evolution forever?
If it is 1), it seems the AI doesn’t have much of a job to do. Presumably it would interfere with existential risks to humanity and its successor species, and perhaps keep extremely reliable stocks for repopulating if humanity or its successor still manages to kill itself. Maybe even, in a less extreme interpretation, the FAI does what is required to keep humanity and its successors as the pinnacle species, stealing adaptations from unrelated species that actually manage to threaten us and our successors, so we sort of have 1′), which is: extrapolate to a future where the pinnacle species is always a descendant of ours.
If 2), it would seem the FAI could simply build a sim that freezes in place the evolutionary pressures that brought us to this point, as well as freezing into place our own current state, and then run that sim forever; the sim simply removes genetic mutation and perhaps actively rebalances against any natural selection which is currently going on.
We could have BOTH futures: those who prefer 2) go live in the sim that they have always thought was indistinguishable from reality anyway, and those who prefer 1) stay here in the real world and play out their part in evolving whatever comes next. Indeed, the sim of 2) might serve as a form of storage/insurance against existential threats, a source from which human history can be restarted from its state at year 0 of FAI whenever needed.
Does CEV crash in to Red Queen hypothesis in interesting ways? Could a human value be to roll the dice on our own values in hopes of developing an even more effective species?
Neither. CEV is supposed to look at what humanity would want if they were smarter, faster, and more the people they wished they were. It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.
Well, considering that at the point we create the FAI we are completely a product of the blind idiot god, and that our CEV is some extrapolation of where that blind idiot had gotten us by the time we finally got the FAI going, it seems very difficult to me to say that the blind idiot god has at all been taken out of the picture.
I guess the idea is that by US being smart and the FAI being even smarter, we are able to whittle down our values until we get rid of the froth, dopey things like being a virgin when you are married and never telling a lie; we move through the 6 stages of morality to the top one, the FAI discovers the next 6 or 12 stages, and it runs sims or something to cut even more foam and crust until there are only one or two really essential things left.
Of course those one or two things were still placed there by the blind idiot god. And if something other than them had been placed by the blind idiot, CEV would have come up with something else. It does not seem there is any escaping this blind idiot. So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot, if the appearance is false?
We are not escaping the blind idiot god in the sense of it not having any control. We are escaping in the sense that we have full control. To some extent they overlap, but that doesn’t matter. I only care about being in control, not about everything else not being in control.
By luck, we got some things right. We don’t have to get rid of them just because we got them by a random process.
The value is in escaping the parts that harm us. Evolution made me enjoy chocolate, and evolution also made me grow old and die. I would love to have an eternal happy life. I don’t see any good reason to get rid of the chocolate; although I would accept to trade it for something better.
CEV is supposed to refer to the values of current humans. However, this does not necessarily imply that an FAI would prevent the creation of non-human entities. I’d expect that many humans (including me) would assign some value to the existence of interesting entities with somewhat different (though not drastically different) values than ours, and the satisfaction of those values. Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.
An interesting question: is the CEV dynamic? As we spend decades or millennia in the walled gardens built for us by the FAI, would the FAI be allowed to let its values drift through some dynamic process of checking with the humans within its walls to see how their values might be drifting? I had been under the impression that it would not, but that might have been my own mistake.
No. CEV is the coherent extrapolation of what we-now value.
Edit: Dynamic value systems likely aren’t feasible for recursively self-improving AIs, since an agent with a dynamic goal system has incentive to modify into an agent with a static goal system, as that is what would best fulfill its current goals.
It’s not dynamic. It isn’t our values in the sense of what we’d prefer right now. It’s what we’d prefer if we were smarter, faster, and more the people that we wished we were. In short, it’s what we’d end up with if it was dynamic.
Unless the FAI freezes our current evolutionary state, at least as regards our values, the result we would wind up with if CEV derivation were dynamic would be different from what we would end up with if it is just an extrapolation from what current humans want now.
Even if there were some reason to think our current values were optimal for our current environment (and there is actually reason to think they are NOT), we would still have no reason to think they were optimal in a future environment.
Of course, being effectively kept in a really really nice zoo by the FAI, we would not be experiencing any kind of NATURAL selection anymore, and evidence certainly suggests that our volition is to be taller, smarter, have bigger dicks and boobs, and be blonder, tanner, and happier, all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out the eugenics necessary to keep our genome healthy in the absence of natural selection pressures. Certainly CEV keeps us from wanting defective, crippled, and genetically diseased children, so this seems a fairly safe prediction.
It would seem that, as defined, CEV would have to be fixed at the value it was set to when the FAI was created: no matter how smart, how tall, how blond, how curvaceous, or how pudendous we became, we would still be constantly pruned back to the CEV of 2045 humans.
As for our values not even being optimal for our current environment (fuhgedaboud our future environment), it is pretty widely recognized that we evolved for the hunter-gatherer world of 10,000 years ago, with familial groups of a few hundred, hostile reaction against outsiders as a survival necessity, and systems which let fear distort our rational estimations of things in extreme ways.
I wonder if the FAI will be sad not to see what evolution in its unlimited ignorance would have come up with for us? Maybe it will push a few other species to become intelligent and social and let them duke it out, with natural selection running its course on them. As long as they were species that our CEV didn’t feel too warm and fuzzy about, this shouldn’t be a problem. And certainly, as a human in the walled garden, I would LOVE to be studying what evolution does beyond what it has done to us, so this would seem like a fine and fun thing for the FAI to do to keep at least my part of the CEV entertained.
Type error. You can evaluate the optimality of actions in an environment with respect to values. Values being optimal with respect to an environment is not a thing that makes sense. Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that’s not very relevant to CEV.
An FAI can be far more direct than that. Think something more along the lines of “doing surgery to make our bodies work the way we want them to” than “eugenics”.
Do not anthropomorphize an AI.
You are right about the assumptions I made and I tend to agree it is erroneous.
Your post helps me refine my concern about CEV. It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative, in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution and further self-modification.
Probably what drives my fear of CEV not reflecting MY values is dopey and low-probability. In my case it is an aspect of “Everything that comes from organized religion is automatically stupid.” To me, CEV and FAI are the modern dogma: man discovering his natural god does not exist, but deciding he can build his own. An all-loving (Friendly), all-powerful (self-modifying AI after FOOM) father-figure to take care of us (totally bound by our CEV).
Of course there could be real reasons that CEV will not work. Is there any kind of existence proof for a non-trivial CEV? For the most part, values such as “lying is wrong,” “stealing is wrong,” and “help your neighbors” all seem like simplifying abstractions that are abandoned by the more intelligent because they are simply not flexible enough. The essence of nation-to-nation conflict is covert, illegal competition between powerful government organizations that takes place in the virtual absence of all values other than “we must prevail.” I would presume a nation which refused to fight dirty at any level would be less likely to prevail, so such high-mindedness would have no place in the future, and therefore no place in the CEV. That is, if I with normal-ish intelligence can see that most values are a simple map for how humanity should interoperate in order to survive, and that the map is not the territory, then an extrapolation to a MUCH smarter us would likely remove all the simple landmarks we have on maps suited to our current distribution of IQ.
Then consider the value much of humanity places on accomplishment, and the understanding that coddling, keeping as pets, keeping safe, and protecting are at odds with accomplishment; get really really smart about that, and a CEV is likely not to have much in it about protecting us, even from ourselves.
So perhaps the CEV is a very sparse thing indeed, requiring only that humanity, its successors or assigns, survive. Perhaps the FAI sits there not doing a whole hell of a lot that seems useful to us at our level of understanding, with its designers kicking it and wondering where they went wrong.
I guess what I’m really getting at is that perhaps when you use as much intelligence as you can to extrapolate where our values go in the long, long run, you get to the same place the blind idiot was going all along: survival. I understand many here will say no, you are missing out on the bad vs. good things in our current life, how we can cheat death but keep our taste for chocolate. Their hypothesis is that CEV has them still cheating death and keeping their taste for chocolate. I am hypothesizing that CEV might well have the juggernaut of the evolution of intelligence, and not any of the individuals or even species that are parts of that evolution, as its central value. I am not saying I know it will; what I am saying is that I don’t know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn’t crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.
Evolution may be run by a blind idiot, but it has gotten us this far. It is rare that something as obviously expensive as death would be kept in place for trivial reasons. Certainly the good news for those who hate death is that lifespans seem to be more valuable in smart species; I think we live about twice as long as trends across other species would suggest we should, so maybe the optimum continues in that direction. But considering how increased intelligence and understanding is usually the enemy of hatred, it seems at least a possibility that needs to be considered that CEV doesn’t even stop us from dying.
CEV is supposed to value the same thing that humanity values, not value humanity itself. Since you and other humans value future slightly-nonhuman entities living worthwhile lives, CEV would assign value to them by extension.
That’s kind of a tricky question. Humans don’t actually have utility functions, which is why the “coherent extrapolated” part is important. We don’t really know of a way to extract an underlying utility function from non-utility-maximizing agents, so I guess you could say that the answer is no. On the other hand, humans are often capable of noticing when it is pointed out to them that their choices contradict each other, and, even if they don’t actually change their behavior, can at least endorse some more consistent strategy, so it seems reasonable that a human, given enough intelligence, working memory, time to think, and something to point out inconsistencies, could come up with a consistent utility function that fits human preferences about as well as a utility function can. As far as I understand, that’s basically what CEV is.
Do you want to die? No? Then humanity’s CEV would assign negative utility to you dying, so an AI maximizing it would protect you from dying.
If some attempt to extract a CEV has a result that is horrible for us, that means that our method for computing the CEV was incorrect, not that CEV would be horrible to us. In the “what would a smarter version of me decide?” formulation, that smarter version of you is supposed to have the same values you do. That might be poorly defined since humans don’t have coherent values, but CEV is defined as that which it would be awesome from our perspective for a strong AI to maximize, and using the utility function that a smarter version of ourselves would come up with is a proposed method for determining it.
Criticisms of the form “an AI maximizing our CEV would do bad thing X” involve a misunderstanding of the CEV concept. Criticisms of the form “no one has unambiguously specified a method of computing our CEV that would be sure to work, or even gotten close to doing so” I agree with.
My thought on CEV not actually including much individual protection went something like this: I don’t want to die. I don’t want to live in a walled garden, taken care of as though I were a favored pet. Apply intelligence to that, and my FAI does what for me? Mostly lets me be, since it is smart enough to realize that a policy of protecting my life winds up turning me into a favored pet. This is sort of the distinction: ask somebody what they want and you might get stories of candy and leisure; look at them when they are happiest and you might see them doing meaningful and difficult work and living in a healthy manner. Apply high intelligence and you are unlikely to promote candy and leisure. Ultimately, I think humanity careening along on its very own planet as the peak species, creating intelligence in the universe where previously there was none, is very possibly as good as it can get for humanity, and I think it plausible an FAI would be smart enough to realize that; we might be surprised how little it seemed to interfere. I also think it is pretty hard, working part time, to predict what something 1000X smarter than I am will conclude about human values, so I hardly imagine what I am saying is powerfully convincing to anybody who doesn’t already lean that way. I’m just explaining why or how an FAI could wind up doing almost nothing, i.e. how CEV could wind up being trivially empty in a way.
The other aspect of CEV being empty I was thinking of was not our own internal contradictions, although that is a good point, but disagreement across humanity. Certainly we have seen broad ranges of valuations of human life and equality, and broadly different ideas about what respect should look like and what punishment should look like. These indicate to me that a human CEV, as opposed to a French CEV or even a Paris CEV, might well be quite sparse when designed to keep only what is reasonably common to all humanity and all potential humanity. If morality turns out to be more culturally determined than genetic, we could still have a CEV, but we would have to stop claiming it was human and admit it was just us, and that when we said FAI we meant friendly to us but unfriendly to you. The baby-eaters might turn out to be the Indonesians or the Inuit in this case.
I know how hard it is to reach consensus in a group of humans exceeding about 20; I’m just wondering how much a more rigorous process applied across billions is going to come up with.
You can just average across each individual.
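For what it’s worth, here’s a minimal sketch of what “averaging across each individual” might look like; the outcomes, names, and scores are hypothetical, purely for illustration:

```python
# Hypothetical illustration of aggregating preferences by averaging:
# score each candidate outcome by its mean utility across individuals,
# then pick the outcome with the highest mean.

def aggregate_utility(outcome, utilities):
    """Mean utility assigned to `outcome` across all individuals."""
    return sum(u(outcome) for u in utilities) / len(utilities)

# Three people scoring two possible futures on a 0-to-1 scale.
alice = {"walled garden": 0.9, "wild frontier": 0.4}
bob   = {"walled garden": 0.2, "wild frontier": 0.8}
carol = {"walled garden": 0.6, "wild frontier": 0.7}
utilities = [person.get for person in (alice, bob, carol)]

outcomes = ["walled garden", "wild frontier"]
best = max(outcomes, key=lambda o: aggregate_utility(o, utilities))
print(best)  # "wild frontier": mean 0.633 beats 0.567
```

Of course this sweeps the hard part under the rug: real people don’t come with numeric utility functions, which is the whole point of the “coherent extrapolated” machinery.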
Yes, “humanity” should be interpreted as referring to the current population.
Two connotational objections: 1) I don’t think that “constantly pruned back” is an appropriate metaphor for “getting everything you have ever desired”. The only thing that would prevent us from doing X would be the fact that, after reflection, we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are from the MINUS 2045 humans.
Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such an estimate would most likely be a probability distribution over many different possibilities, not one specific outcome.
I’m dubious about the extrapolation—the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.
I am not sure how obvious the part about there being multiple possible futures is. Most likely, the AI would not be able to model all of them. However, without the AI most of them wouldn’t happen anyway.
It’s like saying “if I don’t roll a die, I lose the chance of rolling a 6”, to which I add “and if you do roll the die, you still have a 5/6 probability of not rolling a 6”. Just to make it clear that by avoiding the “spontaneous” future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding a whole probability distribution, which contains many possible futures, both nice and ugly.
Just because AI can model something imperfectly, it does not mean that without the AI the future would be perfect, or even better on average than with the AI.
‘Unmediated’ may not have been quite the word to convey what I meant.
My impression is that CEV is permanently established very early in the AI’s history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.
The overwhelming majority of dynamic value systems do not end in CEV.
What I mean is that if you looked at what people valued, and gave them the ability to self-modify, and somehow kept them from messing up and accidentally doing something that they didn’t want to do, you’d have something like CEV but dynamic. CEV is the end result of this.
What does the Red Queen hypothesis have to do with value change?
With random mutations and natural selection, old values can disappear and new values can appear in a population. The success of the new values depends only on their differential ability to keep their carriers producing children, not on their “friendliness” to the old values of the parents, which is what an FAI respecting CEV is meant to provide.
The Red Queen Hypothesis is (my paraphrase for purposes of this post) that a lot of the evolution that takes place is adaptation not to the unliving environment but to the living, and most importantly also evolving, environment in which we live, on which we feed, and which does its damnedest to feed on us. Imagine a set of smart primates who have already done pretty well against dumber animals by evolving more complex vocal and gestural signalling, and larger neocortices so that complex plans worthy of being communicated can be formulated and understood. But they lack the concept of handing off something they have with the expectation that they might get something they want even more in trade. THIS is essentially one of the hypotheses of Matt Ridley’s book “The Rational Optimist”: that Homo sapiens is a born trader, while the other primates are not. Without trading, economies of scale and specialization do almost no good. With trading and the economies of scale and specialization it enables, a large energy investment in a super-hot brain and some wicked communication gear and skills really pays off.
Subspecies with the right mix of generosity, hypocrisy, selfishness, lust, power hunger, and self-righteousness will ultimately eat the lunch of their brethren and sistern who are too generous, too greedy to cooperate, too lustful to raise their children, or too complacent to seek out powerful mates. This is value drift, brought to you by the Red Queen.
I’ve noticed something: the MIRI blog RSS feed doesn’t update as a new article appears on the blog, but rather at certain times (two or three times a month?) it updates with the articles that have been published since the last update.
Does anyone know why this happens?
Hmm, not sure why that’s happening. I’ll look into it.
You can see it now in action: the RSS feed is two articles behind the blog. (I waited for the problem to show up.)
EDIT (2013-12-28): The RSS feed has updated.
Because humans are imperfect actors, should the class of Basilisks include evidence in favor of hated beliefs?
I’m not sure what you mean by “the class of Basilisks”. Do you mean “sensations that cause mental suffering” or some such?
Stuff that a rational person would be better off not knowing. For example, if I live among people of religion X, and I find out something disgusting that the religion’s founder did, and whenever someone discussed the founder my face betrayed my feelings of disgust, then knowledge of the founder’s misdeeds could harm me.
Interesting. So, living in Soviet Russia a rational person would treat knowledge about GULAG, etc. as a basilisk? Or a rational person in Nazi Germany would actively avoid information about the Holocaust?
It depends on one’s own risk factors. It’s REALLY important to know about the Holocaust if you’re Jewish or have Jewish ancestry, but arguably safer, or at least more pleasant, not to if you don’t.
I think the moral question (as opposed to the practical safety question) of “is it better to know a dark truth or not” will come down to whether or not you can effectively influence the world after knowing it. You can categorize bad things into avoidable/changeable and unavoidable/unchangeable, and (depending on how much you value truth in general) knowing about an unavoidable bad thing will only make you less happy without making the world a better place.
Unfortunately, it’s pretty hard to tell whether you can do anything about a bad thing without learning what it is.
If anything, my impression is that knowing about the Holocaust has made my mother significantly less realistic with respect to assessing potential threats faced by Jews in the present day.
On the other hand, to the extent that it represents a general lesson about human behavior, that understanding might end up being valuable for anyone. Being non-Jewish may actually make it easier to properly generalize the principles rather than thinking of it in terms of unique identity politics.
It’s worth knowing that societies can just start targeting people for no reason. It can be hard to have a sense of proportion about risks.
I suspect the best strategy is to become such a distinguished person that more than one country will welcome you, but the details are left as an exercise for the student.
This is possible, but I meant knowing about the Holocaust as it’s ongoing, like Lumifer’s example of knowing about gulags while living in Soviet Russia.
I think he meant in Nazi Germany, not today.
First they came for the communists, and I did not speak out—
because I was not a communist;
Then they came for the socialists, and I did not speak out—
because I was not a socialist;
Then they came for the trade unionists, and I did not speak out—
because I was not a trade unionist;
Then they came for the Jews, and I did not speak out—
because I was not a Jew;
Then they came for me—
and there was no one left to speak out for me.
Martin Niemöller
Speaking out would’ve gotten you killed.
This is a poem about poor Bayesian updating: this person should’ve moved away.
To quote you
This person, a German Protestant minister, followed your advice, did he not?
Good point. I totally covered every base with that one line of advice, and meant it to apply to all people in all situations.
More seriously, my advice very clearly was a subset of the more general advice: Be fucking wary of angering powerful entities. He clearly did NOT follow that advice.
The poem is about the importance of speaking out when it’s still safe (or relatively safe) to do so.
It is unclear what the consequences and side-effects of not knowing the specific evidence will be. And on the meta level: what the consequences will be of modifying your cognitive algorithms to avoid the paths that seem to lead to such evidence.
Depending on all these specific details, it may be good or bad. Human imperfection makes it impossible to evaluate; and actually not knowing the specific evidence makes it impossible again. So… the question is analogous to: “If I am too stupid to understand the question, should I answer ‘yes’, or should I answer ‘no’?” (Meaning: yes = avoid the evidence, no = don’t avoid the evidence.)
I recently read a blog post claiming that alcohol consumption can increase testosterone levels up to 5 hours after intake:
I’m still not going to drink copious amounts of alcohol after a workout...
As usual, examine.com has some information related to this.
A glass of wine (or two (or three)) or a beer after a workout has noticeably improved how I feel the next day. I didn’t believe this post either, but it appears to have panned out.
What fanfics should I read (perhaps as a HPMOR substitute)?
Harry Potter and the Natural 20.
Object-level response: To the Stars. Meta-level: check the monthly media thread archives and/or HPMOR’s author notes. They have lots of good suggestions and in-depth reviews.
If you haven’t taken EY’s suggestion in the author’s notes to read Worm yet, do so. It’s original fiction, but you probably don’t mind.
Edit: also this might belong in the media thread?
There’s a new subreddit dedicated to rationalist fiction. You can check out stories linked there. I’m currently reading Rationalising Death, a Death Note fanfic, and it’s pretty good even though I haven’t seen the anime on which it’s based.
I’m also one-third into Amends, or Truth and Reconciliation, which is a decent look at how Harry Potter characters would logically react to the end of the Second Wizarding War. So far no idiot balls and pretty good characterization.
Rationalising Death may be better if you haven’t read Death Note; it’s pretty good about explaining everything. As someone familiar with Death Note my feeling so far has been that Rationalising Death hasn’t diverged enough; it sometimes feels like just rehashing the original. Not always, certainly, and I’m overall enjoying it, but that’s seemed like the biggest flaw to me so far (admittedly, the author says divergence will increase as it goes along, and there are signs of that pattern).
Chapter 7 is where it really starts moving on its own track, in my opinion. Things are really shaking up, and unknown forces are now in play.
I quite enjoyed https://www.fanfiction.net/s/2857962/1/Browncoat-Green-Eyes
(Yes, it’s a Harry Potter/Firefly crossover. It’s much, much better than the premise makes it sound)
I took this recommendation, and hated it. Got as far as the thing with Jayne’s mother before I accepted that it wasn’t going to get any better.
If you’re some random person, wondering whether you should listen to me or Alsadius, I recommend the following test: read the first chapter. If you like chapter one you’ll probably like the rest of it, and if you don’t, you won’t.
I agree with this test. True of many stories, really. I’m a fan of the plot, which only really comes together 2/3 of the way through, but if you’re not a fan of the banter, it’s not worth it.
I started reading it. Harry isn’t Harry. He’s constantly spouting “charming” and “snarky” lines at every character, and is inexplicably an expert pilot who knows everything about the Firefly-verse after a time-skip of 2 years. If you hadn’t told me he was Harry Potter I would’ve guessed he was Pham Nuwen. There are also tons of call-backs to past Firefly events and lines of dialogue, which shows pretty weak imagination on the part of the author. A reference is one thing, but you don’t make it by having characters constantly go “Hey, remember that one time when we did X?” “Hey, remember your wife?”.
The request was for a HPMOR substitute. I figured that a Harry-like Harry wasn’t exactly a necessity. As I said in an above comment, this author uses canon as a loose suggestion.
I keep running into that. Does it make sense to read if you haven’t watched Firefly?
(I have watched Firefly—an episode or two. Didn’t like it.)
Not really. You can get by without Potter knowledge(as usual, this author mangles it a fair bit anyways), but the plot is heavily tied into that of Firefly/Serenity, and the Firefly characters are more prominent. That said, feel free to read his Potter-only stuff instead—I haven’t gone through his whole oeuvre, but everything I’ve read has been hilarious and well-written.
I think I want to buy a new laptop computer. Can anyone here provide advice, or suggestions on where to look?
The laptop I want to replace is a Dell Latitude D620. Its main issues are weight, heat production, slowness (though probably in part from software issues), inability to sleep or hibernate (buying and installing a new copy of XP might fix this), lack of an HDMI port, and deteriorated battery life. I briefly tried an Inspiron i14z-4000sLV, but it was still kind of slow, and trying to use Windows 8 without a touchscreen was annoying.
I remember reading that it’s unsafe to move or jostle a laptop with a magnetic hard drive while it’s running, because of the moving parts. Based on that, it seems like it’s best to get one with only a solid-state drive and no magnetic drive. Is that accurate?
I’m somewhat ambivalent about how to trade off power against heat and weight, or against cost of replacement if it’s lost or damaged.
(Edit: I eventually ordered a Dell XPS 13.)
What’s your budget?
How much hard drive space are you using currently?
I’d rather not worry about budget.
Not counting external storage, I’m using about 25 GB of the D620’s 38 GB, plus 25 GB (not counting software) on the family desktop PC.
(After ordering the XPS, I realized that it doesn’t have a removable battery, which seems like a longevity issue; but it seems likely that that’s standard for devices of its weight class.)
Not necessarily. Most laptops nowadays are equipped with anti-shock hard drive mounts, and the hard drives are specially designed to be resistant to shock. The advantage of an SSD is speed, not reliability.
This reliability report (with this caveat) indicates that Samsung is the most reliable brand on the market for now. I’ve always considered Lenovo and ASUS to be high quality, with ASUS generally having cheaper and more powerful computers (and a trade off in actually figuring out which one you want, that website is terrible).
I would expect an SSD to be MUCH more reliable than a hard drive.
SSDs are solid-state devices with no moving parts. Hard drives are mechanical devices with platters rapidly rotating at microscopic tolerances.
So now that I’ve declared my prior let’s see if there’s data… :-)
“From the data I’ve seen, client SSD annual failure rates under warranty tend to be around 1.5%, while HDDs are near 5%,” Chien said. (where Chien is “an SSD and storage analyst with IHS’s Electronics & Media division”) Source
Reliability for SSDs is better than for HDDs. However, they aren’t so much more reliable that it alters best practices for important data keeping—at least two backups, and one off-site.
Oh, certainly.
Safety of your data involves considerably more than the reliability of your storage devices. SSDs won’t help you if your laptop gets stolen or if, say, your power supply goes berserk and fries everything within reach.
Thanks for replying. I haven’t looked at your link yet, but it seems like there’d be limits to how much shock protection could be fit in an ultrathin laptop, and it’d be hard to find out how good it is for specific models. (And the speed advantage seems like enough reason to want an SSD in any case.)
Check out /r/suggestalaptop?
General comments: SSDs are generally faster than magnetic drives, but often fail much sooner.
If you’re not positive you want to replace it altogether: You might be able to fix your heat/slowness issues just by taking a can of compressed air to it. And you could probably buy a new battery. Replacing it might still be a better proposition overall, though...
Source on SSDs failing sooner? I thought (or assumed) it was the opposite. A quick Google search turns up the headline “SSD Annual Failure Rates Around 1.5%, HDDs About 5%”.
Looking further, though, I also see: “An SSD failure typically goes like this: One minute it’s working, the next second it’s bricked.”. The page goes on to say that there’s a service that can reliably recover the data from a dead drive, but that seems like a privacy concern (if everything on the drive weren’t logged by the NSA to begin with).
On the pro-SSD side, though, I try to keep anything important online or on an external drive anyway (for easier moving between devices). And I really like the idea of a laptop I can casually carry around without worrying about platters and heads.
Thanks for the suggestions; I may try the Reddit link later. (Edit: posted a thread here.)
If you are backing up your data responsibly, the SSD failure isn’t as much of an issue. And if you aren’t backing up your data, then you need to take care of that before worrying about storage failure.
Update: I’ve provisionally ordered a Dell XPS 13.
This story, where they treated and apparently cured someone’s cancer, by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.
cancer treatment link
Found the actual papers the coverage is based on.
How it was done: they removed T cells (the cells which directly kill body cells infected with viruses, unlike B cells, which secrete antibody proteins) and used replication-incapable viruses to insert a chimeric gene composed of part of a mouse antibody against human B-cell antigens, part of the human T-cell receptor that activates the T cell when it binds to something, and an extra activation domain to make T-cell activation and proliferation particularly strong. The cells were reinjected, proliferated over 1000-fold, killed off all the cancerous leukemia cells that could be detected in most patients, and are sticking around as a permanent part of the patients’ immune systems. Relapse rates have been pretty low (but not zero).
This type of cancer (B-cell-originating leukemia) is extraordinarily well suited to this kind of intervention for two reasons. One, there is an antigen on B cells and B-cell-derived cancers that can be targeted without destroying anything else important in the body other than normal B cells. Two, since the modded T cells destroy both normal B cells carrying this antigen and the cancerous B cells, the patients have a permanent lack of antibodies after treatment, which means their immune system has a hard time reacting against the modified receptors on the modded T cells; that reaction has been a problem in other studies. Fortunately, people can live without B cells if they are careful—it’s living without T cells that you cannot do. They also suspect that pre-treating with chemotherapy majorly helped these immune cells go after the weakened cancer cell population.
You can repeat this with T-cells tuned against any protein you want, but you had better watch out for autoimmune effects or the patient’s immune system going after the chimeric protein you add and eliminating the modded population. And watch out ten years down the line for any T-cell originating lymphomas derived from wonky viral insertion sites in the modded cells—though these days there are ‘gentler’ viral agents than in the old days with a far lower rate of such problems, and CRISPR might make modding cells in a dish even more reliable soon.
Another thing in the toolkit. No silver bullets. Still pretty darn cool.
Nicholas Agar has a new book. I read Humanity’s End and may even read this...eventually.
http://www.amazon.com/gp/aw/d/0262026635/ref=mp_s_a_1_3?qid=1386699492&sr=8-3
Scientology uses semantic stopsigns:
http://www.garloff.de/kurt/sekten/mind1.html
Interesting. Reminds me of Orwell’s “crimestop”:
Hm, this actually sounds like it could be useful...
I wonder if it would be valuable to get partway in to Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.
Part of the Scientology program involves sharing personal secrets. If you quit, they can use those against you. Scientology is set up in a way that makes it hard to quit.
A lot of people still do, though. Last time I looked into this, the retention rate (reckoned between the first serious [i.e. paid] Scientology courses and active participation a couple years later) was about 10%.
It’s not a question of whether they do leave, but whether they do come out ahead.
Scientology courses aren’t cheap. If you are going to invest money into training, I would prefer to buy training from an organisation that makes leaving easy instead of making it painful.
Oh, I’m pretty confident they don’t. But if you had strong reasons for joining and leaving Scientology other than what Scientologists euphemistically call “tech”, then in the face of those base rates it seems unlikely to me that they’d manage to suck you in for real.
There are probably safer places to see groupthink in action, though.
More precisely, sharing personal secrets while connected to an amateur lie detector. And the secrets are documented on paper and stored in archives of the organization. It’s optimized for blackmailing former members.
Relevant, in case you hadn’t already seen it.
A therapist specializing in exposure therapy will be more useful than a cult for this purpose.
And also more expensive. But yeah, there are easier ways to get it than going into Scientology.
Motivated cognition is pretty much the only kind of cognition people do. It seems epistemically healthy to sample cognition stemming from diverse motivations.
Observation: game theory is not uniquely human, and does not inherently cater to important human values.
Immediate consequence: game theory, taken to extremes already found in human history, is inhuman.
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
Conjecture: if you attempt to optimize by taking your own use of game theory and similar arts to similar extremes, you will become a monster of a similar type.
Observation: a refusal to use game theory in your considerations results in a strictly worse life than otherwise, and possibly using it more often, more intensely, and with less puny human mercy will result in a better life for you alone.
Conjecture: this really, really looks like the scary and horrifying spawn of a Red Queen race, defecting on PD, and being a jerk in the style of Cthulhu.
Thoughts?
Continue laying siege to me; I’m done here.
Sorry, how did you go from “non-human agents use X” (a statement about commonality) to “X is inhuman” (a value judgement), to “if you use X you become a monster” (an even stronger value judgement), to “being a jerk in the style of Cthulhu” (!!!???).
Does this then mean you think using eyesight is monstrous because cephalopods also have eyes they independently evolved?
Or that maximizing functions is a bad idea because ants have a different function than humans?
Nonhuman agents use X → X does not necessarily, and pretty likely does not, preserve human values → your overuse of X will cause you not to preserve human values. By “being a jerk in the style of Cthulhu” I mean being a jerk incidentally. Eyesight is not a means of interacting with people, and maximization is not a bad thing if you maximize for the right things, which game theory does not necessarily do.
Try replacing “game theory” with “science” or “rationality” in your rant. Do you still agree with it?
The appeal to probability doesn’t work here, since you’re not drawing at random from X.
I suspect all economics is inhuman. I suspect that any complex economy that connects millions or billions of people is going to be incomprehensible and inhuman. By far the best explanation I’ve heard of this thought is by Cosma Shalizi.
The key bit here is the conclusion:
I suspect this sub-thread implicitly defined “human” as “generating warm fuzzies”. There are, um, problems with this definition.
This is a great way to express it. I was thinking about something similar, but could not express it like this.
The essence of the problem is, all “systems of human interaction” are not humans. A market is not a human. An election is not a human. An organization is not a human. Etc. Complaining that we are governed by non-humans is essentially complaining that there is more than one human, and that the interaction between humans is not itself a human. Yes, it is true. Yes, it can (and probably will) have horrible consequences. It just does not depend on any specific school of economics, or anything like this.
Not uniquely human does not imply inhuman. Lungs are not uniquely human; they are hardly inhuman, though.
Generally, using loaded, non-factual words like “inhuman” and “monster” and “Cthulhu” and “horrifying” and “puny” in a pseudo-logical format is worthy of a preacher exhorting illiterates. But is it helpful here? I’d like to think it isn’t, and yet I’d rather discuss game theory in a visible thread than downvote your post.
“Inhuman” has strong connotations of being inimical to human values—your argument looks different if it starts with something like “game theory is non-human—it’s a simplified version of some aspects of human behavior”. In that case, altruism is non-human in the same sense.
I guess I’m mostly reacting to RAND and its ilk, having read the article about Schelling’s book (which I intend to buy), and am thinking of market failures, as well.
OK Mr Bayeslisk, I am one-boxing you. I am upvoting this post now, knowing that you predicted I would upvote it and intended all along to include or add some links to the above post, so I don’t have to do a lot of extra work to figure out what RAND is and what book you are talking about.
That is actually not true at all. I was actually planning on abandoning this trainwreck of an attempt at dissent. But since you’re so nice:
http://en.wikipedia.org/wiki/RAND_Corporation
http://en.wikipedia.org/wiki/Thomas_Schelling#The_Strategy_of_Conflict_.281960.29
Apparently I was right to one-box all along! Thanks!
Are you thinking of failures of market alternatives as well?
What you’re referring to is a problem I’ve been thinking about and chipping away at for some time; I’ve even had some discussions about it here and people have generally been receptive. Maybe the reason you’re being downvoted is that you’re using the word ‘human’ to mean ‘good’.
The core issue is that humans have empathy, and by this we mean that other people’s utility functions matter to us. More precisely, our perception of other people’s utility forms a part of our utility which is conditionally independent of the direct benefits to us.
Our empathy not only extends to other humans, but also animals and perhaps even robots.
So what are examples of human beings who lack empathy? Lacking empathy is basically the definition of psychopathy. And, indeed, some psychopaths (not all, but some) have been violent criminals who e.g. killed babies for money, tortured people for amusement, etc. etc.
So you’re essentially right that a game theory where the players do not have models of each other’s utility functions shows aspects of psychopathy and ‘inhumanity’.
But that doesn’t mean game theory is wrong or ‘inhuman’! All it means is that you’re missing the ‘empathy’ ingredient. It also means that it would not be a good idea to build an AI without empathy. That’s exactly what CEV attempts to solve. CEV is basically a crude attempt at trying to instill empathy in a machine.
Yes, that was what I was getting at. Like I said elsewhere—game theory is not evil. It’s just horrifyingly neutral. I am not using inhuman as bad; I am using inhuman as unfriendly.
Then you must be horrified by all science.
Game theory is about strategies, not about values. It tells you which strategy you should use if your goal is to maximize X. It does not tell you what X is. (Although some X’s, such as survival, are instrumental goals for many different terminal goals, so they will be supported by many strategies.)
There is a risk of maximizing some X that looks like a good approximation of human values, but its actual maximization is unFriendly.
Connotational objection: so is any school of anything; at least unless the problem of Friendliness is solved.
OK, I think I was misunderstood and also tired and phrased things poorly. Game theory itself is not a bad thing; it is somewhat like a knife, or a nuke. It has no intrinsic morality, but the things it seems to tend to be used for, for several reasons, wind up being things that eject negative externalities like crazy.
Yes, but this seems to be most egregious when you advocate letting millions of people starve because the precious Market might be upset.
Who precisely are you thinking of, who advocated allowing mass starvation for this reason?
Millions of people did starve for reasons completely opposed to free markets.
Besides the fact that maximizing a non-Friendly function leads to horrible results (whether the system being maximized is the Market, the Party, the Church, or… whatever), what exactly are you trying to say? Do you think that markets create more horrible results than those other options? Do you have any specific evidence for that? In that case it would be probably better to discuss the specific thing, before moving to a wide generalization.
I have no idea how the Holodomor is germane to this discussion.
The observation being made, I believe, is that the most prominent examples in the 20th century of mass death due to famine were caused by economic and political systems very far from the Austrian school economics. There’s a longish list of mass starvation due to Communist governments.
Is there an example of Austrian economists giving advice that led to a major famine, or that would have led to famine? I cannot offhand think of an example of anybody advocating “letting millions of people starve because the precious Market might be upset.”
You said “letting millions of people starve”.
There were not that many cases of millions of people starving during the last hundred years.
Yes.
I suspect you’re looking at it with a rather biased view.
Sigh. You made a cobman—one constructed of mud and straw. Congratulations.
Game theory is not like calculus or evolutionary theory—something any alien race smart enough to develop space travel is likely to formulate. It does represent human values.
Can you explain this? I always thought of game theory as being like calculus, and not about human values (like this comment says).
You solve games by applying solution criteria. Unfortunately, for any reasonable list of solution criteria you will always be able to find games where the result doesn’t seem to make sense. Also, there is no set of obviously correct and complete solution concepts. Consider the following game:
Two rational people simultaneously and secretly write down a real number in [0,100]. The person who writes down the highest number gets a payoff of zero, and the person who writes down the lowest number gets that number as his payoff. If there is a tie, they each get zero. What happens?
The only “Nash equilibrium” (the most important solution concept in all of game theory) is for both players to write down 0, but this is a crazy result, because picking 0 is weakly dominated by picking any other number (except 100).
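To make this concrete, here’s a minimal sketch (my own illustration, not anything from the comment above) that checks both claims on a discretized version of the game, with integer bids from 0 to 100:

```python
# Discretized version of the game described above: both players bid an
# integer in [0, 100]; the low bidder wins their own bid, the high bidder
# (or both, in a tie) gets zero.

def payoff(mine, theirs):
    """Payoff to the player bidding `mine` against an opponent bidding `theirs`."""
    return mine if mine < theirs else 0

bids = range(101)

# Claim 1: (0, 0) is a Nash equilibrium -- no unilateral deviation improves on it.
assert all(payoff(d, 0) <= payoff(0, 0) for d in bids)

# Claim 2: bidding 0 is weakly dominated by every bid except 100 -- the
# alternative never does worse, and does strictly better against some bid.
for x in range(1, 100):
    assert all(payoff(x, o) >= payoff(0, o) for o in bids)
    assert any(payoff(x, o) > payoff(0, o) for o in bids)

# Bidding 100 pays zero against everything, so it does not dominate 0.
assert all(payoff(100, o) == 0 for o in bids)

print("Both claims check out on the discretized game.")
```

The discretization is only a stand-in for the continuous game, but it’s enough to show why the all-zeros equilibrium feels crazy: every positive bid below 100 does at least as well as 0 against any opponent, and strictly better against some.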
Game theory also has trouble solving many games where (a) Player Two only gets to move if Player One does a certain thing, (b) Player One’s strategy is determined by what he expects Player Two would do if Player Two gets to move, and (c) in equilibrium Player Two never moves.
I’m not understanding you, the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did.
Many games don’t have solutions, or the solutions depend on arbitrary criteria.
… and?
Are you agreeing or disagreeing with “the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did”?
It depends on what you mean by “might” and “discover” (as opposed to invent). I predict that smart aliens’ theories of physics, chemistry, and evolution would be much more similar to ours than their theories of how rational people play games would be.
How so? Game theory basically studies interactions between two (or more) agents which can make choices the outcome of which depends on what the other agent does. You can use game theory to model interaction between two pieces of software, for example.
Please see my answer to PECOS-9.
I still don’t see what all this has to do with human values.
I am talking about game theory as a field of inquiry. You’re talking about the current state of the art in this field and pointing out that it has unsolved issues. So? Physics has unsolved issues, too.
There are proofs showing that game theory can never be solved.
I still don’t see what all this has to do with human values.
I also don’t understand what it means for game theory to “be solved”. If you mean that in certain specific situations you don’t get an answer, that’s true of physics as well.
Game theory would be solved if there were a set of reasonable criteria which, if applied to every possible game of rational players, would cause you to know what the players would do.
To continue with physics: physics would be solved if there were a set of reasonable criteria which, if applied to every possible interaction of particles, would cause you to know what the particles would do.
Consider a situation in which, using physics, you could prove both that (1) X won’t happen and (2) X will happen. If such a situation existed, physics couldn’t be solved; but my understanding of science is that such a situation is unlikely to exist. Alas, this kind of situation does come up in game theory.
Well, it’s math but...
Whether you get an answer depends on the criteria you choose, but these criteria must have arbitrariness in them even for rational people. Consider the solution concept “never play a weakly dominated strategy.” This is neither right nor wrong but an arbitrary criterion that reflects human values.
Saying “the game theory solution is A,Y” is closer to “this picture is pretty” than “the electron will...”
Also, assuming someone is rational and wants to maximize his payoff isn’t enough to fully specify him, and consequently you need to bring in human values to figure out how this person will behave.
You seem to be talking about forecasting human behavior and giving advice to humans about how to behave.
That, of course, depends on human values. But that is related to game theory in the same way engineering is related to mathematics. If you are building a bridge you need to know the properties of materials you’re building it out of. Doesn’t change the equations, though.
You know that a race of aliens is rational. Do you need to know more about their values to predict how they will build bridges? Yes. Do you need to know more about their values to predict how they will play games? Yes.
Game theory is (basically) the study of how rational people behave. Unfortunately, there will always exist relatively simple games for which you can not use the tools of game theory to determine how players will behave.
Ah. We have a terminology difference. I defined my understanding of game theory a bit upthread and it’s not about people at all. For example, consider software agents operating in a network with distributed resources and untrusted counterparties.
I do not feel up to defending myself against multiple relatively hostile people. My apologies for having a belief that does not correspond to the prevailing LW memeplex. Kindly leave me alone to be wrong.