Open thread, Sep. 14 - Sep. 20, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Could we live forever? - Hey guys, I made a film about transhumanism for BBC News. It features some people in this community and some respected figures. Let me know what you think and if I missed anything, etc.
https://www.youtube.com/watch?v=STsTUEOqP-g&feature=youtu.be
Awesome! You should repost this as a top level post.
26 Things I Learned in the Deep Learning Summer School
Probably the biggest cryonics story of the year. In the print edition of The New York Times, it appeared on the front page, above the fold.
A Dying Young Woman’s Hope in Cryonics and a Future, by Amy Harmon
http://www.nytimes.com/2015/09/13/us/cancer-immortality-cryogenics.html
You can also watch a short documentary about Miss Suozzi here:
http://www.nytimes.com/video/science/100000003897597/kim-suozzis-last-wishes.html
Yet som there be that by due steps aspire
To lay their just hands on that Golden Key
That ope’s the Palace of Eternity.
(John Milton, Comus, lines 12-14)
May Kim find that Golden Key some day.
I wonder if the article will increase Alcor’s membership. Since “Why have so few people signed up for cryonics?” is a big mystery for cryonics supporters such as myself, we should use the opportunity of the article to make predictions about its impact. I predict that the article will boost Alcor’s membership over the next year by 10% above trend, which basically means membership will be 10% higher a year from now than it is currently.
EDIT: I predict Alcor’s membership will be 11% higher a year from now than it is today. Sorry for the poorly written comment above.
Are those two 10% figures equal only by coincidence?
To me, “boost membership by 10% above trend” means either “increase this year’s signups by 10% of what they would otherwise have been” or else “increase this year’s signups enough to make membership a year from now 10% higher than it otherwise would have been”.
The second of these is equivalent to “membership will be 10% higher a year from now” iff membership would otherwise have been exactly unaltered over the year, which would mean that signups are a negligibly small fraction of current membership.
The first is equivalent to “membership will be 10% higher a year from now” iff m+1.1s = 1.1m where m,s are current membership and baseline signups for the next year, which is true iff m = 11s.
Those are both rather specific conditions, and the first seems pretty unlikely. Did you actually mean either of them, or have I misunderstood?
I am reading the grandparent literally as “increase membership” which does imply that the current trend is flat and the membership numbers are not increasing.
Could be. But is Alcor really doing so badly? (Or: does James_Miller think they are?)
The graphs on this Alcor page seem to indicate that membership is in fact increasing by at least a few percent year on year, even if people are no longer counted as members after cryosuspension.
Hm. Yes, Alcor’s membership is going up nicely. I don’t know what James_Miller had in mind, then.
I made this into a prediction on PredictionBook.
Is the relevant data publicly accessible?
Yes, the data is online.
My understanding is that the number of people signed up is in the thousands, which, if correct, means probably a bit less than one in a million people.
You might have meant it rhetorically, but if it is true that it is a “big mystery” to you why most people have not signed up, then your best guess for the reason should be that signing up for cryonics is foolish and useless. It is just as if a patient in a psychiatric ward finds himself thinking, “I wonder why so few people say they are Napoleon?”: his best guess should be that the reason is that the people he knows, including himself, are not in fact Napoleon.
As another example, if you are at the airport and you see two lines while you are checking in, a very long one and a very short one, and you say, “It’s a big mystery to me why so many people are going in that long line instead of the short one,” then you’d better get in that long line, because if you get in the short one, you are going to find yourself kicked out of it. On the other hand if you do know the reasons, you may be able to get in the short line.
In the cryonics case, this is pretty much true no matter how convincing you find your reasons, until you can understand why people do not sign up.
But the intellectual quality of some of the people who have signed up for cryonics is exceptionally high (Hanson, Thiel, Kurzweil, Eliezer). Among the set of people who thought they were Napoleon (excluding the original), I doubt you would find many who had racked up impressive achievements.
What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?
“What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?”
As I pointed out the last time you brought this up, these people aren’t just famous for being smart; they’re also famous for being contrarians and futurists. Cryonics is precisely an area in which you’d expect them to make a bad bet, because it’s seen as weird and it’s futuristic.
This depends on whether you model contrarianism and futurism as a bias (‘Hanson is especially untrustworthy about futurist topics, since he works in the area’) v. modeling contrarianism and futurism as skills one can train or bodies of knowledge one can learn (‘Hanson is especially trustworthy about futurist topics, since he works in the area’).
My typical heuristic for reliable experts (taken from Thinking, Fast and Slow, I think) is that if experts have tight, reliable feedback loops, they tend to be more trustworthy. Futurism obviously fails this test. Contrarianism isn’t really a “field” in itself, and I tend to think of it more as a bias… although EY would obviously disagree.
Then it might be that futurism is irrelevant, rather than being expertise-like or bias-like. (Unless we think ‘studying X while lacking tight, reliable feedback loops’ in this context is worse than ‘neither studying X nor having tight, reliable feedback loops.’)
Thiel, Yudkowsky, Hanson, etc. use “contrarian” to mean someone who disagrees with mainstream views. Most contrarians are wrong, though correct contrarians are more impressive than correct conformists (because it’s harder to be right about topics where the mainstream is wrong).
In this case futurism is two things in these people:
1. A belief in expertise about the future.
2. A tendency towards optimism about the future.
Combined, these mean that these people both think cryonics will work in the future, and are more confident in this assertion than warranted.
I don’t think so… it’s more someone who has the tendency (in the sense of an aesthetic preference) to disagree with mainstream views. In this case, they would tend to be drawn towards cryonics because it’s out of the mainstream, which should give us less confidence that they’re drawn towards cryonics because it’s correct.
One of the most common ways they use the word “contrarian” is to refer to beliefs that are rejected by the mainstream, for whatever reason; by extension, contrarian people are people who hold contrarian beliefs. (E.g., Galileo is a standard example of a “correct contrarian” whether his primary motivation was rebelling against the establishment or discovering truth.) “Aesthetic preference” contrarianism is a separate idea; I don’t think it matters which definition we use for “contrarianism”.
I think it matters in this context. If these people are contrarian simply because they happen to have lots of different views, then it’s irrelevant that they’re contrarian. If they’re contrarian because they’re DRAWN towards contrarian views, it means they’re biased towards cryonics.
I agree it matters in this case, but it doesn’t matter whether we use the word “contrarianism” vs. tabooing it.
Also, your summary assumes one of the points under dispute: whether it’s possible to be good at arriving at true non-mainstream beliefs (‘correct contrarianism’), or whether people who repeatedly outperform the mainstream are just lucky. ‘Incorrect contrarianism’ and ‘correct-by-coincidence contrarianism’ aren’t the only two possibilities.
Ok, so to summarize:
1. These people are futurists.
1a. If you believe futurists have more expertise on the future, then they are more likely to be correct about cryonics.
1b. If you believe expertise needs tight feedback loops, they are less likely to be correct about cryonics.
1c. If you believe futurists are drawn towards optimistic views about the future, they are less likely to be correct about cryonics.
2. These people are contrarians.
2a. If you believe they have a “correct contrarian cluster” of views, they are more likely to be correct about cryonics.
2b. If you believe that they arrived at contrarian views by chance, they are no more or less likely to be correct about cryonics.
2c. If you believe that they arrived at contrarian views because they are drawn to contrarian views, they are less likely to be correct about cryonics.
I believe 1b, 1c, and 2c. You believe 1a and 2a. Is that correct?
The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high as well.
But the average is lower, and not signing up for cryonics is a “default” action: you don’t have to expend thought or effort in order to not be signed up for cryonics. A more relevant comparison might be to people who have written refutations or rejections of cryonics.
I don’t think the average matters; it’s the right tail of the distribution that’s important.
Take, say, people with 130+ IQ—that’s about 2.5% of your standard white population and the overwhelming majority of them are not signed up. In fact, in any IQ quantile only a minuscule fraction has signed up.
entirelyuseless made the point that low cryonics use rates in the general population are evidence against the effectiveness of cryonics. James Miller responded by citing evidence supporting cryonics: that cryonicists are disproportionately intelligent/capable/well-informed. If your response to James is just that very few people have signed up for cryonics, then that’s restating entirelyuseless’ point. “The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high” would be true even in a world where every cryonicist were more intelligent than every non-cryonicist, just given how few cryonicists there are.
No, I don’t think he did. The claim that low uptake rate is evidence against the effectiveness of cryonics is nonsense on stilts. entirelyuseless’ point was that if you are in a tiny minority and you don’t understand why the great majority doesn’t join you, your understanding of the situation is… limited.
James Miller countered by implying that this problem can be solved if one assumes that it’s the elite (IQ giants, possessors of secret gnostic knowledge, etc.) which signs up for cryonics and the vast majority of the population is just too stupid to take a great deal when it sees it.
My counter-counter was that you can pick any measure by which to choose your elite (e.g. IQ) and still find that only a minuscule fraction of that elite chose cryonics—which means that the “just ignore the stupid and look at the smart ones” argument does not work.
Someone who mistakenly believes that he is Napoleon presumably thinks that he himself is impressive intellectually, and in the artificial example I was discussing, he would think that others who believe the same thing are also impressive. However, it’s also true that outside observers would not admit that, and in the cryonics case many people would, so in this respect the cryonics case is much more favorable than the Napoleon example. However, as Lumifer pointed out, this is not a terribly strong positive argument, given that you will be able to find equally intelligent people who have not signed up for cryonics.
In the Hanson etc airport situation, I would at least ask them why everyone else is in the long line, and if they had no idea then I would be pretty suspicious. In the cryonics case, in reality, I would expect that they would at least have some explanation, but whether it would be right or not is another matter. Ettinger at least thought that his proposal would become widely accepted rather quickly, and seems to have been pretty disappointed that it was not.
In any case, I wasn’t necessarily saying that signing up for cryonics is a bad thing, just that it seems like a situation where you should understand why other people don’t, before you do it yourself.
gjm posted a link to the data: Alcor says it has about 1,000 members at the moment.
Yes, I meant including other groups. It might be around 2,000 or so total but I didn’t want to assert that it is that low because I don’t know that for a fact.
But the logic that makes signing up for cryonics make sense is the same logic that humans are REALLY BAD AT doing. Following the crowd is generally a good heuristic, but you have to recognize its limitations.
In principle this is saying that you know why most people don’t sign up, so if you’re right about that, then my argument doesn’t apply to your case.
I’m impressed at how positively the author portrayed cryonicists. The parts which described the mishaps which occurred during/before the freezing process were especially moving.
The article discusses the Brain Preservation Foundation. The BPF has responded here:
A COURAGEOUS STORY OF BRAIN PRESERVATION, “DYING YOUNG” BY AMY HARMON, THE NEW YORK TIMES.
http://www.brainpreservation.org/a-courageous-story-of-brain-preservation-dying-young-by-amy-harmon-the-new-york-times/
How Grains Domesticated Us by James C. Scott. This may be of general interest as a history of how people took up farming (a more complex process than you might think), but the thing that I noticed was that there are only a handful (seven, I think) of grain species that people domesticated, and it all happened in the Neolithic Era. (I’m not sure about quinoa.) Civilized people either couldn’t or wouldn’t find another grain species to domesticate, and civilization presumably wouldn’t have happened without the concentrated food and feasibility of social control that grain made possible.
Could domesticable grain be a rather subtle filter for technological civilization? On the one hand, we do have seven species, not just one or two. On the other, I don’t know how likely the biome which makes domesticable grain possible is.
I suspect that developing a highly nutritious crop that is easy to grow in large quantities is a prerequisite for technological civilization. However, I wonder if something other than grains might have sufficed (e.g. potatoes).
One of the points made in the video is that it’s much easier to conquer and rule people who grow grains than people who grow root crops. Grains have to be harvested in a timely fashion—the granaries can be looted, the fields can be burnt. If your soldiers have to dig up the potatoes, it just isn’t worth it.
Yes, it’s easier to loot people who grow grains than roots, but I don’t think that’s so relevant to taxation by a stationary bandit.
Hmm, abundant and easily accessible food is also a requisite for the evolution of eusocial animal colonies. I guess that’s what cities ultimately are.
Grain is just food that happened to possess two essential features:
1. Making it was sufficiently productive; that is, a group of humans could grow more grain than they themselves would need.
2. It could be stored for a long time with only minor spoilage.
Having reserves of stored food to survive things like winters, droughts, and plagues of locusts is rather essential for a burgeoning civilization. Besides, without non-perishable food it’s hard to have cities.
You left out an important property:
Making it requires that the makers stay in the same place for a large fraction of the year. Furthermore, if they are forced to leave for any reason, all the effort they have expended so far is wasted and they probably can’t try again until next year.
That’s a relevant feature for figuring out the consequences of depending on grain production. I’m not sure it’s a relevant feature for the purposes of deciding why growing grains became so popular.
This seems somewhat unlikely to me, and we might be able to answer it by exploring “grain.” It seems to me that there are a handful of non-grain staple crops around the world that suggest that a planet would need to have no controllable vegetation sufficient for humans to sustain themselves on (either directly, or indirectly through feed animals). Even ants got agriculture to work.
Potatoes, sweet potatoes, turnips, taro, tapioca, those weird South American tubers related to Malabar spinach, and the tubers of runner beans immediately come to mind as long-term storable calorie crops.
Of note, the consumption of flour has recently been pushed back to at the very least 32,000 years ago, probably much longer, even if field agriculture has not:
http://www.npr.org/sections/thesalt/2015/09/14/440292003/paleo-people-were-making-flour-32-000-years-ago
Doesn’t that depend on the climate? I don’t know how long you can store potatoes and such in tropical climates—my guess is not for long. If you are in, say, Northern Europe, the situation changes considerably.
Plus, the tubers you name are predominantly starch and people relying on them as a staple would have issues with at least insufficient protein.
Climate does make a difference, for sure. But there are two things to consider. One, climates that are warmer let things rot easier but tend to have longer or continuous growing seasons. Two, climate control is a thing that people do (dig deep enough and you get cooler temperature almost anywhere on Earth) as is processing for storage via drying or chemical treatment.
Forgot to mention nuts too.
You are certainly right about protein. Something else must be added, be it meat or veggies of some sort or legumes.
Hm, interesting. I don’t know of any culture which heavily relied on nuts as a food source. I wonder why that is so. Nuts are excellent food—fairly complete nutritionally, high caloric density, don’t spoil easily, etc. Moreover, they grow on trees, so once you have a mature orchard, you don’t need to do much other than collect them. One possibility is that trees are too “inflexible” for agriculture—if your fields got destroyed (say, an army rolled through), you’ll get a new crop next year (conditional, of course, on having seed grain, labour to work the fields, etc.). But if your orchard got chopped down, well, the wait till the new crop is much longer. A counter to this line of thought is complex irrigation systems which are certainly “inflexible” and yet were very popular. I wonder how land-efficient (calories/hectare) nut trees are.
Ah, I just figured out that coconuts are nuts and there are Polynesian cultures which heavily depend on them. But still, there is nothing in temperate regions and there are a lot of nut trees and bushes growing there.
I’m aware of pre-European Californian societies whose main calorie crop was acorns, rendered edible by soaking after crushing to remove irritating tannins and then cooking, and sometimes preserved by soaking in various other substances.
Yes, a good point. But weren’t these American Indians mostly hunter-gatherers? I don’t know if you can say that they engaged in agriculture. Some other tribes did, but those didn’t rely on nuts or acorns.
Eh, to my mind the boundary between agriculture and gathering is fuzzy when your plants live a long time and grow pretty thickly and you encourage the growth of those you like.
Like, there’s 11.5k year old seedless fig trees found in the middle east, a thousand years before there’s any evidence of grain field agriculture. Those simply don’t grow unless planted by humans.
All true. Still, grain very decisively won over nuts. I wonder if there’s a good reason for that or it was just a historical accident. Maybe you can just make many more yummy things out of flour than out of nuts. Or maybe nuts don’t actually store all that well, because of fats going rancid...
AI risk going mainstream
Looks like Stephen Hawking is finally someone high enough status that he can say this sort of thing and people will take him seriously.
That’s a pretty self-serving explanation from the BBC. I think that Bostrom’s book played a major role in the change we have seen in the last year. It can be read by intelligent people, and then they understand the problem. Beforehand there was no straightforward way to get a deep understanding.
I came across Bostrom a decade ago. I’m sure his book is great, but ‘Bostrom writes a book’ isn’t that different from ‘Bostrom has a website’. Also, Kurzweil had some best-selling books out a long time ago.
Elon Musk also made similar claims lately, and so did Bill Gates. Bostrom is pretty smart, but he’s not a pre-existing household name like these guys.
Yes, but with a quite different message.
No, it’s quite different.
I don’t think Bill Gates would have made those claims if it weren’t for Bostrom’s book. Bill Gates also promotes the book to other people. Bill Gates likely wouldn’t tell important people, “Go read up on Bostrom’s website about how we should think about AGI risk,” the way he does with the book.
Elon Musk is a busy guy with 80-hour workweeks. Bostrom and FHI made the case to him that UFAI risk is important. Personal conversations were likely important, but reading Bostrom’s book helped raise the importance of the issue in Elon’s mind.
Oh, so Bostrom was behind these three people? Then his book is more important than I thought.
I’m not saying that Bostrom was behind Stephen Hawking remarks but I think he’s partly responsible for Musk and Gates positions.
When it comes to Musk I think there was a facebook post a while ago about FHI efforts in drafting Musk for the cause.
With Gates there’s https://www.youtube.com/watch?v=6DBNKRYVY8g where Gates and Musk sit at a conference for the Chinese elite and get interviewed by Baidu’s CEO. As part of that, Gates gets asked for his take on AI risk, and he says that he’s concerned and that people who want to delve deeper into the issue should read Bostrom’s book. As far as the timeline goes, I think it’s probable that Gates’s public comments on the issue came after he read the book.
I don’t think that a smart person suddenly starts to fear AI risk because they read in a newspaper that Stephen Hawking is afraid of it. On the other hand, a smart person who reads Bostrom’s book can be convinced by the case the book makes that the issue is really important.
That’s something a book can do but that newspapers usually don’t. Books that express ideas in a way that convinces a smart person who reads them are powerful.
Well, Stephen Hawking is far smarter than most people, so on most subjects with which Stephen Hawking is familiar it would be a good idea to update in the same direction as him, unless you are an expert on it too.
Also, it raises AI risk as a possible concern, at which point people might then try to find more information, such as Bostrom’s book, or website.
So yes, people get more information from reading a book than from reading a newspaper article, but the article might be what led them to read the book in the first place.
A while back, I was having a discussion with a friend (or maybe more of a friendly acquaintance) about linguistic profiling. It was totally civil, but we disagreed. Thinking about it over lunch, I noticed that my argument felt forced, while his argument seemed very reasonable, and I decided that he was right, or at least that his position seemed better than mine. So, I changed my mind. Later that day I told him I’d changed my mind and I thought he was right. He didn’t seem to know how to respond to that. I’m not sure he even thought I was being serious at first.
Have other people had similar experiences with this? Is there a way to tell someone you’ve changed your mind that lessens this response of incredulity?
Sometimes saying why you changed your mind can help. In more detail than “his position seemed better than mine”. But sometimes it takes doing some action that is in line with the new idea in order for other people to think you may be serious.
Another thing that may help is to wait some time before telling the person. “Later that day” makes it seem like a quick turnaround. Waiting until the next day to say something like “I’ve had some time to think about it, and I think you were right about X” might make more sense to the other person and lessen the incredulity.
Also, it depends on what your past history has been with this person, and what they have observed in your behaviour.
It happened to me only with people who were extremely, unreasonably cynical about people’s rationality in the first place (including their own). People who couldn’t update on the belief of people being unable to update on their beliefs. There’s an eerie kind of consistency about these people’s beliefs, at least for that much one can give them credit...
You have to engage in some extra signaling of having changed your own mind; just stating it wouldn’t be as convincing.
The Fallacy of Placing Confidence in Confidence Intervals
pdf
I just read through this, and it sounds like they’re trying to squish a frequentist interpretation on a Bayesian tool. They keep saying how the confidence intervals don’t correspond with reality, but confidence intervals are supposed to be measuring degrees of belief. Am I missing something here?
I briefly skimmed the paper and don’t see how you are getting this impression. Confidence intervals are—if we force the dichotomy—considered a frequentist rather than Bayesian tool. They point out that others are trying to squish a Bayesian interpretation on a frequentist tool by treating confidence intervals as though they are credible intervals, and they state this quite explicitly (p.17–18, emphasis mine):
Hmmm, yes, I suppose I was making the same mistake they were… I thought that what confidence intervals are is actually what credible intervals are.
I see. Looking into this, it seems that the (mis)use of the phrase “confidence interval” to mean “credible interval” is endemic on LW. A Google search for “confidence interval” on LW yields more than 200 results, of which many—perhaps most—should say “credible interval” instead. The corresponding search for “credible interval” yields less than 20 results.
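For anyone who wants the distinction made concrete: here’s a minimal simulation in Python of the kind of counterexample the paper leans on (a sketch with invented numbers, not the paper’s exact case). A procedure can have exactly 50% frequentist coverage, making its output a valid 50% confidence interval, while for particular observed data the reasonable degree of belief that the interval contains the parameter is 100%:

    import random

    random.seed(0)
    theta = 10.0        # true parameter, fixed but unknown to the analyst
    trials = 100_000
    covered = 0
    covered_given_wide = wide = 0

    for _ in range(trials):
        # two observations, each uniform on (theta - 0.5, theta + 0.5)
        x1 = theta + random.uniform(-0.5, 0.5)
        x2 = theta + random.uniform(-0.5, 0.5)
        lo, hi = min(x1, x2), max(x1, x2)
        hit = lo <= theta <= hi
        covered += hit
        if hi - lo > 0.5:               # the two observations are far apart
            wide += 1
            covered_given_wide += hit

    print(covered / trials)             # ~0.50: a valid 50% confidence procedure
    print(covered_given_wide / wide)    # 1.0: given wide data, certainty

The long-run coverage is 50%, but whenever the two observations are more than 0.5 apart they must straddle the true value, so treating “50% confidence” as “50% degree of belief” is exactly the fallacy in the title.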
How many hours of legitimate work do you get done per day?
Legitimate = uninterrupted, focused work. Regarding the time you spend working but not fully focused, use your judgement in scaling it. I.e., maybe an hour of semi-productive work = 0.75 hours of legitimate work.
Edit: work doesn’t only include work for your employer/school. It could be self-education, side projects, etc. It doesn’t include chores or things like casual pleasure reading, though. Per day = per day that you intend to put in a full day’s work.
[pollid:1029]
I do about 3 hours of legit work when I’m in my usual situation (family, work), but I do way more when I’m alone, both on- and off-the-grid: 12 hours or even more (of course assuming that the problem I’m working on is workable and I don’t hit any serious brick walls). My last superfocus period lasted for about two weeks, it happened when my family went on vacation, and I took a mini-vacation from work myself (though the task I was working on was pretty trivial). My longest superfocus period was about 45 days, it happened on a long off-the-grid vacation.
In the absence of any indication of whether this includes weekends, I assumed that it doesn’t include weekends. On weekends my productivity is way lower.
Good point. I intended for it to mean “on days where you intend to put in a full day’s work”. I’m a little crazy, so for me that’s every day :) But I definitely should have clarified.
I also don’t strictly distinguish between work days and other days, but you also clarified that the time shouldn’t include chores (which are work too, but not usually associated with work for money or education), so I had to make some cut. If you had included any kind of productive work, the number would have read differently. Lots of pleasure reading, e.g. LW, can count as such; the line (or factor) could be how much it contributes to your own future development.
This is way lower than I expected. Thoughts?
Maybe you should have added another poll that asked for formally expected or billed hours.
It’s about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focused work is exhausting.
If you’re looking for bias, this is a community where people who are less productive probably prefer to think of themselves as intelligent and akrasikal (sp?). Also you’ve asked at the end of a long holiday for any students here.
Recent discussion topics on Omnilibrium:
Should people be allowed to earmark their taxes to specific policy areas for a price?
Are there any arguments for wealth inequality being desirable?
Should we actually expect ‘big world immortality’ to be true? I know the standard LW response is that what we should care about is measure, but what I’m interested in is whether it should be true that, in every situation in which we can find ourselves, we should expect a never-ending continuity of consciousness.
Max Tegmark has put forth a couple of objections: the original one (apart from simple binary situations, a consciousness often undergoes diminishment before dying and there’s no way to draw continuity from it to a world in which it survives) and a newer one in Our Mathematical Universe (he doesn’t think there are “actual” infinities in nature and, therefore, a relevant world doesn’t always exist). Any others?
Could you define exactly what you mean with ‘big world immortality’?
Quantum immortality is an example, but something similar would arguably also apply to, for example, a multiverse or a universe of infinite size or age. Basically the idea that an observer should perceive subjective immortality, since in a big world there is always a strand in which they continue to exist.
Edit: Essentially, I’m talking about cryonics without freezers.
I have a variant on linear regression. Can anyone tell me what it’s called / point me to more info about it / tell me that it’s (trivially reducible to / nothing like) standard linear regression?
Standard linear regression has a known matrix X = x(i,j) and a known target vector Y = y(j), and seeks to find weights W = w(i) to best approximate X * W = Y.
In my version, instead of knowing the values of the input variables (X), I know how much each contributes to the output. So I don’t know x(i,j) but I kind of know x(i,j) * w(i), except that W isn’t really a thing. And I know some structure on X: every value is either 0, or equal to every other value in its row. (I can tell those apart because the 0s contribute zero and the others contribute nonzero.) I want to find the best W to approximate X * W = Y, but that question will depend on what I want to do with the uncertainty in X, and I’m not sure about that.
I should probably avoid giving my specific scenario, so think widget sales. You can either sell a widget in a city or not. Sales of a widget will be well-correlated between cities: if a widget sells well in New York, it will probably sell well in Detroit and in Austin and so on, with the caveat that selling well in New York means a lot more sales than selling well in Austin. I have a list of previous widgets, and how much they sold in each city. Received wisdom is that a widget will sell about twice as much in New York as in Detroit, and a third more than in Austin, but I want to improve on the received wisdom.
So I’m told that a widget will sell 10 million, and that it will be sold in (list of cities). I want to come up with the best estimate for its sales in New York, its sales in Austin, etc.
Hopefully this is clear?
Sounds like your problem is fitting a sparse matrix, i.e. where you want many entries to be 0. This is usually called compressed sensing, and it’s non-trivial.
Well, it’s going to depend on some specifics and on how much data you have (with the implications for the complexity of the model that you can afford), but the most basic approach that comes to my mind doesn’t involve any regression at all.
Given your historical data (“I have a list of previous widgets, and how much they sold in each city”) you can convert the sales per widget per city into percentages (e.g. widget A sold 27% in New York, 15% in Austin, etc.) and then look at the empirical distribution of these percentages by city.
The next step would be introducing some conditionality—e.g. checking whether the sales percentage per city depends, for example, on the number of cities where the widget was sold.
Generally speaking, you want to find some structure in your percentages by city, but what kind of structure is there really depends on your particular data.
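A minimal sketch of that first step in Python with pandas (the city names and numbers are invented for illustration):

    import pandas as pd

    # Historical sales per widget per city (NaN = widget wasn't sold there).
    sales = pd.DataFrame(
        {"New York": [270.0, 150.0, None],
         "Austin":   [150.0, None,  90.0],
         "Detroit":  [None,  100.0, 30.0]},
        index=["widget_A", "widget_B", "widget_C"],
    )

    # Convert each widget's sales into percentages across the cities it sold in,
    # then look at the empirical distribution of those percentages by city.
    shares = sales.div(sales.sum(axis=1), axis=0)
    print(shares.describe())

    # Conditionality check: does a city's share depend on the number of cities
    # the widget was sold in?
    shares["n_cities"] = sales.notna().sum(axis=1)
    print(shares.groupby("n_cities").mean())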
The problem—at least the one I’m currently focusing on, which might not be the one I need to focus on—is converting percentages-by-city on a collection of subsets, into percentages-by-city in general. I’m currently assuming that there’s no structure beyond what I specified, partly because I’m not currently able to take advantage of it if there is.
A toy example, with no randomness, would be—widget A sold 2⁄3 in city X and 1⁄3 in city Y. Widget B sold 6⁄7 in city X and 1⁄7 in city Z. Widget C sold 3⁄4 in city Y and 1⁄4 in city Z. Widget D is to be sold in cities X, Y and Z. What fraction of its sales should I expect to come from each city?
The answer here is 0.6 from X, 0.3 from Y and 0.1 from Z, but I’m looking for some way to generate these in the face of randomness. (My first thought was to take averages—e.g. city X got an average of (2⁄3 + 6⁄7)/2 = 16⁄21 of the sales—and then normalize those averages. But none of the AM, GM and HM gave the correct results on the toy version, so I don’t expect them to do well with high randomness. It might be that with more data they come closer to being correct, so that’s something I’ll look into if no one can point me to existing literature.)
So, there’s some sort of function mapping from (cities,widgets)->sales, plus randomness. In general, I would say use some standard machine learning technique, but if you know the function is linear you can do it directly.
So:
sales=constant x cityvalue x widgetvalue + noise
d sales/d cityvalue = constant x widgetvalue
d sales/d widgetvalue = constant x cityvalue
(all vectors)
So then you pick random starting values of cityvalue and widgetvalue, calculate the error and do gradient descent.
Or just plug
Error = sum((constant x cityvalue x widgetvalue - sales)^2)
Into an optimisation function, which will be slower but quicker to code.
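To make that concrete, here’s a rough sketch of the “just plug it into an optimisation function” route in Python with scipy, using faul_sname’s toy example from upthread (the absolute sales numbers are invented but reproduce its 2⁄3–1⁄3, 6⁄7–1⁄7 and 3⁄4–1⁄4 splits, and the constant is folded into widgetvalue):

    import numpy as np
    from scipy.optimize import minimize

    cities = ["X", "Y", "Z"]
    widgets = ["A", "B", "C"]
    # (widget, city) -> observed sales
    sales = {("A", "X"): 600.0, ("A", "Y"): 300.0,
             ("B", "X"): 600.0, ("B", "Z"): 100.0,
             ("C", "Y"): 300.0, ("C", "Z"): 100.0}

    def split(params):
        return dict(zip(cities, params[:3])), dict(zip(widgets, params[3:]))

    def error(params):
        cityvalue, widgetvalue = split(params)
        return sum((cityvalue[c] * widgetvalue[w] - s) ** 2
                   for (w, c), s in sales.items())

    fit = minimize(error, x0=np.ones(6))
    cityvalue, _ = split(fit.x)
    total = sum(cityvalue.values())
    print({c: round(v / total, 3) for c, v in cityvalue.items()})
    # -> approximately {'X': 0.6, 'Y': 0.3, 'Z': 0.1}

The scale ambiguity between cityvalue and widgetvalue (doubling one while halving the other leaves the error unchanged) doesn’t matter here, because only the normalized city values are wanted.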
Thank you! This seems like the conceptual shift I needed.
You need to specify what kind of randomness you are expecting. For example, standard ordinary least-squares regression expects no noise at all in the X values and the noise in Y to be additive, iid, and zero-mean Gaussian. If you relax some of these assumptions (e.g. your noise is autocorrelated), some properties of your regression estimates still hold and some no longer do.
In the frequentist paradigm I expect you to need something in the maximum-likelihood framework. In the Bayesian paradigm you’ll need to establish a prior and then update on your data in a fairly straightforward way.
In any case you need to be able to write down a model for the process that generates your data. Once you do, you will know the parameters you need to estimate and the form of the model will dictate how the estimation will proceed.
Sure, I’m aware that this is the sort of thing I need to think about. It’s just that right now, even if I do specify exactly how I think the generating process works, I still need to work out how to do the estimation. I somewhat suspect that’s outside of my weight class (I wouldn’t trust myself to be able to invent linear regression, for example). Even if it’s not, if someone else has already done the work, I’d prefer not to duplicate it.
If you can implement a good simulation of the generating process, then you are already done—estimating is as simple as ABC (approximate Bayesian computation). (Aside from the hilariously high computing demands of the naive/exact ABC, I’ve been pleased & impressed just how dang easy it is to use ABC. Complicated interval-censored data? No problem. Even more complicated mixture distribution / multilevel problem where data flips from garbage to highly accurate? Ne pas!)
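For anyone who hasn’t run into it, the simplest rejection-sampling version of ABC fits in a few lines of Python. Here’s a sketch on an invented toy version of the widget problem (not gwern’s actual setup):

    import numpy as np

    rng = np.random.default_rng(0)
    observed = np.array([600.0, 300.0])   # one widget's sales in two cities

    def simulate(city_values):
        # Stand-in generating process: 900 total sales split in proportion
        # to city value, plus Gaussian noise.
        shares = city_values / city_values.sum()
        return 900 * shares + rng.normal(0, 10, size=2)

    accepted = []
    for _ in range(200_000):
        theta = rng.uniform(0, 1, size=2)            # draw from the prior
        if np.linalg.norm(simulate(theta) - observed) < 20:
            accepted.append(theta / theta.sum())     # keep draws whose simulations match
    print(len(accepted), np.mean(accepted, axis=0))  # city shares ~ (2/3, 1/3)

All it needs is the simulator and a distance threshold; the accepted draws approximate the posterior.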
Even if you know only the generating process and not an estimation procedure, you might be able to get away with just feeding a parametrization of the generating process into an MCMC sampler, and seeing whether the sampler converges on sensible posterior distributions for the parameters.
I like Stan for this; you write a file telling Stan the data’s structure, the parameters of the generating process, and how the generating process produced the data, and Stan turns it into an MCMC sampling program you can run.
If the model isn’t fully identified you can get problems like the sampler bouncing around the parameter space indefinitely without ever converging on a decent posterior. This could be a problem here; to illustrate, suppose I write out my version of skeptical_lurker’s formulation of the model in the obvious naive way, sales[c,w] = α × β[c] × γ[w] + noise, where brackets capture city & widget-type indices, I have a β for every city and a γ for every widget type, and I assume there are no odd correlations between the different parameters.
This version of the model won’t have a single optimal solution! If the model finds a promising set of parameter values, it can always produce another equally good set of parameter values by halving all of the β values and doubling all of the γ values; or by halving α and the γ values while quadrupling the β values; or by...you get the idea. A sampler might end up pulling a Flying Dutchman, swooping back and forth along a hyper-hyperbola in parameter space.
I think this sort of under-identification isn’t necessarily a killer in Stan if your parameter priors are unimodal and not too diffuse, because the priors end up as a lodestar for the sampler, but I’m not an expert. To be safe, I could avoid the issue by picking a specific city and a specific widget type as references, with the other cities’ β and other widget types’ γ effectively defined as proportional to those (i.e. the same model as above, but with the reference city’s β and reference widget’s γ fixed at 1). Then run the sampler and back out estimates of the overall city-level sales fractions from the parameter estimates (1 / (1+sum(β)), β(2) / (1+sum(β)), β(3) / (1+sum(β)), etc.).
And I’d probably make the noise term multiplicative and non-negative, instead of additive, to prevent the sampler from landing on a negative sales figure, which is presumably nonsensical in this context.
Apologies if I’m rambling at you about something you already know about, or if I’ve focused so much on one specific version of the toy example that this is basically useless. Hopefully this is of some interest...
I know JAGS lets you put interval limits onto terms, which lets you specify that some variable must be non-negative (it looks something like dist(x,y)[0,∞]), so maybe Stan has something similar.
It does. However...
I see now I could’ve described the model better. In Stan I don’t think you can literally write the observed data as the sum of the signal and the noise; I think the data always has to be incorporated into the model as something sampled from a probability distribution, so you’d actually translate the simplest additive model into Stan-speak as something like sales[n] ~ normal(alpha * beta[city[n]] * gamma[widget[n]], sigma), which could give you a headache because a normal distribution puts nonzero probability density on negative sales values, so the sampler might occasionally try to give sales[n] a negative value. When this happens, Stan notices that’s inconsistent with sales[n]’s zero lower bound, and generates a warning message. (The quality of the sampling probably gets hurt too, I’d guess.) And I don’t know a way to tell Stan, “ah, the normal error has to be non-negative”, since the error isn’t explicitly broken out into a separate term on which one can set bounds; the error’s folded into the procedure of sampling from a normal distribution.
The way to avoid this that clicks most with me is to bake the non-negativity into the model’s heart by sampling sales[n] from a distribution with non-negative support, something like sales[n] ~ lognormal(log(alpha * beta[city[n]] * gamma[widget[n]]), sigma). Of course, bearing in mind the last time I indulged my lognormal fetish, this is likely to have trouble too, for the different reason that a lognormal excludes the possibility of exactly zero sales, and you’d want to either zero-inflate the model or add a fixed nonzero offset to sales before putting the data into Stan. But a lognormal does eliminate the problem of sampling negative values for sales[n], and aligns nicely with multiplicative city & widget effects.
Thanks to both you and gwern. It doesn’t look like this is the direction I’m going in for this problem, but it’s something I’m glad to know about.
I made a rationalist Tumblr, primarily to participate in rationalist conversations there. Solid posts will still be posted to LW, when I finish them.
Singer vs. Van der Vossen:
Singer asks, if it’s obligatory to save the drowning child you happen to encounter at the expense of your shoes, why isn’t it obligatory not to buy the shoes in the first place, but instead to save a child in equally dire straits?
Bas van der Vossen
The article he is quoted in goes on to explain:
...
...
...
This is quite right—the best case for development aid in poor countries is through its positive feedback on institutions (most plausibly, civil society). Then again, most proponents of effective giving favor interventions that would plausibly have such feedbacks—for instance, it turns out that a lot of the money GiveDirectly hands out to poor folks is spent on entrepreneurship and capital acquisition, not direct consumption.
South Korea and Taiwan had no problem with malaria killing children in whom the society had invested resources.
I don’t understand why Van der Vossen thinks that there is clear evidence that the difference between what happened in a country like South Korea and what happened in sub-Saharan Africa has nothing to do with genes. Of course that’s the politically correct belief. But standing there and saying that development economics proved it beyond all doubt seems strange to me.
The rule of law does happen to be an important ingredient in producing wealth, but I don’t think you get the rule of law directly through buying iPhones.
To the extent that you believe the rule of law is very useful in helping third-world countries, the next question would be whether there are cost-effective interventions to increase it. That’s a standard EA question.
That seems like a nice thing to say, but politically it’s very hard to imagine a First World government giving up its country’s ability to feed its own population without being dependent on outside forces.
Politically it’s easier to ship excess grain from Europe to Africa than to burn it, but the excess grain doesn’t get produced with the goal of feeding Africans at all; it’s produced so that European farmers provide Europe with a food supply that can also deliver in times of crisis.
Well, let’s see. It’s quite convenient for us that there’s a country right next door to South Korea, called North Korea. North Korea has the same genes as South Korea, and yet its economy is much more similar to the economy of Sub-saharan Africa than South Korea. Sure, that’s just N=1, anecdotes are not data and all that, but I’d call that pretty good evidence.
The fact that the bad policies of North Korea lead to bad economic outcomes is no evidence that all bad economic outcomes are due to bad policies. It simply isn’t.
Nobody in the EA camp denies that the policies of countries matter or that property rights and the rule of law are important. I also haven’t seen Peter Singer argue that putting embargoes on other countries to exert economic pressure on them, instead of engaging in trade, is good.
Most African countries, on the other hand, don’t suffer under strong embargoes. They are in the sphere of the IMF, which has preached property rights for decades and tries to get the countries to respect them.
I don’t understand how the karma system here works. One of my posts below, about the usefulness of prostitutes for learning how to get into sexual relationships with regular women through dating, dropped off for a while with −4 karma. Then I just checked, and it has +4 karma. Where did the 8 karma points come from?
This has happened to some of my posts before. Do I have some fans I don’t know about who just happen to show up in a short interval to upvote my controversial posts?
I think someone is using a bunch of alts to occasionally mega-upvote posts they like.
I think you do—what you do NOT have is a good model for predicting future karma scoring of your posts :-/
Welcome to everybody-on-this-forum’s world.
I can make sense of karma, and I have generally been using it to tune my efforts towards posts that are more helpful and useful to people (or at least I think I am doing that more than pandering).
Many LWers, myself particularly, write awkwardly. Did you know Word can check your writing style, not just your spelling, with a simple option change? I’m learning how to write with better style already.
This is a good occasion for relying on natural rather than artificial intelligence. Here’s a list of style suggestions that can be made by Word. It checks for a lot of things that can be considered bad style in some contexts but not in others, and to my knowledge it’s not smart enough to differentiate between different genres. (For example, it can advise you both against passive voice – useful for writing fiction, sometimes – and against use of first-person personal pronouns, which is a no-no in professional documents. If it needs mentioning, sometimes you cannot follow both rules at once.) There’s plenty of reason to doubt that a human who can’t write very well can have an algorithm for a teacher in matters of writing style; we’re not there yet, I think.
The Importance, tractability, and neglectedness approach is the go-to heuristic for EAs.
The open philanthropy project approaches it like this:
I reckon it’s a simplification of the rational planning model:
What do you reckon?
If a graduate student approached you to do a section of the data analysis of your research in return for credit/authorship and to fulfill degree requirements, what would you give her? Note: she’s specified just a “section” and is not interested in any data collection, research administration or the like; she just wants to fulfill her mini-research project requirements.
That depends obviously on the skills of the individual.
I think giving someone the Mnemosyne database to analyse for better ways to predict Spaced Repetition System learning would be useful if that person has enough skills to do genuine work. Gwern works to bring that data into a nicely downloadable format: https://archive.org/details/20140127MnemosynelogsAll.db
From the Foreword to Brave New World:
We can remember things we don’t believe and believe things we don’t remember. Which source of knowledge is a better authority for our expectations and priors?
Before asking that question it’s useful to ask why one wants to know priors and what one means by the term. A person with arachnophobia has, on a System 1 level, a prior about spiders being dangerous, but often doesn’t have that on a System 2 level for small spiders.
I’m trying to wrap my mind around Stuart Armstrong’s post on the Doomsday argument, and to do so I’ve undertaken the task of tabooing ‘randomness’ in the definitions of SIA and SSA.
My first attempt clearly doesn’t work: “observers should reason giving the exact same degree of belief to any proposition of the form: ‘I’m the first observer’, ‘I’m the second observer’, etc.” As it has been noted before many times, by me and by others, anthropic information changes the probability distribution, and any observer has at least a modicum of that. I suspect this conflict is what’s thwarting my attempts at making sense of the topic.
Trying to assign the same degree of belief to infinitely many mutually exclusive options doesn’t work. The probability of being observer #1 is greater than the probability of being observer #10^10, simply because some possible universes contain more than 1 but fewer than 10^10 observers.
I’m not sure how exactly the distribution should look; just saying in general that larger numbers have smaller probabilities. The exact distribution would depend on your beliefs about the universe, or actually about the whole Tegmark multiverse, and I don’t have much strong beliefs in that area.
For example, if you believe that universe has a limited amount of particles and a limited amount of time, that would put an (insanely generous) upper bound on the number of observers in this universe.
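To put a toy model behind this (invented numbers, just to illustrate the shape of the argument): if the universe’s observer count N is drawn from some prior, then self-sampling gives P(I am observer #n) = the sum over all N ≥ n of P(N)/N, which automatically makes smaller indices more probable, with no uniform distribution over an infinite set required.

    from fractions import Fraction

    # Invented prior over how many observers the universe contains.
    size_prior = {10: Fraction(1, 2),
                  10**4: Fraction(1, 4),
                  10**8: Fraction(1, 4)}

    def p_index(n):
        # P(I am observer #n): only universes with at least n observers
        # contain a #n, and within one you're equally likely to be anyone.
        return sum(p / N for N, p in size_prior.items() if N >= n)

    print(float(p_index(1)))       # ~0.05
    print(float(p_index(100)))     # ~2.5e-05
    print(float(p_index(10**6)))   # ~2.5e-09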
Yeah, but the class of observers in the Doomsday argument is not infinite, usually one takes a small and a huge set, both finite. So in theory you could assign a uniform distribution.
Exactly, and that’s an assumption I’m always willing to make, to circumvent the problem of an infinite class of reference.
The problem, though, is not the cardinality of the set; it’s rather the uniformity of the distribution, which I think is what is implied by the word ‘randomness’ in S(S|I)A, because I feel intuitively it shouldn’t be so, due to the very definition of observer.
http://www.asofterworld.com/index.php?id=1239
Against akrasia
The other day I heard someone reply “so am I” to “I’m sorry”. Never heard that before. Polite, but not as awkward as replying “I’m sorry” right back at the counterparty.
Brave New World, Chapter 3:
“And after all,” Fanny’s tone was coaxing, “it’s not as though there were anything painful or disagreeable about having one or two men besides Henry. And seeing that you ought to be a little more promiscuous …”
Lenina shook her head. “Somehow,” she mused, “I hadn’t been feeling very keen on promiscuity lately. There are times when one doesn’t. Haven’t you found that too, Fanny?”
Fanny nodded her sympathy and understanding. “But one’s got to make the effort,” she said, sententiously, “one’s got to play the game. After all, every one belongs to every one else.”
“Yes, every one belongs to every one else,” Lenina repeated slowly and, sighing, was silent for a moment; then, taking Fanny’s hand, gave it a little squeeze. “You’re quite right, Fanny. As usual. I’ll make the effort.”
A child of four is just as intelligent as the adult they will eventually become. They just have less knowledge to work with.
Four seems very young. It looks like brain development continues in a non-trivial way for a long time, and late adolescence (i.e. 15ish) is when IQs stabilize, if I remember the literature correctly.
If this was true, wouldn’t 4-year-olds perform very nearly as well as adults on IQ tests like Raven’s Progressive Matrices? I haven’t actually looked it up, but I would be very surprised if pre-adolescent children score the same as adults on these tests.
This sounds a lot like the theory of crystallized vs fluid intelligence: https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence
As far as I know, by most any commonly used metric, both of these will increase well beyond four years of age. Vaniver mentions 15 years old, and I recall 19-20 years old being the number given for maximum fluid intelligence in the psychology textbook I had in undergrad.
Is it true?
It’s a thought that occurred to me. Opinions are welcome.
Some explanatory footnotes:
“Four-year old” is just a focal example. Consider anyone, young, your own age, or old (these are the three ages of man) with whom you think there is a large conceptual distance to cross. Blues will be thinking of Greens and vice versa, but that’s nothing on the scale of what I’m trying to point to. But very young children are a real example, not merely a parable.
When you are face to face with a stranger, you are looking at an alien AI. Their fundamental mechanism of discerning truth from falsity may be just as acute as yours, but is operating on completely different data. And it is the same for them, face to face with you.
There is more context for that thought, but that will do for now.
We share the same physical reality, don’t we?
So that I may get a better overview of this account I have undertaken one task and plan to undertake another.
This is the task I plan to complete in the near future: I will review what I have replied to and decide whether to bookmark that link and the parent link for further consideration, or to disregard them.
This is the task already completed: Looking back over my post history, these are the only ones for which I currently expect that looking back upon them in the future will be a worthwhile exercise
etc
Posts after this post was published are not included
Discussion board posts
Synthetic biology for good
Deworming a movement
Rationality (psychology, computer sci and behavioural economics)
daily checklist
internalised discounting
meta cognitive checklist
Change, help and understanding
Reducing dimensions of complexity
high performance psychology
What does correlation mean?
triaging problems with plausible psychological and physical solution spaces
Perhaps there should be trigger warning warnings
Uncertainty is okay
Critical thinking emergency department
Hunt down false memories
What does it mean when mental health conditions shift to one another?
time and expertise
towards evidence based meta-cognition and conscious thought
Extinction addiction
Public health*
Lack of community interest in extremely high payoff risky projects, AI excepted
GiveWell’s irreverence for the controversial topics in developmental economic
Is there any evidence GiveWell’s recommended charities make anyone feel better any more effectively than alternatives?
Could preoccupation with neglectedness is an easy way to keep the charity evaluation space free of its own analytical competitors, and who’s keeping track of open problems in ea?
Precision medicine
Human genomics has been a let down so far
Some people may be more depressed because they aren’t traumatised enough
health startups go with a whimper not a bang, if they take off at all
Regulators hold back precision medicine
Political economy
young people can become politicians too
Are market economies conjured evolutions that can’t bridge the Lucas critique?
recent Thiel quotes
individuals might be able to bet beliefs and vote values but is there enough homogeniety in what beliefs and values mean to set up a multiagent system around it?
radio opinions
Oppenheimer and his documentary on paramilitaries, similar things and the people behind them
Motivation
competition and ordinariness
yo elliot
Affective truths in motivation
Procrastination and fear of success
Unexpected behaviour
What’s MIRI hiding?
Structured sexism
Not only disinterested in playing paranoid debating online, LW’s dislike the idea
Nobody seems interested in making a planning tool for lay people based to construct game theoretic scenarios
An effective altruism related post in an easy-points thread with minimal texts getting downvotes
LW’ers don’t seem to like poetry
LW’ers don’t like tangents or counterforenscs
Dude, you ain’t Eliezer or Yvain, it’s a bit too early for you to start constructing Greatest Hits lists...
To be fair, if it helps someone find useful information, so much the better. If not, who does it harm?
It’s noise and so harms everyone who is actually looking for information or whose time has value.
Imagine if I started to post random extracts from Wikipedia onto LW. Your argument would apply to them as well, would it not?
Sorry, I had the impression that my posts were more helpful than unhelpful because my karma balance is above zero.
I’m not confident I’m interpreting karma right however since I rarely see upvoted posts but see many downvoted posts.
There is also evidence suggesting I am misinterpreting karma and that it is around zero:
Edit: also funny hearing this from you, Lumifer. You’re a very prolific poster and most of your content probably isn’t the kind of information others are looking for. By extension of your logic, why don’t you boo people who speak different languages, or people selling goods you don’t want to buy yourself?
I would boo people who came to LW and started having long public conversations in, say, German.
Besides, I think you’re confusing “I have a right to do X” and “doing X is a good idea”.
By removing the posts which individually have zero or negative karma, you could have made the list half as long (and therefore more useful per unit of time). I’d even say that a post with karma less than 3 is mere noise.
I reckon if that’s your opinion as a blanket policy, you should just update your account preferences to not show posts with less than 3 karma. It defaults to not showing posts below 2 in any case.
I can see the usefulness for you to have that list.
I don’t see the usefulness for anyone else to have it.
I’m leaving LessWrong in a few days and want to save myself time if I later wonder which of my posts, if any, are worth revisiting once I’ve forgotten them.
In that case you might have put these on your user page.
I tried to, but it (edit: the Wiki) says I don’t have one and don’t have permission to create one.
Alcor Cryonics should rename itself to Alicorn Cryonics, because associating your brand with ponies makes marketing easier.
I’ll leave it to you to determine if I meant for this comment to be taken seriously or sarcastically.
I don’t know of any Bronies with cryonics arrangements.
The LW group on Fimfiction is pretty big, and I recognize a couple people with cryonics arrangements from that list. I’m leaning towards signing up for neuro with Alcor myself.
namechk
What?
(No, I am not going to follow some random link just because someone posted it to LW.)
So how does getting sexual experience with prostitutes translate over to getting into sexual relationships with regular women through dating, anyway?
I met a 20-something woman at the Venturist cryonics convention in Laughlin, Nevada, last year who talked to me more than she needed to as a social acknowledgment, which made me wonder if she felt attracted to me. I don’t know how to interpret these situations in the handful of times they have happened in my life, so I don’t know what to do, and they make me anxious.
If I had sexual learning experiences only from prostitutes, and I had nothing else to go on, should I have asked this woman how much money she wanted to come with me to my room in Laughlin’s hotel for sex?
That would likely be perceived as highly inappropriate and carries with it the chance of you getting banned from that convention in the future.
That generally doesn’t work on women who don’t already sell sex for a living.
Maybe a sex surrogate could be useful for you. She would provide you with more emotional and social guidance than a regular hooker, and the learning process would advance at your own pace and on your own terms.
Epistemic status: speculation
It doesn’t (unless you’re subconsciously self-sabotaging because you’re scared that you will make a bad impression with your first sexual performance or something). OTOH, it doesn’t hurt either (except via opportunity costs, but then so does anything else). So how does eating at restaurants translate over to learning how to cook? It doesn’t, but that’s not what people eat at restaurants for.
It doesn’t, in any way. The top positive effect you could get from sex workers is the relief of pressure and anxiety, but if you’re not getting even that, then I guess you could stop wasting your money.
99.9% odds it would have had a bad outcome. Why didn’t you simply invite her to discuss things further over a drink in a more intimate space?
I’d rather people actually said “Do you want to come back to my room for sex?” than “Do you want to come back to my room for coffee?” where coffee is a euphemism for sex, because some people will take coffee at face value, which can lead either to uncomfortable situations, including fear of assault, or to people missing opportunities because they are bad at reading between the lines.
Or if you do want to invite someone for a drink, go somewhere public.
Edit: I’m not saying that people should go round propositioning people for sex without getting to know them first. I’m saying that drinks in public are good, and that I, personally, prefer to think that adults should be able to say what they mean without euphemisms. I’m not saying that I get to ignore society’s rules. And I realise that people find what I have been saying creepy, but personally, I think if I were a girl I would find it very creepy that there could be situations where I’m in a private room with no witnesses and I want to drink coffee and the guy expects sex.
Actually, it’s often not—it’s a declaration of interest and a euphemism for “let’s move this thing along for the time being and see where we’ll end up”.
Imagine the answer to your “Do you want to come back to my room for sex?” being “I don’t know yet, why don’t we have coffee while I evaluate you a bit more thoroughly?”
So, someone who wanted to take things slowly would turn them down, where they might have accepted an invitation for coffee at Starbucks. If an invitation to a bar = drink and the bedroom = sex, then everyone knows where they stand.
Maybe? It’s a negotiation. For example, that someone could have counter-suggested coffee at Starbucks; that’s a “you’re going too fast” signal. Or said “Sure, but I’ll have to run in 10 minutes, I have an appointment to catch”; that’s a “yes to coffee, no to sex” signal. There is a VERY large variety of ways to signal interest, intentions, etc.
Plausible deniability, dude. It’s much easier to dispel the awkwardness of rejection if you can reasonably fall back on the claim that, hey, maybe coffee was all you wanted anyway. Successful courtship depends on making the other person feel comfortable around you; it’s a human relationship, not resource extraction, and it has to be framed in appropriate terms. (Edit: oh, sorry, I thought I was replying to advancedatheist; removed a sentence that assumed this.)
In table format. The second strategy is much more likely to lead to (2,1) than to (2,2).
I get that it’s not resource extraction, but it’s not espionage either, and I personally don’t see the need for “I can neither confirm nor deny that I want sex”.
I also get that it’s about making people feel comfortable. I’m more comfortable if people are fairly upfront about what they want, but I get that it’s just me who feels this way. I’m really bad at picking up on subtext; I have conversations like this:
Other person: “We’re spending a lot of time together; it’s almost like we’re a couple.”
Me: “Yeah, we have been hanging out a lot.”
several months later...
Oh. I get it now. Why couldn’t he just say he wanted a relationship?
And things can get even worse if one person thinks coffee means sex and the other thinks it means coffee. I know a girl who was accidentally raped because of a drunken misunderstanding.
BTW, I’m impressed that you went to the trouble of making a table :)
The usual term is flirting.
A lot of the time people are not sure about what they want (or whether the cost-benefit is favorable). Socially acceptable delaying tactics are important.
A girl saying yes to coffee isn’t an excuse to not look for consent when having sex. Saying yes to coffee just means consent to move to a different location.
This is true, but it’s not that simple. When you’re in private, it’s a far more dangerous situation, and, for instance, some girls will be scared to say no because of the possibility of violence.
She will be even more afraid to say “no” while in private if she beforehand explicitly said “yes” to sex instead of having said “yes” to coffee.
If you ask “Do you want to come to my room with me to have sex?” and she says “Yes”, that can be interpreted as a promise to have sex if the girl comes to the room. Asking her to “come to the room to drink coffee” doesn’t do that to the same extent.
But that presumes that the girl changes her mind about the sex when she reaches his room, which seems strange.
I suppose the room could be a sex dungeon, but in that case he should have asked “Wanna come home with me for kinky sex?”
(Obviously, people have the right to withdraw consent at any time for any reason, it just seems unlikely that it would be necessary)
In the example, the girl usually doesn’t just want sex; she wants sex while she’s turned on, sex that brings her pleasure. Even when asked directly for sex, a girl would assume that the guy will engage in foreplay that then puts her in an emotional state where she will have pleasurable sex.
When a guy asks, “Do you want to come to my room for coffee?”, a girl might think, “That’s exciting, and hopefully the night will end with great sex,” but depending on how the interaction in the room goes, it might or might not end up in sex.
I am assuming that the people involved have probably been out drinking and having fun and getting into an exciting emotional state beforehand.
That’s such BS. Rapists know what they’re doing, even when they pretend otherwise; rape is predatory behavior. The only way you could accidentally rape someone is in the “whoops, found the wrong hole!” sense.
That depends on how one defines the word “rape”. The fact that there is currently an attempt by certain groups to massively expand the definition of that word (while keeping the connotations of the original meaning) isn’t helping.
The issue in this case seems to be that the man thought that the fact that the woman said “yes” to having coffee means that she expressed consent while the woman thought it didn’t.
Why do you think that in every case both people have the same idea of whether there’s consent? Or do you think that rape means something other than having sex without consent?
The data show otherwise. As it turns out, an overwhelming portion of rapes is due to a minority of repeat offenders who never get caught, due in no small part to prevailing social attitudes which all-too-readily construe rapes as nothing more than one-off “misunderstandings” which can be “forgiven”. But again, that’s just wrong. Rape is not something that just happens once—they do it again and again.
Someone who thinks that a woman saying “Yes” to coffee means that she expresses consent to sex is likely going to repeat the error multiple times.
Beliefs such as ‘her mouth said “no” but her eyes said “yes”’ can also lead to repeated offending without the rapist thinking he’s a rapist.
Understanding how to determine consent is vital and not all problems are due to bad intent.
I note that people who misunderstand something once seem above-averagely likely to misunderstand similar things in future, especially (but not exclusively) if they don’t receive correction.
Maybe you’re right about the vast majority of cases. In the specific anecdote I mentioned, the victim told me that it was a misunderstanding—they were friends, she thought she was going home with him to sleep, he thought they were going to have sex, they were both very, very drunk, and he didn’t understand that she wasn’t consenting. She has forgiven it and they are still friends, although perhaps less close.
I’m not endorsing anyone’s actions here. Perhaps this guy is a threat, and she should not have forgiven him. But I think my original point stands, which is that it is safer for people to get to know each other over drinks in public and only go home once they’re both sure whether or not they want sex.
Would this be the same “data” that claims that 1 in 4 college women are “raped”?
I’d rather that too, and I’ve had it go wrong in both directions. But the whole point of much of this site is that outcomes are more important than principles. Saying “do you want to come back to my room for sex?” is not going to change society, it’s just going to make you personally come off as a creep.
I’m not sure it’s always creepy, not if you’ve already kissed them. It depends on circumstances. Inviting someone in for coffee and then trying to fuck them can be pretty creepy too.
But I agree that I can’t change society, and so I might as well conform to the rules.
It’s almost always creepy in the context of an early relationship: whether you’ve kissed or not, it’s a strong signal of contempt for or unfamiliarity with sexual norms. About the only exceptions I can think of would occur in very sex-positive cultures with very strong norms around explicit verbal negotiation. There aren’t many of those cultures, and even within them you’d usually want some strong indications of interest beforehand.
On the other hand, if you’ve invited someone up for coffee (or just said “do you want to come back to my place?”, which is pretty much the same offer), that’s not license for them to tear your clothes off as soon as the door closes either. Doing that would be creepy, unless you’ve practically been molesting each other on the way over, but normally the script goes more like this: you walk in, there’s maybe some awkward chitchat, you sit down on the bed or couch, they sit down next to you, you start kissing, and things progress naturally from there. If at any point they break script or the progression stalls out… well, then you make coffee.
I can think of a few examples where I’ve seen directly propositioning someone work, but these examples were among rather promiscuous people, so I think your point stands.
Actually, I’d interpret this very differently—inviting someone back for coffee is, on the face of it, saying that the reason you are inviting them is for coffee, not sex. It’s a false pretext. But “do you want to come back to my place?” gives no pretext, and it’s obviously for sex (assuming you’ve kissed already).
Obviously, I do know that inviting someone for coffee means sex might happen (or at least it does in some contexts). But there are also people who invite people over to “watch a movie” or “smoke weed”, and this is more of a grey area because they might actually want to watch a movie.
It’s a pretext, sure. That’s the point. The standard getting-to-know-you script does not allow for directly asking someone for sex (unless you’re already screwing them on the regular; “wanna get some ice cream and fuck?” is acceptable, if a little crass, on the tenth date) so we’ve developed the line as a semi-standardized cover story for getting a couple hours of privacy with someone. You shouldn’t read it as “I want coffee”, but rather as “I want to be alone with you, so here’s a transparent excuse”. There are more creative ways to ask the same thing, but because they’re more creative (and therefore further outside the standard cultural script), they’re more prone to misinterpretation.
Compare the Seventies-era cliche of “wanna come look at my etchings?”
I think there’s a deeper point: human interactions are multilayered and the surface layer does not necessarily carry the most important meaning. The meaning can be—and often is—masked by something else which should not be interpreted literally.
“It’s a false pretext” is not even wrong—it’s just not a correct way to think about the situation. A “pretext” is a way to express in a socially acceptable fashion a deliberately ambiguous meaning which, if said explicitly aloud, would change the dynamics of the situation completely.
Human interaction, especially of a sexual nature, just is not reducible to the straightforward exchange of “wanna fuck?” information bits.
I agree with you, and that’s indeed what is implied by my “a more intimate space”. I meant a bar where you can create a two people bubble, with more overlapping of intimate space, rather than “come back to my room”.
The error I see socially inexperienced people making over and over is presupposing that others have the same need and way of communicating that they have. It’s not so, especially when dealing with a person of the opposite sex.
A good rule of thumb in these matters is to test for more intimacy gradually, one increment at a time.
The problem with “Do you want to come back to my room for sex?” can be that it requires the woman to commit in that moment. A woman might very well think: “I would enjoy making out in a more private space, but at the moment I don’t know whether I actually want to have sex, and I want to make that decision based on how I feel in the moment.”
I find this strange, because if I’m attracted to someone, this attraction doesn’t change on a second-by-second basis, although perhaps it’s just me who feels like that. I think if this hypothetical woman doesn’t know whether she wants sex, maybe it would be best for her to wait until the next date, when she might have a better idea of what she wants.
I heard some advice saying that if you’re not enthusiastic about something it’s not worth doing, and while I’m not sure this applies in general, I would apply it to sex. No point in half-hearted sex.
Being attracted to someone and wanting to have sex with them in the next minute aren’t the same thing. You usually also want to be horny to have sex. Women also want to feel comfort and trust.
A woman might feel: “I’m attracted to this guy, but I’m menstruating and I don’t like to have sex while I’m menstruating.”
That assumes that a mental idea of what she wants drives her behavior. I think in most cases a woman will instead listen to her emotions, which tell her what she likes in that particular moment, instead of relying too much on mental concepts.
That desire might simply be: “I want to be more intimate with this guy than I am at the moment, but I don’t want to be in public when we get more intimate.”
There is the section of Surely You’re Joking, Mr. Feynman with the phrase “you just ask them!”, which does endorse explicitly asking people if they’re interested in sex. I don’t think this is a replacement for understanding and displaying social cues, though.
Does America’s health care system have a bias against incels?
Today I went to get my first physical in years now that I have Obamacare, and during the interview with the nurse practitioner, when she got to the questions about my marital status and whether I have any children, I just went straight to the point about my adult virginity, along with providing some context about how I wasted my time “dating” earlier in life because I could never close the deal with a woman. Otherwise she might assume that I had gone to prison for 30 years or something ridiculous like that to explain what kept me away from women for so long. (A woman actually asked me one time if I had spent decades in prison to account for my lack of sexual experience.)
And this nurse then started arguing with me about not giving up on finding sexual relationships – at my age (55). She sounded like the dating advice scolds that incel bloggers like The Black Pill have written about. This pissed me off, and I may have to find a different health care provider.
People with sexual experience really, really don’t understand the situation of guys like me, even ones with medical training.
Don’t rant to strangers about how incel you are. If you do, don’t be surprised if some of those strangers try to offer you comfort.
So how should I answer questions about my sexual history in a medical context?
I find it odd that gays and promiscuous women have become socially acceptable now, while incels with normal desires have become the freaks, weirdos and expendables. This has turned completely around from what people considered normal sexuality 50 years ago.
Men without families have always been considered expendable. The whole concept of an army is built around that. I’m not saying it’s right; I’m just saying it’s as old as history.
The new thing is that “having sex” has been completely divorced from “having a family”, so now less stigma is associated with not having a family, and more stigma is associated with not having sex. It makes sense this way, because being unable to attract someone implies being unable to start a family. Again, I’m describing here, not making a moral judgement; I don’t have a problem with people not reproducing.
It sucks to have low status. But it is stupid to needlessly tell strangers “hey, I have low status”.
“No.”
Or if it looks like there will be a lot of questions, you can head them off with “no, I’m a virgin”.
I don’t believe that I have seen any statement that incels are freaks stronger than your own statement that “otherwise she might assume that I had gone to prison for 30 years”. I’m sure that there are some people who might assume that—or worse—but I would not expect that most people would.
Likewise, when someone overshares about their problems (and if you define yourself as an “involuntary celibate”, you are framing it as a problem), the default social response is “don’t give up, you can handle it!” whether you’re talking about dandruff or cancer. Her response may not be what you hoped for, but it wasn’t a clear indicator of prejudice.
The increasing visibility of incels in developed countries, especially in Japan, where the number of adult male virgins has gotten ridiculous, makes the correspondingly decreasing percentage of sexually experienced men uneasy for some reason. I have to wonder if the unease resembles the effects of mortality salience in terror management theory. We provide empirical evidence that women’s sexual freedom hasn’t resulted in a sexual utopia, despite all the propaganda to that effect going back to the Enlightenment.
I’m tempted to create a drinking game for every time the Enlightenment gets blamed for whatever somebody thinks is wrong with the world.
I doubt very much that your context was medically relevant. She behaved inappropriately and of course you should change providers if you can and prefer to, but there was no reason to do anything but answer “no” to her questions in the first place, especially if the alternative involved phrases like “close the deal”.
I’m curious. If you had been examined by a male nurse, would you have felt the same need to give an extended explanation?
Even more so, because the male nurse might assume I’m gay otherwise.
I’ve noticed some little-studied cognitive biases here, because sexually experienced people tend to force ready-made “explanations” on male incels, explanations that make the explainers comfortable, instead of trying to study and understand inceldom as a phenomenon in its own right. The canned explanations lead to bad conclusions and useless advice for men like me. How would seeing a prostitute teach me how to get into sexual relationships? Men who get their sexual experience exclusively from prostitutes can remain as inept at dating as incels. You usually can’t just pick up a girl at the coffee shop with your “day game” and expect her to do the whore tricks you have become accustomed to with escorts.
That also shows why I consider sexbots a really stupid and dangerous notion. Sexbots could just increase the proportion of socially retarded men who have no clue how to deal with real women.
What you need from the nurse is her set of skills. Her personal opinion of you is irrelevant to doing her job. I understand that we may see health professionals as higher-status than us, but they’re actually doing us a service. You don’t need to feel intimidated by an unspoken imagined condemnation.
It’s reasonable to assume that any bias which is common in the culture will also show up in how patients are treated.