I might as well point out my solution: I've set the date of the Austin meetup to be six years from now, and I edit the date each week. It stays on the map, it stays on the sidebar (so I remember to edit the date; if this were automatic, it could be correct), and it stays out of Discussion.
This issue has been brought up many times, and I agree that it's a major problem. The solution I suggested was to have all the meetup locations be brought together into a single weekly meetup thread, with all the city names in the title. This could be done either automatically with a little bit of coding, or by just having someone do the coordination. I even volunteered to be the one doing the coordinating. But no one seemed to be interested in actually agreeing to do it. I still stand by my suggestion, and my offer to coordinate still stands if it is ever adopted.
It seems to me that the forum format is ill suited for the subject matter in the first place. Unless the intent is to use the discussion forum to raise awareness of the meetups, it seems to me that a website specifically designed for the meetups would make more sense. Especially since most people aren't going to have much interest in a meetup more than, say, 100 miles away from where they live. If people really want to be notified about the meetups without having to go to a separate website, couldn't that be accomplished through an RSS feed or some such solution? Granted, that would take more effort from individual LWers, but I have doubts about how much clutter should be accepted simply to make becoming aware of every meetup as effortless as possible.
Is there a way on my end to tell my computer to not include any post that includes “Meetup” in the title?
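(One hedged way to get both the RSS idea above and this kind of filtering: a minimal sketch, assuming LessWrong exposes a standard RSS/Atom feed somewhere. The feed URL below is a placeholder, not a real address, and the third-party feedparser package is assumed.)

```python
# Hypothetical sketch: read a LessWrong RSS feed and drop posts with
# "Meetup" in the title. FEED_URL is a placeholder, not a real feed.
import feedparser

FEED_URL = "https://example.org/lesswrong/discussion.rss"  # placeholder

def non_meetup_posts(url=FEED_URL):
    feed = feedparser.parse(url)
    return [entry for entry in feed.entries
            if "meetup" not in entry.title.lower()]

for post in non_meetup_posts():
    print(post.title, "-", post.link)
```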
Frank Adamek has done what you suggest for years. That you don’t notice it being done is a pretty bad sign about the idea. If you want to contribute, you should be trying to get people to use his system, rather than trying to introduce a new system. Or maybe you should suggest modifications. But the first step is knowing the current system.
I’m aware of Frank’s posts to main. It came up during the last discussion about this idea. What I am suggesting is to remove the individual meetup threads from discussion, to clear up the clutter. In addition, the meetup cities would be right up there in the title (to respond to objections that having a single thread would result in reduced visibility). Instead of everyone submitting to discussion and then someone gathering up everything in main, everyone would simply submit to the person doing the coordinating. The reason I proposed myself as a volunteer was that I didn’t know if Frank would be willing to do this, given that it would require daily correspondence with the people organizing the meetups.
I'm not sure how much of the difference is just more activity. It feels like a higher proportion of things I'm interested in, but that could just be a higher absolute frequency of things I'm interested in.
… and of those five, for two of them the comments consist of me complaining that the meetup location hasn’t been included in the title.
That being said, personally I don’t mind the meetup posts that much, and I’m not sure that moving them to their own section would be an improvement. I find it pretty likely that nobody would ever look there.
Next iteration: meetup announcements occupy their own tab, and the top of Discussion starts with an "ad" line about recent announcements, in a bright color or otherwise distinguished: "Recent meetup announcements: Moscow, Tel-Aviv, Boulder, London", where every city is a link.
That policy forces everybody to see the meetup announcements, and thus probably increases meetup attendance (and knowing your announcement will reach a wide, if captive, audience encourages people to create meetups).
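(A minimal sketch of what generating that "ad" line could look like; the data source and link paths are assumptions, since no such feature exists on the site.)

```python
# Hypothetical sketch: build the proposed banner line from recent announcements.
def meetup_banner(announcements):
    """announcements: list of (city, url) pairs for recent meetup posts."""
    links = ", ".join(f"[{city}]({url})" for city, url in announcements)
    return f"Recent meetup announcements: {links}"

print(meetup_banner([("Moscow", "/meetups/moscow"),
                     ("Tel-Aviv", "/meetups/tel-aviv"),
                     ("Boulder", "/meetups/boulder"),
                     ("London", "/meetups/london")]))
```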
Recent work shows that it is possible to use acoustic data to break public-key encryption systems. Essentially, if one can get the target to decrypt specific chosen ciphertexts, the sounds the CPU makes while decrypting can reveal information about the key. The attack was successfully demonstrated against 4096-bit RSA keys. While some versions of the attack require high-quality microphones, some versions apparently succeeded using just mobile phones.
Aside from the general interest issues, this is one more example of how a supposedly boxed AI might be able to send detailed information to the outside. In particular, one can transmit at surprisingly high bandwidth, even accidentally, through acoustic channels.
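(To make the "accidental transmitter" point concrete, here is a toy sketch, not from the linked work, of how a program could modulate CPU load to encode bits; whether the resulting fan or coil noise is actually recoverable by a microphone depends entirely on the hardware.)

```python
# Toy acoustic covert channel: "1" = busy CPU (more heat/noise), "0" = idle.
# Purely illustrative; real recovery depends on the machine and microphone.
import time

def busy(seconds):
    end = time.time() + seconds
    while time.time() < end:
        pass  # spin to keep the CPU loaded

def send_bits(bits, symbol_time=1.0):
    for b in bits:
        if b == "1":
            busy(symbol_time)        # louder symbol
        else:
            time.sleep(symbol_time)  # quieter symbol

send_bits("1011")  # a nearby microphone plays the role of the receiver
```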
I think it is probably simpler to enable the microphone from a web or mobile application than to install a keylogger in the OS. But then if you consider acoustic keyloggers...
That might have been true a few years ago, but they point out that that’s not as true anymore. For example, they suggest one practical application of this technique might be to put your own server in a colocation facility, stick a microphone in it and slurp up as many keys as you can. They also were able to get a version of the technique to work 4 meters away, which is far enough that this becomes somewhat different from having direct physical access. They also point out that laser microphones could also be used with this method.
Yesterday I noticed a mistake in my reasoning that seems to be due to a cognitive bias, and I wonder how widespread or studied it is, or if it has a name—I can’t think of an obvious candidate.
I was leaving work, and I entered the parking elevator in the lobby and pressed the button for floor −4. Three people entered after me, call them A, B and C, but because I hadn't yet turned around to face the door, as elevator etiquette requires, I didn't see which one of them pressed which button. As I turned around and the doors started to close, I saw that −2 and −3 were lit in addition to my −4. So, three floors and four people means two people will get out on the same floor, and I wondered which floor it would be.
The elevator stopped at floor −2. A and B got out. Well, I thought, so C is headed for −3, and I for −4 alone. As the doors were closing, B rushed back and squeezed through them. I realized she hadn't wanted −2 and had stepped out of the elevator absent-mindedly, and I wondered which floor she did want. The elevator went down to −3. The doors opened and B got out… and then something weird happened: C didn't. I was surprised. Something wasn't right in my idle deductions. I figured it out in the few seconds it took for the elevator to descend to my floor and let me out together with C.
Where did I go wrong? When I learned that B had left on −2, I deduced, correctly, that C would get out on −3. But then B came back; the fact of her leaving on −2 turned out to be wrong; yet I didn't cancel my deduction about C and didn't give him back the "freedom" of leaving either on −3 or on −4. It didn't even occur to me to do that. Why didn't it?
It seems important that the new information was a correction of a known fact, and not just some other fact. If I treat the new information "B does not leave at −2" purely as a fact, the consequence for C is "C may leave either on −3 or on −4", which is already clear as it is and not worth updating on. No, it seems "B does not leave at −2" has a special character when it comes as a correction of the previously assumed "B left at −2". It comes as a "rollback" of existing information, and I need to "roll back" everything I deduced from that information. And that seems hard to do and easy to forget. So it wasn't just a failure to update that I committed. It was a failure to "roll back".
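(The bookkeeping being skipped here resembles what AI textbooks call truth maintenance or belief revision; a toy sketch of it, with the elevator facts as hypothetical labels:)

```python
# Toy "truth maintenance" sketch: each derived belief records the premises it
# depends on, so retracting a premise also retracts everything deduced from it.
class Beliefs:
    def __init__(self):
        self.premises = set()
        self.derived = {}  # conclusion -> set of premises it depends on

    def assert_premise(self, p):
        self.premises.add(p)

    def derive(self, conclusion, *needed):
        if all(p in self.premises for p in needed):
            self.derived[conclusion] = set(needed)

    def retract(self, p):
        # The "rollback": dropping a premise also drops its consequences.
        self.premises.discard(p)
        self.derived = {c: deps for c, deps in self.derived.items()
                        if p not in deps}

b = Beliefs()
b.assert_premise("B got out at -2")
b.derive("C gets out at -3", "B got out at -2")
b.retract("B got out at -2")  # B squeezes back in
print(b.derived)              # {} -- the deduction about C is gone too
```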
On reflection, this mistake seems like something we might be doing often, and something to keep an eye out for. Is there a name for this mistake, has it been studied?
Seems related to the studies where people are told a fact, but it’s in red, which they’re told means it’s not true. After seeing lots of different facts in colours blue or red (blue means true) they’re asked about certain facts, and they’re more likely to remember a false fact as true than a true fact as false—we’re more likely to believe things, and don’t tend to take on contrary evidence as easily.
Thanks. Cached thoughts seem applicable, but also too broad for what I'm describing. After all, if I had failed to update on A and B exiting on −2, and continued thinking C might get out either on −3 or −4, that could also be described as a cached thought which I retained even when new evidence contradicted it. But I didn't do that, and was in no danger of doing that. I think that it's the necessity to roll back to the previous state, rather than just, in general, update on new evidence and get rid of the cached thought, that seems important here.
This post is very interesting. It reminds me very much of some variations of the change scam. You seem to be describing something really similar, the rollback of information you speak of is applicable to the counting of change. I also feel like this sort of mistake happens often but I might not notice it. I feel like this deserves a name like rollback deduction failure or something.
Seems a bit Monty Hall-ish. You updated when B got out on 2 but didn’t retract your update when B re-entered. After your update, C—or maybe you were thinking “the remainder of strangers on this elevator”—had near certain chance of getting out on 3 so when B came back in it looks like you mashed the two together as “the remainder of strangers on this elevator”.
I have no clue if this phenomenon has a name or not.
Is it possible to give just the merest of hints about what the theorem might have to do with AI?
Qiaochu Yuan, a past MIRI workshop participant, gave a concise answer:
Suppose you want to design an AI which in turn designs its (smarter) descendants. You’d like to have some guarantee that not only the AI but its descendants will do what you want them to do; call that goal G. As a toy model, suppose the AI works by storing and proving first-order statements about a model of the environment, then performing an action A as soon as it can prove that action A accomplishes goal G. This action criterion should apply to any action the AI takes, including the production of its descendants. So it would be nice if the AI could prove that if its descendants prove that action A leads to goal G, then action A in fact leads to goal G.
The problem is that if the AI and its descendants all believe the same amount of mathematics, say PA, then by Löb's theorem this implies that the AI can already prove that action A leads to goal G. So it must already do the cognitive work that it wants its smarter descendants to do, which raises the question of why it needs to build those descendants in the first place. So in this toy model Löb's theorem appears as a barrier to an AI designing descendants which it can't simulate but can provably trust.
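(For reference, the standard statement of Löb's theorem and the step it blocks in this toy model; this is the textbook formulation, not part of the quoted answer.)

```latex
% Löb's theorem: for any sentence P,
%   if PA proves (Provable(P) -> P), then PA proves P.
\mathrm{PA} \vdash \bigl(\Box P \rightarrow P\bigr)
  \;\Longrightarrow\;
  \mathrm{PA} \vdash P
```

With P standing for "action A leads to goal G", a parent that proves "if my same-strength descendant proves P, then P" has proved □P → P, so by the theorem it can already prove P itself, which is exactly the obstacle described above.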
Qiaochu's answer seems off. The argument that the parent AI can already prove what it wants the successor AI to prove, and therefore isn't building a more powerful successor, isn't very compelling, because being able to prove things is a different problem from searching for useful things to prove. It also doesn't encompass what I understand to be the Löbian obstacle: that being able to prove "if my own mathematical system proves something, then that thing is true" implies that the system is inconsistent.
It's entirely possible that my understanding is incomplete, but that was my interpretation of an explanation Eliezer gave me once. Two comments: first, this toy model is ignoring the question of how to go about searching for useful things to prove; you can think of the AI and its descendants as trying to determine whether or not any action leads to goal G. Second, it's true that the AI can't reflectively trust itself and that this is a problem, but the AI's action criterion doesn't require that it reflectively trust itself in order to perform actions. However, it does require that it trust its descendants in order to construct those descendants.
As there was some interest in Soylent some time ago, I’m curious what people who have some knowledge of dietary science think of its safety and efficacy given that the recipe appears to be finalized. I don’t know much about this area, so it’s difficult for me to sort out the numerous opinions being thrown around concerning the product.
ETA: Bonus points for probabilities or general confidence levels attached to key statements.
Given that dogfood and catfood work as far as mono-diets go, I’m pretty hopeful that personfood is going to work out as well. I don’t know enough about nutrition in general to identify any deficiencies (and you kind of have to wait 10+ years for any long-term effects), but the odds are good that it or something like it will work out in the long run. I’d go with really rough priors and say 65% safe (85% if you’re willing to have a minor nutritional deficiency), up to 95% three years from now. These numbers go up with FDA approval.
Given that dogfood and catfood work as far as mono-diets go
They mostly seem to, but if they cause a drop in energy or cognitive capability because of some nutrient balance problems, the animals won’t become visibly ill and humans are unlikely to notice. A persistent brain fog from eating a poor diet would be quite bad for humans on the other hand.
Most of the selective breeding has been done while these animals were on simple diets, so perhaps some genetic adaptation has happened as well. Besides, aren’t carnivore diets quite monotonous in nature anyway?
Most of the selective breeding has been done while these animals were on simple diets
I am not so sure of that. People have been feeding cats and dogs commercial pet food only for the last 50 years or so and only in wealthy countries. Before that (and in the rest of the world, still) people fed their pets a variety of food that doesn’t come from a bag or a can.
aren’t carnivore diets quite monotonous in nature anyway?
In terms of what you kill and eat, mostly yes, but in terms of (micro)nutrients, prey not only differs from kill to kill, but each body also contains a huge variety (compared to plants).
aren’t carnivore diets quite monotonous in nature anyway?
There’s probably seasonal variation—Farley Mowat described wolves eating a lot of mice during the summer when mice are plentiful. Also, I’m pretty sure carnivores eat the stomach contents of their prey—more seasonal variation. And in temperate-to-cold climates, prey will have the most fat in the fall and the least in the early spring.
It wouldn’t surprise me if there’s a nutritional variation for dry season/rainy season climates, but I don’t know what it would be.
I actually thought this way at first, but after reading up more on nutrition, I'm slightly skeptical that Soylent would work as a mono-diet. For instance, fruits have been suggested to contain chemical complexes that assist in the absorption of vitamins. These chemical complexes may not exist in Soylent. In addition, there hasn't really been any long-term study of the toxic effects of Soylent. Almost all the ingredients are the result of nontrivial chemical processing, and you inevitably get some impurities. Even if your ingredient is 99.99% pure, that 0.01% impurity could nevertheless be something with extremely damaging long-term toxicity, for instance heavy metals, or chemicals that mimic the action of hormones.
Obviously, toxic chemicals exist in ordinary food as well. This is why variety is important. Variety in what you eat is not just important for the sake of chemicals you get, but for the sake of chemicals you don’t get. If one of your food sources is tainted, having variety means you aren’t exposed to that specific chemical in levels that would be damaging.
I still think it's promising though, and I think we'll eventually get there. It may take a few years, but I think we'll definitely arrive at a food substitute that has everything the body needs and nothing the body doesn't need. Such a food substitute would be even healthier than 'fresh food'. I just doubt that this first iteration of Soylent has hit that mark.
It seems to me that Soylent is at least as healthy as many protein powders and mass gainers that athletes and bodybuilders have been using for quite some time. That is to say, it depends on quality manufacturing. If Soylent does a poor job picking their suppliers, then it might be actively toxic.
I'd like to see creatine included, just because most people would see mental and physical benefits from supplementation. The micronutrients otherwise look good. I've read things to the effect that real food is superior to supplementation (example), so I don't think that this is a suitable replacement for a healthy diet. I do think that this will be a significant improvement over the Standard American Diet, and a step up for the majority of people.
The macronutrients also look good—especially the fish oil! 102g of protein is a solid amount for a non-athlete, and athletes can easily eat more protein if desired. Rice protein is pretty terrible to eat; I hope that they get that figured out. I'd probably prefer fewer carbs and more fat for myself, but I think that's just a quirk of my own biology.
Well, my estimates for long-term consequences would probably be:
Soylent is fine to consume occasionally -- 98%
Soylent is fine to be a major (but not sole) part of your diet -- 90%
Soylent is fine to be the sole food you consume -- 10%
Given that you didn’t mention otherwise, I assumed that you were mostly going off priors in the absence of much domain-specific knowledge, as ThrustVectoring was. I haven’t read enough of your posts to accurately gauge how heavily to weight your opinion—if my assumption is incorrect, I’d appreciate it if you would let me know.
There is no data about long-term effects of Soylent. Everyone has only priors and nothing but priors. By the way, “domain-specific knowledge” is a prior as well.
I am not sure how you are going to gauge the proper weighting for people's opinions. This is the internet, after all. If I tell you "I'm highly credentialed. Just trust me" :-D will that satisfy you?
On a bit more serious note I prefer arguments that stand on their own, regardless of their source (and its credibility or lack thereof). In fact, nutrition is such a screwed-up field that I would probably downgrade opinions from someone who claims to be a nutritionist...
This is the internet, after all. If I tell you “I’m highly credentialed. Just trust me” :-D will that satisfy you?
Eh; it would be medium-strength evidence. Even though I have no way to verify what you say, I don’t think that you have any real incentive or motive to deceive me (given that simple trolls are unlikely to amass >2K karma). :P
(I think we’ve exhausted the usefulness of this subthread, so I probably won’t respond to any replies—tapping out.)
Um. Probably lack of noticeable health/fitness problems. But yes, it’s a vague word. On the other hand, the general level of uncertainty here is high enough to make a precise definition not worthwhile. We are not running clinical trials here.
By the way, the vagueness of “major … part of … diet” is a bigger handwave here :-/
Probably lack of noticeable health/fitness problems.
The more I read about nutrition the more I come to the conclusion that most diets do have effects. Some advantages and some disadvantages.
I think there's a good chance that a diet without any cholesterol might reduce some hormone levels, and some people who look hard enough might see that as an issue.
The first one sounds underconfident (at least if you don’t count people allergic or intolerant to one of the ingredients, nor set a very high bar for what to call “fine”).
The first one can be read as saying that 2% of people occasionally drinking Soylent will have problems because of that. That doesn’t sound outlandish to me.
Thanks for your input. Are there any existing dietary replacements you recommend that are similarly easy to prepare? (Soylent Orange seems to be working well for you as a solution, but I don’t think I would actually go to the trouble to put the ingredients together.)
On a related note, do you have any new/more specific criticisms of Soylent, other than those that you presented in this post?
None that I would recommend. None of my criticisms are original; Soylent still seems a very haphazard concoction to me. I do have a bunch of specific issues with Soylent that I haven't discussed in detail, e.g. the lack of cholesterol and saturated fat not being great for hormones. But yeah, I'm not super motivated to get deep into it unless I decide to try to turn the latest variant of Soylent Orange into an actual service. I'm still working on it.
Thanks for your input. Are there any existing dietary replacements you recommend that are similarly easy to prepare?
Easy as in time requirements or easy as in money? The kind of fluid food replacement that they use in hospitals is probably better than what Soylent produces.
Liquid diets are not exactly a new idea, and most of them don’t have to be prepared at all but come in portions. Since most of them have been developed for medical use, the price tag is significantly higher. Some of them have been developed for patients who can’t swallow normal food at all, so I doubt they lack anything important that Soylent contains and probably have been much more rigorously tested. If anyone knows studies that have been done on these people, I’m all ears.
Never mind its safety, I do not like its hedonics at all.
Basic: If you are currently eating blandly enough that shifting to a liquid mono-diet for any reason other than dire medical necessity is not a major quality-of-life sacrifice, you need to reprioritize either your time or your money expenditures.
Losing one of the major pleasures of life is not a rational sacrifice. Life is supposed to be enjoyable!
Perhaps eating isn’t a major pleasure of life for everyone.
I’m imagining an analogous argument about exercise. Someone formulates (or claims to, anyway) a technique combining drugs and yoga that provides, in a sweatless ten minutes per week, equivalent health benefits to an hour of normal exercise per day. Some folks are horrified by the idea — they enjoy their workout, or their bicycle commute, or swimming laps; and they can’t imagine that anyone would want to give up the euphoria of extended physical exertion in exchange for a bland ten-minute session.
To me, that seems like a failure of imagination. People don’t all enjoy the same “pleasures of life”. Some people like physical exercise; others hate it. Some people like tasty food; others don’t care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying. And so on.
I’m imagining an analogous argument about exercise.
It’s a weak analogy as humans are biologically hardwired to eat but are not hardwired to exercise.
Some people like tasty food; others don’t care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying.
True, but two comments. First, let’s also look at the prevalence. I’m willing to make a wild approximation that the number of people who truly don’t care (and never will care) about food is about the same as the number of true asexuals and that’s what, 1-2%?
Second, I suspect that many people don’t care about food because of a variety of childhood conditioning and other psychological issues. In such cases you can treat it as a fixable pathology. And, of course, one’s attitude towards food changes throughout life (teenagers are notoriously either picky or indifferent, adults tend to develop more discriminating tastes).
Preparing food is an annoying hassle which tends to interfere with my workflow and distract from doing something more enjoyable. Food does provide some amount of pleasure, but having to spend the time actually making food that’s good enough to actually taste good (or having to leave the house to eat out) is enough of an annoyance that my quality of life would be much improved if I could just cease to eat entirely.
Soylent's creator argues that it increases the quality-of-life benefits of food, since the savings from the Soylent diet mean that when he chooses to eat out, he can afford very good quality food and preparation.
For myself, while I enjoy eating good food, I do not enjoy preparing food (good or otherwise), and in fact I enjoy eating significantly less than I dislike preparing food. So the total event (prepares good food → eats good food) has negative utility to me, other than the nutritional necessity.
Additionally, if one’s schedule is so tight that preparing simple home-made meals (nothing complicated, just stuff that can be prepared with 5 minutes of work) is out of the question, that seems like a fast route to burnout.
I noticed that the micro quantities appear to be very different between Soylent and Jevity. Dropped a post on the Soylent forum here if anyone’s interested.
I’m not ready for my current employer to know about this, so I’ve created a throwaway account to ask about it.
A week ago I interviewed with Google, and I just got the feedback: they’re very happy and want to move forward. They’ve sent me an email asking for various details, including my current salary.
Now it seems to me very much as if I don’t want to tell them my current salary—I suspect I’d do much better if they worked out what they felt I was worth to them and offered me that, rather than taking my current salary and adding a bit extra. The Internet is full of advice that you shouldn’t tell a prospective employer your current salary when they ask. But I’m suspicious of this advice—it seems like the sort of thing they would say whether it was true or not. What’s your guess—in real life, how offputting is it for an employer if a candidate refuses to disclose that kind of detail when you ask for it as part of your process? How likely are Google to be put off by it?
I work at Google. When I was interviewing, I was in the exact same position of suspecting I shouldn’t tell them my salary (which I knew was below market rate at the time). I read the same advice you did and had the same reservations about it. Here’s what happened: I tried to withhold my salary information. The HR person said she had to have it for the process to move forward and asked me not to worry about it. I tried to insist. She said she totally understood where I was coming from, but the system didn’t allow her flexibility on this point. I told her my salary, truthfully. I received an offer which was substantially greater than my salary and seemingly uncorrelated with it.
My optimistic reading of the situation is that Google's offer is mostly based on the approximate market salary for the role, adjusted perhaps by how well you did at the interviews, your seniority, etc. (these are my guesses, I don't have any internal info on how offers are calculated by HR). Your current salary is needed for future bookkeeping, statistics, or maybe in case it's higher than what Google is prepared to offer and they want to decide whether it's worth it to up the offer a little bit. That's my theory, but keep in mind that it's just a bunch of guesses, and also that it's a big company and policies may be different in different countries and offices.
I think it is worth mentioning that “the system won’t allow for flexibility on this” is just about the oldest negotiation tactic in the book. (Along with, “let me check with my boss on that...”)
In reality, there is zero reason Google, or any employer, should need to know your current or past salary information apart from that information’s ability to work as a negotiation tactic in their favor.
Google has something you want (a job that pays $) and you have something they want (skill to make them $). Sharing your salary this early in the process tips the negotiation scales (overwhelmingly) in their favor.
That said, Google is negotiating from a place of immense strength. They can choose from nearly anyone they want, while there is only one Google...
...so, if Google wants to know your salary, tell them your salary. And enjoy your career at one of the coolest companies around. You win. :)
And if salary is what matters, use them as resume points to get a higher salary somewhere else.
There are some things you may want to consider when using this strategy. For example, choose the appropriate amount of time you want to spend at Google. Too short may be suspicious, but too long would be a lost purpose if your goal is to make more money somewhere else later.
Optimize for having the most impressive CV when you leave. This means you should have an impressive-sounding job description. Think about your CV items, "on project X I worked as Y and my responsibilities were Z", and try to manage your career within Google to optimize these.
Have a plausible story about why you decided to work for Google, and why you later decided to work somewhere else. This story can also be made up later, but if you prepare it in advance, you can make it more realistic.
The simplest version of this advice would be: if you choose Google in the hope of having an impressive CV and a higher salary later, don't stay there for the next 10 years in the role of a code monkey working all the time on some completely unknown project that will be cancelled shortly after you leave.
Interview with other companies (Microsoft, Facebook, etc.) and get other offers. When the competition is other prospective employers, your old salary won’t much matter.
In real life, the sort of places where employers take offense at you not disclosing your current salary (or, generally, at salary negotiations; that is, they'd hire someone else if he's available more cheaply) are not the places you want to work at: if they're putting selection pressure for downscaling salaries, all your future coworkers are going to be, well, cheap.
This is anecdotally not true for Google; they can afford truckloads if they really want to have you on board. So this is much more likely to come from standardized processes. Also note that in Google's case, decisions are delegated to a board of stakeholders, so there isn't really one person who can be put off by salary (and they probably handle the hire/no-hire decisions entirely separately from the salary negotiations).
if they’re putting selection pressure for downscaling salaries, all your future coworkers are going to be, well, cheap.
Also, the company will probably be less likely to buy you a decent computer for work, install a new server when your department needs it, or hire new people when there is more work than you can handle. Even if you somehow don’t care about money for yourself, you probably do at least care about having decent working conditions. Maybe the just-world hypothesis makes you believe that lower salary will somehow be balanced by better working conditions, but it’s probably the other way round.
I'm a manager at a financial firm and I've hired people. I'd consider it pretty normal not to want to say. "Everyone" knows that trying to get the other person to name a number first is a common negotiating tactic; no real grownup is going to take it personally or get upset about this.
I don’t know how “normal” a company Google is in this way, but I’d guess it’s pretty normal.
If you are challenged on this, you can try stating it as a rule: “I’m not prepared to discuss my current salary, I’m here to talk about working for Google.” Or, “As a policy I don’t disclose my current salary. I’m sure you understand.” Or make up some blah about how that’s proprietary information for your current employer and you don’t feel comfortable disclosing it.
If they absolutely refuse to process your application without this (which is a bad move on their part if they really want you, but some companies are stubborn that way), other options are to fudge your number upwards somehow, though personally I wouldn’t try the ones that actually involve telling a literal lie:
Give them a wide range of expectations instead of your current salary. Say that of course it depends on the other details of the offer, any other offers you might get, etc.
Roll in as much stuff as you plausibly can (adding in bonuses or other moneylike benefits, and making an adjustment if the cost of living in Googleland is higher than where you live now). Example: I could add my salary of $80k, my last year’s bonus of $5k (or next year’s bonus, or my average bonus in percentage terms, whichever is highest), and my $2k transit benefits for a total of $87k.
Round up to the nearest $10k and say it’s an approximate figure. So if I make $87k I might say I’m in the ballpark of $90k.
State a range (e.g. if I made $69k I might say I make something in the high 5 figures, or somewhere near the $70-80k range)
My guess is that a mild refusal would be acceptable to Google. They are unlikely to be put off by a change of subject.
A hard refusal might annoy them, if they persist in asking.
I suggest naming a very high figure first, to gain benefit from the anchoring effect. Otherwise, mentioning your salary will make it the anchor for the negotiation. Google has a reputation for paying high salaries.
This reminded me to ask about a similar question: I am currently interviewing. Assuming I get an in-person interview, that will involve a long flight. I feel like I shouldn’t tell my current employer that I’m interviewing until I have an offer, but in order to hide it I presumably will have to take holiday on fairly short notice, have a plausible reason for why I’m taking it, and generally act like I’m not taking a long flight to an interview. There’s a chance that I’ll have to do this multiple times. (Though ideally I’d take multiple in-person interviews in the same trip.)
I don’t particularly like the idea of doing this. It feels deceitful and stressful. How bad an idea would it be to just let my employer know what’s going on?
How bad an idea would it be to just let my employer know what’s going on?
Extremely bad. People have been fired or denied promotion because of this. Don’t even tell any of your colleagues.
I am not discussing the legal aspects of this, but you will probably be perceived as not worth investing in the long term. Imagine that your interview fails and you decide to stay. Your current employer is not going to trust you with anything important anymore, because they will be expecting you to leave soon anyway.
Okay, this may sound irrational, because you are not your employer's slave, and technically you (and anyone else) are free to leave sooner or later. But people still make estimates. It is in your best interest to pretend to be a loyal and motivated employee, until the day you are 100% ready to leave.
It feels deceitful
This is part of human nature; it is what we have evolved to do. Even your dislike for deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can’t find the link now.
I found it by searching site:overcomingbias.com social network. The key was generalizing from the specific LinkedIn to "social network," though I can't say why I thought to do that.
It is only deceitful if you haven't made an honest effort to improve your situation in your current company. It is just as deceitful to stay silent and not give your employer a chance to improve your salary or your position.
It depends on the type of company, of course. There are those that see you as an exchangeable human resource, where it may be appropriate to see the company as a slave owner from whom the truth of your escape has to be hidden.
But there are companies where honesty about work situations is seen as interest in the company and critique used as feedback to improve the environment.
Salary negotiations will always be tough, though. Strictly comparing offers is the only reliable way to sell yourself. Everything else is falling prey to the salary negotiation tricks of the business world.
EDIT: I'm from Germany, so my view may be country-specific.
In this case, I don't want to leave; it's just that there are things I want more than I want to stay. Not that it couldn't be improved, but they probably can't offer anything to change my mind.
But there are companies where honesty about work situations is seen as interest in the company and critique used as feedback to improve the environment.
If you are in such company, that’s great! Try to improve things; provide the feedback.
But don’t mention the fact that you are doing interviews with another company.
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can’t find the link now.
I can’t find it either. Nothing on OB, nothing in Google for ‘Linkedin “Robin Hanson”’ or sharpened to add ‘hypocrisy’. Sure it was Linkedin he was talking about?
Even your dislike for deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
I think I ADBOC. It’s not like the “disliking to be deceitful” gene evolved to make its bearers lose the game.
Certainly there are risks to being honest, but there are also benefits. Admittedly, the most salient one to me right now is “I don’t want to treat my current employer poorly”, and I’m not sure that lying about going to interviews is actually significantly worse than merely not telling them about interviews.
It’s not like the “disliking to be deceitful” gene evolved to make its bearers lose the game.
For people who properly compartmentalize, it helps to win the game. By signalling a dislike of deceit, they gain other people's trust… and then at the right moment they do something deceitful and (if they are well-calibrated) most likely profit from it.
It’s the people in the valley of bad rationality who may lose the game when they realize all the consequences and connections, and try to tune their honesty up to eleven. (For example by telling their boss that they would be willing to quit the company if they had a better offer from somewhere else, and that they actually look at the available information about other companies.)
It depends a lot on your company, so I think your inside view will be better than our outside view. I told my employer when I went out to do a tryout with CFAR, and that went well. One reason I told my boss was that, if I were hired, I'd need to scramble to get all my projects annotated well enough to be able to hand them off seamlessly, and I didn't want her to be left in the lurch or to make any plans that hinged on having her quant around for the next month. (Hiring sometimes took a while at my old company.)
My boss really appreciated my being forthright and it saved me a lot of tsuris. I think it also worked better because it was expected that people in my role (Research Associate) wouldn’t stick around forever.
Yes, not leaving my employer in the lurch is important to me, but I do feel like they expect me to be around for a while. I’m glad to hear of your positive experience.
Almost everyone finds an explicit refusal to answer offputting. Don’t do it. But that doesn’t mean that you should actually answer. Usually a good choice is to answer a different question, such as to make them an offer.
I don’t feel qualified to answer your question, though if I were to make a guess, I wouldn’t expect them to be put off by refusal. Assuming Google behaves at least somewhat rationally, they should at this point have an estimate of your value as an employee and it doesn’t seem like your current salary would provide much additional information on that.
So, the question is, to what extent Google behaves rationally. This ties to something that I always wonder whenever I read salary negotiation advice. What is the specific mechanism by which disclosing current salary can hurt you? Yes, anchoring, obviously. But who does it? Is the danger that the potential employer isn’t behaving rationally after all and will anchor to the current salary, lowering the upper bound on what they’re willing to offer? Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn’t hurt you at all)?
Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn’t hurt you at all)?
I would guess this one. It can make you ask for less, with almost zero effort on the employer's side; they don't even have to read your answer. So the benefit-to-cost ratio of asking you this question is huge. And even if it doesn't work on some people, it most likely does on average, so it can save a lot of money.
The mechanism that seems most important to me doesn't really involve any sort of cognitive bias much. It goes like this. You are on (say) $50k/year. You are good enough that you'd be good value at $150k/year, but you'd be willing to move if offered $60k/year, if that were all you could get. You apply for a new job and have to disclose your current salary to every prospective employer. So you get offers in (say) the $60k-80k range, because everyone knows, or at least guesses, that that's enough to tempt you and that no one else will be offering much more. You might get a lot more if you successfully start a bidding war, but otherwise you're going to end up paid way less than you could be.
Note that everyone in this scenario acts rationally, arguably at least. Your prospective employer offers you (say) $75k. This would be irrational if you’d turn that down but accept a higher offer. But actually you’ll take it. This would be irrational if you could get more elsewhere. But actually you can’t because no one else will offer you much more than your current salary.
(You could try telling them that you have a strict policy of not taking offers that are way below what you think you’re worth, in the hope that it’ll stop them thinking you’d accept an offer of $75k. But you might not like the inference they’d draw from that and your current salary.)
Obvious note: Of course people care about lots of other things besides money, your value to one employer isn’t the same as your value to another, etc. This has only limited effect on the considerations above.
I was recently in a similar position, but I nonetheless managed to negotiate a large salary increase by taking a job in a different city, quoting the salary level that I wanted, and pleading cost-of-living increases when I was asked to justify it. They did negotiate me down by about $5000, and I wouldn’t say I’m quite at market rates yet for my level of experience, but it did seem to successfully anchor the negotiations on my asking price rather than my previous salary.
The new city actually did have a higher cost of living than the old one, but I get the impression that the hiring manager didn’t care about the actual rate so much as he cared about having a rationale that looked good on paper.
Well, assuming your example numbers, if my work would bring $150k+$x/year and the company didn’t hire me because I refused to take $60k/year, instead demanding, say, $120k/year (over twice the current salary, how greedy), then they just let $30k+$something/year walk out the door. Would they really do that (assuming rational behavior blah blah)?
I don’t see how they would benefit from playing the game of salary-negotiating chicken to the bitter end. Having a reputation for not offering market salaries for people with unfortunate work history? That actually sounds like it could be harmful.
The company doesn't really know your true value. If you are really worth $150k, it raises the question of why you can't get your present employer to pay you that wage. Your present employer has a lot more information about your skills than they do.
Something’s brewing in my brain lately, and I don’t know what. I know that it centers around:
-People were probably born during the Crimean War/US Civil War/the Boxer Rebellion who then died of a heart attack in a skyscraper, in a passenger plane crash, or by being caught up in, say, WWII.
-Accurate descriptions of people from a decade or two ago tend to seem tasteless. (Casual homophobia) Accurate descriptions of people several decades ago seem awful and bizarre. (Hitting your wife, blatant racism) Accurate descriptions of people from centuries ago seem alien in their flat-out implausible awfulness. (Royalty shitting on the floor at Versailles, the Albigensian Crusade, etc...)
-We seem no less shocked now by social changes and technological developments, and no less convinced that everything major under the sun has been done and only tweaks and refinements remain, than people of past eras were.
I guess what I'm saying is that the Singularity seems a lot more likely, in a factually supported way, than it otherwise might have seemed, but we won't realize we're going through it until it's well underway, because our perception of such things will also wind up speeding up for most of it.
We seem no less shocked now by social changes and technological developments, and no less convinced that everything major under the sun has been done and only tweaks and refinements remain, than people of past eras were.
I do expect the future to be different.
I could imagine a future where people see making LSD illegal as being just as strange as making homosexuality illegal.
I can imagine that Google's rent-an-AI-car service will completely remove personal ownership of cars in a few decades. This removes cars as status symbols, which means that they will be built around other design criteria, like energy efficiency.
I can imagine a constructed language possibly overtaking English.
There are a lot of other things that are more vague.
"Cthulhu always swims left" isn't an observation that on every single issue society will settle on the left's preferences, but that the general trend is leftward movement. If you interpreted it as the former, the fall of the Soviet Union and the move away from planned economies would be a far more important counterexample.
Before continuing, I should define how I'm using left and right. I think they are real in the sense that they are the coalitions that tend to form under current socioeconomic conditions, when, due to the adversarial nature of politics, you compress very complicated preferences into as few dimensions (ideally one) as possible. Principal component analysis makes for a nice metaphor here.
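(A toy illustration of the PCA metaphor on purely synthetic data: when many issue positions are driven by one underlying alignment, the first principal component captures most of the variance, i.e. the single left-right axis. The data and numbers below are made up for illustration.)

```python
# Synthetic sketch of the PCA metaphor: 10 issue positions generated from one
# hidden "alignment" score plus noise; the first component dominates.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_issues = 200, 10
alignment = rng.normal(size=(n_people, 1))        # hidden one-dimensional score
loadings = rng.normal(size=(1, n_issues))         # how each issue maps onto it
noise = 0.3 * rng.normal(size=(n_people, n_issues))
preferences = alignment @ loadings + noise        # observed issue positions

centered = preferences - preferences.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print(f"first component explains {explained[0]:.0%} of the variance")
```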
Back to Cthulhu. As someone whose preferences can be described as right-wing, I would be quite happy with returning to 1950s levels of state intervention, welfare, and relative economic equality in exchange for that period's social capital and cultural norms (controlling for technological progress, obviously). Some on the far right of mainstream conservatism might accept the same trade. This isn't to say I would find it a perfect fit, not by a long shot, but it would be a net improvement. I believe most Western far right people would accept this trade, most Western far left people would not accept this trade. And in America at least, centrists would be uncomfortable both with that level of state intervention and the social norms of the 1950s.
Now that we have this claim about revealed preferences, let's invoke a very simple heuristic. Imagine two players playing a zero-sum game of politics who are offered the chance to move the game back to the position it had 50 moves ago. One player accepts, the other refuses. Ceteris paribus, which one do you think is winning?
The "who would prefer to return 50 years back?" argument is interesting, but I think the meaning of "winning" has to be defined more precisely. Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful as you. Would you describe this situation as your victory?
In some sense, yes, it is an improvement of your relative power. In other sense, no, I am still more powerful. You may be hopeful about the future, because the first derivative seems to work for you. On the other hand, maybe the second derivative works for me; and generally, predicting the future is a tricky business.
But it is interesting to think about how the time dimension is related to politics. I was thinking that maybe it's the other way round; that "the right" is the side which self-identifies with the past, so in some sense it is losing by definition—if your goal is to be "more like yesterday than like today", then tautologically today is worse according to this metric than yesterday was. And there is a strong element of returning to the past in some right-wing movements.
But then I realized that some left-wing movements have this component too. I remember communists emphasising that millennia ago humans lived in perfectly egalitarian hunter-gatherer societies (before the surplus value was taken by the evil slavers / feudal lords / capitalists), so when the true communism comes, this ancient harmony will be restored. Similarly, some feminists (maybe just a small minority of them, I don't know) have stories about how exactly the ancient matriarchal societies were organized, so overthrowing patriarchy would kinda restore this ancient order.
At this moment my working hypothesis is that "returning to the perfect past" is simply a universal human bias, and the main political difference is where exactly your Golden Age is located. Then it would seem that the right wing puts the Golden Age in the more recent past, while the left wing prefers prehistorical societies.
That makes it pretty likely that in its heart, the left is about returning towards our hunter-gatherer instincts and abandoning as much as possible of our disappointing civilization, while the right is about insisting on some specific adaptations to scarcity. Something like what Yvain said, with the connotational objection that the category of danger does not include only zombies, but also criminals or dysfunctional bureaucracies, which are everyday reality for some people. Generally, as we improve economically, we can afford to remove some of the adaptations to scarcity; the trade-offs that are no longer necessary. But sometimes while doing so we fuck up things horribly and the scarcity returns; often in a way that university professors don’t notice, simply because it does not happen to them.
Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful as you. Would you describe this situation as your victory?
I would describe it as me playing very well for the past 50 years and the game going my way.
Cthulhu always swims left isn’t an observation that on every single issue society will settle on the left’s preferences, but that the general trend is leftward
Dropping the “always” might lead to less confusion on this point.
Why does politics compress issues into a single linear dimension?
I suppose it makes sense in a two-party system, but why do parliamentary systems with several small parties mainly have parties on the extremes, rather than mainly having parties with orthogonal preferences that could ally with either major party? In principle, a Green party ought not to care much about the left-right axis, but in practice it cares very much.
Why does politics compress issues into a single linear dimension?
I don't know; it might be an artifact of representative systems in the West, where the all-important thing is getting the majority vote. "Left" vs. "Right" being a strong signal of who your allies tend to be seems to work pretty well descriptively for most people's political identities and preferences.
A slight nitpick: I wouldn’t describe politics as anything remotely near zero-sum. The actions of the players of a country’s game of politics have very far-reaching effects on the citizens and residents of that country, and in some cases of the residents of the entire world.
The actions of the players definitely do affect the world at large in ways outside the scope of the game, which makes it about as far from zero-sum as it could possibly be. I’m pretty sure that this changes the outcome of your thought experiment dramatically.
That depends on the definition of “left.” What is your definition?
(I am skeptical that “left” is a useful concept.)
Moldbug sometimes seems to define it as Puritan or Protestant more generally. But at other times he seems to say that the two are the same, but not by definition.
In the early 19th century, the Temperance movement was made up of the same people seeking the abolition of slavery and women's suffrage. Surely this counts as left? A century later it won by having a broader base, but it was still Protestant. Indeed, much of the appeal was as a way to attack Catholic immigrants.
Marijuana and cocaine were banned at about the same time as alcohol. One interpretation was that this was a side effect of the temperance movement. (This is more clear in the case of cocaine, which was a dry run of prohibition; less clear with marijuana, which was banned later.) Another interpretation is that the bans were part of a race war (just like alcohol). That sounds right-wing, but what is your definition? The prohibition of LSD was much later and seems much more clearly right-wing.
I am talking only of America. I believe that prohibitions elsewhere were imported. If you think America is right-wing, that makes them seem right-wing.
Added: I forgot opium. It was nationally banned with cocaine, but it was banned in SF much earlier, when the Temperance movement was a lot weaker. I think most local Prohibitions were left-wing, but opium in SF might not fit that.
Marijuana and cocaine were banned at about the same time as alcohol. One interpretation was that this was a side effect of the temperance movement. (This is more clear in the case of cocaine, which was a dry run of prohibition; less clear with marijuana, which was banned later.)
I think the ban on marijuana in 1937 was a win for DuPont business interests. From a 19th-century right/left perspective, that's difficult to parse as either left or right.
Yes, the commercial aspects probably pushed it over the line despite it not being banned earlier, but the fact that lots of things were banned suggests that there is probably a common cause and that the commercial aspects were only secondary. Whether the common cause is that one group opposed everything, or that one group opposed alcohol and moved the Overton window, is harder to decide.
If I had to guess, I’d say that as Konkvistador is against democracy and voting in general, he wants voting rights to be denied to everyone, and as such, starting with 51% of the population is a good step in that direction.
starting with 51% of the population is a good step in that direction
Sure, but the process would likely have hysteresis depending on which group you remove first, and “women” doesn’t seem like the best possible choice to me—even “people without a university degree” would likely be better IMO.
Maybe it is because of our instincts that scream at us that every woman is precious (for the long-term survival of the tribe), but the males are expendable. Taking the votes away from the expendable males could perhaps get popular support even today, if done properly. The difficult part in dismantling democracy is the votes of women.
(Disclaimer: I am not advocating dismantling democracy by this comment; just describing the technical problems.)
If you stop thinking of democracy as sacred and start seeing letting various groups vote as a utility calculation, you start looking at questions like how various groups vote, how politicians attempt to appeal to them, and what effect this has on the way the country winds up being governed.
It’s not just a question of whether they vary, it’s whether they vary in a way that systematically correlates with better (or worse) decisions. Also there are Campbell’s law considerations.
I chose those examples in particular because in the United States the movement behind prohibition, making prostitution illegal and expanding the franchise to women was basically one and the same.
I disapprove of voting, obviously. I chose it as the example because in the US the same movements argued for making these three things, among others, the way they are in the first place.
But these are not, seemingly, as different as, say, the discovery of LSD. Or psychotropics. Or the establishment of homosexuality as relatively innate. Or the invention of the car, or the very first creation of a constructed language.
The invention of the car wasn’t that big a deal. At the beginning it wasn’t clear that cars are all that great. It took time for people to figure out that cars are much more awesome than horse carts.
I think you underrate the effects of legalizing LSD. If you legalize all drugs, you have to ask yourself questions such as why pharma companies pay a lot of money for clinical trials when all substances can be legally sold. As a society you have to answer those questions.
As for the establishment of homosexuality as relatively innate, I think you have to keep in mind how vague the term homosexuality happens to be. At the moment homosexuality seems to be an identity label. To me it’s not clear that this will be the case in 200 years.
A lot of men who fuck other men in prisons don’t see themselves as homosexual. Plenty of people who report that they had pleasurable sex with a person of the same sex don’t label themselves as homosexual.
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind and that doesn’t mean just hugging a person for a minute. I can imagine a society in which casual touches between people are a lot more intimate than they are nowadays and behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
If you run twin studies you find that being overweight has a strong genetic factor. The same goes for height. Yet the average of both changed a lot during the last two hundred years. The notion of something being innate might even be a remnant of what Nietzsche called the God in the grammar. It might not be around in 100 years anymore as it exists nowadays.
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind and that doesn’t mean just hugging a person for a minute. I can imagine a society in which casual touches between people are a lot more intimate than they are nowadays and behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
This futuristic society of casual male intimacy was known as the 19th century.
In it, in the Russia of the 1950s, and in the modern Middle East you could observe men dancing together, holding hands, cuddling, sleeping together and kissing.
More generally, ISTM that displays of affection between heterosexual men correlate negatively with homophobia within each society but positively across societies. (That’s because the higher your prior probability for X is, the more evidence I need to provide to convince you that not-X.)
If I look at that description it seems to me that the current way of seeing homosexuality won’t be permanent.
It seems being homosexual became a separate identity to the extent that people focused on not engaging in certain kinds of intimacy to signal that they aren’t gay.
If the stigma against homosexuality disappears, homosexuality as identity might disappear the same way.
The word “homosexuality” is even in decline in Google Ngrams.
There’s a distinction occasionally drawn between homosexual and gay; homosexual is the sexual preference, gay is the cultural lump/stereotype populated mainly by homosexuals. So the ‘metrosexual’ thing in the early 00s was a kind of fad for heterosexual men adopting gay culture.
This distinction is mainly drawn to point out that the political right’s objection is largely to ‘gay’ rather than to ‘homosexual’.
Under this distinction: Men who prefer to have sex with men rather than women are homosexual. Men who prefer to have sex with women rather than men are heterosexual.
Prison sex may be homosexual (that’s a matter of fuzzy definitions), but (under this distinction) definitely isn’t gay.
This distinction is mainly drawn to point out that the political right’s objection is largely to ‘gay’ rather than to ‘homosexual’.
No, the political right’s objection is to people engaging in homosexual sex and to popular culture telling people this is a normal and healthy thing to do. The subtler objection is to it telling people that if they find 19th century style male bonding appealing it means that they’re “gay” and should thus engage in homosexual sex.
I see no reason to believe that is the case; gay culture, by its nature of growing out of highly liberal communities during the 60s and 70s, is highly hedonistic and permissive, both things the political right objects to already. That they strongly dislike (perceived) core attributes of this culture, and dislike homosexuality only by association with it, looks like a strictly simpler hypothesis than that they dislike (perceived) core attributes of this culture and also, independently, dislike homosexuality.
In short: Occam appears to be on my side, so you’ll need some evidence for that.
Read what traditionalists actually write for one thing. They’re against hedonistic behaviors and that includes homosexual sex (this is not the only reason they’re against it). Notice that this was true long before the current cultural concept of what it means to “act gay”.
The subtler objection is to it telling people that if they find 19th century style male bonding appealing it means that they’re “gay” and should thus engage in homosexual sex.
What? ISTM it’s right-wingers who say things like that. EDIT: I guess I had misread that (I had read “should” as ‘are likely to’ rather than ‘had better’), in which case… what??? I can’t remember anyone ever suggesting anything remotely like that with a straight face, and I know plenty of left-wingers; are you sure you aren’t attacking a straw man?
I guess I had misread that (I had read “should” as ‘are likely to’ rather than ‘had better’), in which case… what??? I can’t remember anyone ever suggesting anything remotely like that with a straight face,
They tend to phrase it as encouraging people to “find out if they’re gay”, i.e., encourage people to declare themselves “gay” if what amounts to 19th century style male bonding appeals to them. Furthermore, once someone has been declared “gay” it’s considered a horrendous hate crime to discourage him from engaging in homosexual sex.
They tend to phrase it as encouraging people to “find out if they’re gay”, i.e., encourage people to declare themselves “gay” if what amounts to 19th century style male bonding appeals to them.
Never heard that either.
Furthermore, once someone has been declared “gay” it’s considered a horrendous hate crime to discourage him from engaging in homosexual sex.
And once someone has been declared “straight” it’s considered a horrendous hate crime to discourage him from engaging in heterosexual sex (except by fundamentalist Christians and the like, but that also applies to gay sex), so what’s your point?
And once someone has been declared “straight” it’s considered a horrendous hate crime to discourage him from engaging in heterosexual sex
Encouraging “gays” to become “straight” is considered a hate crime, encouraging “straights” to become “gay” is framed as encouraging them to “find out if they’re gay” and considered commendable.
Also, at least in the US, encouraging “straights” to hold off until marriage is considered old fashioned but not nearly as bad as attempting to deconvert “gays”. The latter has in fact been made illegal in California.
encouraging “straights” to become “gay” is framed as encouraging them to “find out if they’re gay” and considered commendable.
What the hell are you talking about? AFAICT nearly all straight people I know would find such an, ahem, encouragement quite annoying at the very best, and most of them would be utterly disgusted by it. “I’m flattered, but I’m straight” said with a poker face is about as positive a reaction as I’d ever anticipate seeing.
You and Eugine seem to be talking past one another;
He’s saying that society tends to see it as (at worst) a bit of a faux pas for a gay man to try to get a straight to switch teams whereas a gay converter is one step off from an SS officer in terms of the hatred they get.
You, on the other hand, seem to be talking about how annoyed straight guys get when being harassed by gays trying to convert them, and presumably vice versa. That people get pissed off, with good reason, when people try to dictate terms to them on whom they desire.
Oddly enough, both of you are right. It is much more acceptable for gay men to be “straight chasers” and try to get straight guys to “come out” than it is for Christians to be “deconverters” and try to get gay guys to “find Jesus,” at least everywhere I’ve lived (admittedly, my favorite cities tend to be pretty deep blue). People confronted with this kind of obnoxious behavior don’t appreciate it in either case, but the straight guy has to be a lot more careful not to say anything “offensive” to the guy grabbing him (God forbid throwing a punch) than the gay guy who can tell the pastor to go to hell and walk off with the full force of the law / media behind him.
There seems to be a pretty big asymmetry here that you’re ignoring. Christian “deconverters” aren’t simply saying “Hey, why don’t you try straight sex? You might end up enjoying it.” They’re saying “There is something deeply wrong with your sexual orientation and you will suffer eternally unless you sincerely attempt to change it.” I doubt that attempts to convert straight men result in higher rates of depression or suicide among them.
The appropriate analog of the gay “straight chasers” you’re talking about would be a straight woman who attempts to “convert” gay guys by, say, trying to convince them to sleep with her, maybe because she likes the challenge. Do you think such a person would also be seen as one step off from an SS officer?
The appropriate analog of the gay “straight chasers” you’re talking about would be a straight woman who attempts to “convert” gay guys by, say, trying to convince them to sleep with her, maybe because she likes the challenge. Do you think such a person would also be seen as one step off from an SS officer?
BTW, IME straight men who manage to convince lesbians to sleep with them usually inspire awe, not disgust. (I can’t think of any concrete examples of the gender-reversed situation, which you described.)
Or the establishment of homosexuality as relatively innate.
When did this actually happen? All the arguments I’ve seen boil down to either the “it shows up on brain scans and is thus innate” fallacy, or if you don’t agree it’s innate you must be an EVIL HOMOPHOBE!!!11!!
if you don’t agree it’s innate you must be an EVIL HOMOPHOBE!
What? I can’t see why knowing that genetics (assuming that’s what’s meant by “innate”) affects how likely people are to commit violent crimes would make me dislike violent criminals any less, nor why knowing that (say) the concentration of lead in the air also affects that would make me dislike them any more.
I can’t see why knowing that genetics (assuming that’s what’s meant by “innate”) affects how likely people are to commit violent crimes would make me dislike violent criminals any less,
Well, there are a lot of people arguing that we should go easy on violent criminals since “it’s not their fault”. I don’t agree with this argument, but a lot of people seem to be convinced by it.
Relative to what? If “lots of things” are “relatively” something, your standards are probably too low.
Yes, twin studies give a simple upper bound to the genetic component of male homosexuality, but it is very low. As an exercise, you might try to name 10 things with a lower genetic contribution. But I think defining “innate” as “genetic” is a serious error, endemic in all discussions of human variety.
Added, months later: Cochran and Ewald suggest as a benchmark leprosy, generally considered an infection, not at all innate. Yet it has (MZ/DZ) twin concordance of 70/20. For something less exotic, TB is 50/20. That’s higher than any reputable measure of the concordance of homosexuality. The best studies I know are surveys of twin registries: in Australia, there is a concordance of 40/10 for Kinsey 1+ and 20/0 for Kinsey 2+; in Sweden, 20/10 and 5/0.
Since everybody in this subthread is talking about the numbers without mentioning them, from Wikipedia:
Biometric modeling revealed that, in men, genetic effects explained .34–.39 of the variance [of sexual orientation], the shared environment .00, and the individual-specific environment .61–.66 of the variance. Corresponding estimates among women were .18–.19 for genetic factors, .16–.17 for shared environmental, and .64–.66 for unique environmental factors.
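(For readers unfamiliar with what “biometric modeling” means here: the quoted figures are standardized variance components from what is, roughly, the standard ACE twin model. A sketch of the generic form, not necessarily the paper’s exact specification:

$$\operatorname{Var}(\text{phenotype}) = \underbrace{a^2}_{\text{additive genetic}} + \underbrace{c^2}_{\text{shared environment}} + \underbrace{e^2}_{\text{unique environment}}, \qquad a^2 + c^2 + e^2 = 1$$

so for men the quoted estimates read as a² ≈ .34–.39, c² ≈ .00, and e² ≈ .61–.66.)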
Numbers like “.34–.39” imply great precision. In fact, that is not a confidence interval, but two point estimates based on different definitions. The 95% confidence interval does not exclude 0 genetic contribution. I’m getting this from the paper, table 1, on page 3 (77), but I find implausible the transformation of that raw data into those conclusions.
Ok, taboo “relatively innate”. The common analogy used in the ‘civil rights’ arguments is to things like skin color. By that standard homosexuality is not innate.
I can’t speak for Bayeslisk, but I’d say it means that things other than what happens to you after your birth have a non-negligible effect (by which standard your accent is hardly innate). But I agree it’s not a terribly important distinction.
The common analogy used in the ‘civil rights’ arguments is to things like skin color. By that standard homosexuality is not innate.
I probably agree. (But of course it’s a continuum, not two separate classes. Skin colour also depends on how long you sunbathe and how much carotene you eat, yadda yadda yadda.)
The problem is that human social mores seem to change on the order of 20-40 years which is consistent with the amount of time it takes a new generation of people to take the helm and for the old generation to die out. I have personally seen extreme societal change within my own country of origin, change that happened in only the span of 30 years. In comparison, Western culture over this same time has seemed almost stagnant (despite the fact that it, too, has undergone massive changes such as acceptance of homosexuality).
However, by some estimates, we are already just 20-40 years away from the singularity (2035-2055). This seems like too short a time for human culture to adapt to the massive level that is required. For instance, consider a simple thing like food. Right now, the idea of eating meat that has been grown in a lab seems unsettling and strange to many people. Now consider what future technology will enable, step-by-step:
Food produced by nanotech with simple feedstock, with no slow and laborious cell growth required.
Food produced by nanotech with household waste, including urine and feces (possibly the feces of other people as well), thus creating a self-contained system.
Changing human biochemistry so that waste is simply recycled inside our bodies, requiring no food at all, and just an energy source plus some occasional supplements.
Uploading brains. Food becomes an archaic concept.
There is likely to not be a very large span of time between each of these steps.
Absent mass mind uploading, I doubt that food in some relatively recognizable form will ever die out, or that we will ever find it economically feasible to eat food known to be made from human waste. Sunlight and feedstock are cheap, people get squicked easily, and stuff that’s stuck around for a long time is likely to continue sticking around. You may as well say we’ll outgrow a need for fire, language, or tools; indeed, I’d believe any of those over the total abandonment of food.
Absent mass mind uploading, I doubt that food in some relatively recognizable form will ever die out, or that we will ever find it economically feasible to eat food known to be made from human waste.
Fecal transplants do seem to have some health benefits.
There are people who do drink their own urine. Spirulina can be grown on urine.
Algae also have the advantage that they are single-cell organisms, which means that it’s easy to introduce new genes into them via DIY-bio efforts.
That means you can easily change the way the stuff tastes and let it produce vitamins and other substances. If you want a cheap source of THC you can transplant the relevant genes needed to produce the THC into an algae and grow it at home in a way that isn’t as easily discovered as growing hemp.
You can trade different algae species and get more interesting compounds than THC.
Few people do, and I doubt that it will catch on; spirulina can also be grown on runoff fertilizer, which will probably sound more appealing to most people.
Oh, makes sense. That’s not food, though; that’s a very easy organ(?) transplant.
You don’t transplant the organ but the feces. They get processed in the intestine. Stuff that enters the body to be processed in the intestine is food for some definition of “food”.
But once you accept the goal to get feces into the gut, the way is only a detail that’s open to change.
No, I know that the colon is not transplanted; the flora is. Hence the (?). Also, it hopefully doesn’t get processed but rather survives to colonize the gut. Further, an enema would probably be far more effective, given its lack of strong acid and pepsin designed to kill the flora.
Few people do, and I doubt that it will catch on; spirulina can also be grown on runoff fertilizer, which will probably sound more appealing to most people.
Sounding appealing is a question of marketing. Plenty of people prefer organic food grown with animal feces over food grown with “chemical” fertilizer. They even pay more money for the product.
I also think you underrate the cost of fertilizer for some poor biohacker in Nairobi who has plenty of access to empty bottles. Human urine should also be pretty cheap to buy in third world megacities.
Access to cheap natural gas and oil is also central to the current way of doing agriculture. Without cheap access to those resources, resource reuse might be a bigger deal.
human social mores seem to change on the order of 20-40 years which is consistent with the amount of time it takes a new generation of people to take the helm and for the old generation to die out
If there’s a causal link here, then it’s possible the biggest problem with social change and technological advances would be due to increased longevity, in which case it might not matter how long the time span is… even if there were decades, it wouldn’t be enough.
In some sci-fi settings they have rules where people above a certain ‘age’ can’t directly enter politics anymore. Although I’m not sure exactly how effective that would be, since they would still hold power and influence, and human nature seems to be that we allow more power and influence to the elderly than to the young.
Vinge said something of the sort—that the Singularity would be unimaginable from its past, but after the Singularity (he’s assuming one which includes humans), the path to the Singularity will be known, and it will seem quite plausible.
That’s something a little different—I think that’s already talked about here. Maybe under the Hindsight Bias? At any rate, I’m not talking about looking back; I’m talking about looking from within. The march of history is almost always too slow to see, and even with a significant speedup it’d still probably seem “normal”. Only right at the end would it be clear that a Singularity is occurring.
I want my family to be around in the far future, but they aren’t interested. Is that selfish? I’m not sure what I should do, or if I should even do anything.
I don’t think the odds are good. Getting serious about cryonics will break a whole bunch of implicit assumptions about the order of life, and people who haven’t signed out from the norms and conventions layer of mainstream society to the degree of your average outcast LessWronger are going to be keenly aware of the unspoken rules that are being broken.
Telling people that there’s reason to think cryonics is a valid option and that you support it is good, but trying to get to the bottom of all disagreements beyond that seems like taking it on yourself to make a religious fundamentalist relative accept evolution. It’s probably not going to happen, because the surface level argument is tied up to a head full of invisible machinery that won’t respond to reasoning about technical feasibility.
Do you think that any resuscitation technology, including defibrillation, is a sin (or insert their favorite objection against cryonics)?
How about one that enables resuscitation on a longer time frame? How long is still OK? Hours? Days? Years?
Would you take a treatment that makes one feel younger and live longer?
Would you approve of being cooled down for a day or two until a life-saving liver/heart/kidney transplant is available? What if it requires cooling deep enough that your heart stops beating?
Their replies, if any, might give you a hint of their true objections. If they are truly religious in nature, and your family attends church regularly, consider having a talk with your local pastor (or whatever religious authority figure they look up to). To paraphrase Ender’s Game and HPMoR, children’s opinions have zero weight, so try to engage someone who is actually listened to.
They weren’t arguing that it wouldn’t work. They think that being revived is selfish, that spending money on having your head frozen is selfish, and my mom says she wants to die. The old death=good cached thought seems to be one of the main driving factors. She also said there’d be no place for her in the future, that the world might be inconceivably different and strange, and that she would be unable to deal with it.
When I explained that some thousand people have done it, and a lot more are signed up, she said that was only “insane rich eccentrics” and when I explained that ordinary people do it, she said some nasty things about those people, along the lines of calling them nuts.
My main question was related towards figuring out if I should keep pursuing it, and try to change their minds, or if I should respect their wishes. I don’t know what the right thing to do in this situation is- because saving lives is very important, but respecting others’ rights is also pretty important. But the difficulty of this situation is compounded, because I’m angry with her and I don’t want to give up because I’m angry.
I don’t know what the right thing to do in this situation is- because saving lives is very important, but respecting others’ rights is also pretty important.
First, there is no objectively right thing to do. At this point you are expending effort on an essentially selfish goal: saving your mother’s life against her current wishes. Not that “selfish” is in any sense bad or negative. But if you actually cared about saving lives in general, you would apply your effort where it is more likely to pay off. Your current position is no more defensible than hers: you selfishly want her to have a chance to live in some far future with you, she selfishly disregards your wishes and wants to expire when it’s her time. Certainly telling her that her wishes are less valid than yours is not likely to convince her. You can certainly point out that by deciding to forgo cryo she behaves just as selfishly as you do by wanting her to sign for cryo. Maybe then you and her can discuss what “selfish” means to each of you, and maybe have some progress from there. Of course, you should be fully prepared to change your mind and do your best to steelman her arguments. Can you make them better than she does, have her agree and then discuss potential weaknesses in them?
My main question was related towards figuring out if I should keep pursuing it, and try to change their minds, or if I should respect their wishes … the difficulty of this situation is compounded, because I’m angry with her.
As the first step I would recommend to stop being angry with her.
Also keep in mind that for a true-believer Christian cryonics is basically trying to cheat oneself out of heaven—not a very appealing idea :-/
The old death=good cached thought seems to be one of the main driving factors.
Have you read this? Might give you some useful tools to speak against that idea.
I don’t know what the right thing to do in this situation is- because saving lives is very important, but respecting others’ rights is also pretty important.
Would you rather act on your own preferences, or some lesswrongian’s?
I’m angry with her and I don’t want to give up because I’m angry.
Anger is temporary, so not a great basis for long term decisions. Also, anger will affect your tone and therefore make you less convincing.
I feel my own judgement is suspect on this occasion. I don’t know. I want to help her and she’s alternating between being incredibly blase and being furious with me. It’s not like I can just point her at some books to read, because she and my dad don’t like to read. And the things that convinced me, my parents regard as rubbish or nonsense and get-your-head-out-of-space-go-get-married-and-be-normal-goddamnit!
If I continue to pursue this, either the relationship between my parents and me will suffer and they won’t choose to freeze themselves, or they’ll choose to freeze themselves and our relationship won’t suffer. Large risk, large benefit.
My other consideration is to attempt to be subtle, plant the seeds in their heads that give them the sense that maybe the world doesn’t work how they think it does (I managed to convince my dad that the earth was old and that dinosaurs did not roam the earth with humans this way, so it has some merit.)
(I managed to convince my dad that the earth was old and that dinosaurs did not roam the earth with humans this way, so it has some merit.)
This aside’s quite important; it sounds like the inferential distance between you and your parents is huge. Trying to bridge it in one fell swoop is quite ambitious, so I’d err towards a slow & subtle approach. (Not that I have much experience with this problem!)
I think subtlety usually works the best with stubborn individuals, but might easily backfire now that you’ve been in their face. If you were to use that strategy, I’d recommend you let the issue settle for a while so that they don’t immediately see what you’re trying to do. If they realize you’re manipulating them, that might make them even less susceptible to your ideas. Planning is the key, unless it’s an emergency.
Don’t try to push an idea in a way that costs you something.
When it comes to convincing others it helps to understand the other person. Nobody gets angry if you show genuine interest in how they think the world works. Listen a lot.
It might also help to reduce the number of things that make her furious with you. If those didn’t exist, it might be easier to convince her on other questions.
(YouTube version for people who don’t want to download QuickTime.)
Huh, a major Hollywood movie about superintelligence, uploading and the Singularity that seems like its creators might actually even have a mild clue of what they’re talking about. Trailers can always be misleading, of course, but I’ll have to say that this looks very promising—expect to enjoy this one a lot.
I’m not sure how to react to that. While the trailer does get some points correct (an intelligence explosion is dangerous, and much smarter than you things can likely do stuff that you can’t even imagine) it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction, akin to the rebooted Outer Limits. And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Given that the film probably doesn’t end up with all of humanity being dead, it probably rather overstates than understates the safety of uploads.
it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction
I didn’t get that vibe: it looked like the terrorists blowing up AI labs were being depicted as being bad (or at least not-good) guys, whereas some of the main characters seemed genuinely conflicted and torn about whether to try to upload their friend in an attempt to save him, and whether to even keep him running after he’d uploaded. If they had been going for the hubris angle, I would have expected a lot more of a gung-ho attitude towards building potential superintelligences.
And maybe I’m reading too much into it, but I get the feeling that this has a lot more of a shades-of-gray morality than is normal for Hollywood: e.g. it’s not entirely clear whether the terrorists really are bad guys, nor whether the main character should have been uploaded, etc.
And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Well, there’s only as much that you can pack into a two-hour movie while still keeping it broadly accessible. If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that’s still a big win. A movie doesn’t need to communicate every subtlety of a topic if it regardless gets people to read up on the topic on their own. (Supposedly science fiction has historically inspired a lot of people to pursue scientific careers, particularly related to e.g. space exploration, though I don’t know how accurate this common-within-the-scifi-community belief is.)
If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that’s still a big win.
If that were it (couple major concepts semi-accurately, the rest entertainment/drama), I’d agree. However, “imagine a machine with a full range of human emotion” (quote from the trailer) and the invariable AI-stopped-by-stupid-gimmicks ending (there’s gonna be a happy ending) is more likely to create yet another Terminator-style distortion/caricature to fight. The false concepts that get planted along with the semi-accurate ones can do large net harm by muddling the issue using powerful visual saliency cheats (how can “boring forum posts” measure up against flashy Hollywood movies?).
“Oh, you’re into AI safety? Yea, just like Terminator! Oh, not like that? Like Transcendence, then?” anticipatory facepalm
I expect that any people whose concepts get hopelessly distorted by this movie would be a lost cause anyway. Reasoning correctly about AI risk already requires the ability to accept a number of concepts that initially seem counterintuitive: if you can’t manage “this doesn’t work the way it does in movies”, you probably wouldn’t have managed “an AI doesn’t work the way all of my experience about minds says a mind should work” either.
Granted. Still, the general public is never going to have an accurate understanding of any complex concept, be that concept evolution, climate change, or the Singularity. The understanding of non-specialists in any domain is always going to be more or less distorted. The best we can hope for is that the popularizations that make the biggest splash are even semi-accurate so that the popular understanding won’t be too badly distorted: and considering everything that Hollywood could have done with this movie, this looks pretty promising.
There’s been some prior discussion here about the problem of uncertainty of mathematical statements. Since most standard priors (e.g. Solomonoff) assume that one can do a large amount of unbounded arithmetic, issues of assigning confidence to, say, 53 being prime are difficult, as are issues connected to open mathematical problems (e.g. how does one discuss how likely one is to estimate that the Riemann hypothesis is true in ZFC?). The problem of bounded rationality here seems serious.
I’ve run across something that may be related, and at minimum seems hard to formalize. For a mathematical statement A, let F(A) be “A is provable in ZFC” (you could use some other axiomatic system, but this seems fine for now). Let G(A) be “A will be proven in ZFC by 2050”. Then one can give examples of statements A and B, where it seems like P(F(A)) is larger than P(F(B)) but the reverse holds for P(G(A)) and P(G(B)).
The example that originally came to mind is technical: let A be the statement “ZPP is contained in P^X where X is an oracle for graph isomorphism” and let B be the statement “ZPP is contained in P^Y where Y is an oracle that answers whether Ackermann(n)+1 has an even or odd number of distinct prime factors.” The intuition here is that one expects Ackermann(n)+1 to be essentially random in the parity of its number of distinct prime factors, and a strong source of pseudorandom bits forces collapse of ZPP. However, actually proving that Ackermann(n)+1 acts this way looks completely intractable. In contrast, there’s no strong prior reason to think graph isomorphism has anything to do with making ZPP type problems easier (aside from some very minor aspects) but there’s a lot of machinery out there that involves graph isomorphism and people thinking about it.
So, is this sort of thing meaningful? And are there other more straightforward, less complicated or less technical examples? I do have an analog involving not math but space exploration. P(Life on Mars) might be lower than P(Life on Europa) even though P(We discover life on Mars in the next 20 years) might be higher than P(We discover life on Europa in the next 20 years) simply because we send so many more probes to Mars. Is this a helpful analog or is it completely different?
A: “The number of prime factors of 4678946132165798721321 is divisible by 3”
B: “The number of prime factors of 9876216987326578968732678968432126877 8498415465468 5432159878453213659873 1987654164163415874987 3674145748126589681321826878 79216876516857651 64549687962165468765632 132185913574684613213557 is divisible by 2”
P(F(A)) is about 1/3 and P(F(B)) is about 1/2.
But it’s far more likely that someone will bother to prove A, just because the number is much smaller.
ETA: To clarify, I don’t expect it to be particularly hard to prove or disprove, I just don’t think anyone will bother.
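For anyone who wants to see the heuristic behind those numbers, here is a minimal Monte Carlo sketch (assuming Python with sympy, and counting prime factors with multiplicity, which is one of the two possible readings of “number of prime factors”): for random largish integers the count of prime factors comes out roughly uniform mod 2 and mod 3, which is where the 1/2 and 1/3 come from.

```python
# Sanity check of the heuristic: for "random" integers, the number of prime
# factors (counted with multiplicity) is roughly uniform mod 2 and mod 3.
# This checks only the heuristic, not the specific 22- and ~200-digit numbers.
import random
from collections import Counter
from sympy import factorint

random.seed(0)
samples = 2000
mod2, mod3 = Counter(), Counter()

for _ in range(samples):
    n = random.randrange(10**9, 10**10)      # random 10-digit integers: cheap to factor
    omega = sum(factorint(n).values())       # prime factors with multiplicity
    mod2[omega % 2] += 1
    mod3[omega % 3] += 1

print("mod 2:", {r: c / samples for r, c in sorted(mod2.items())})
print("mod 3:", {r: c / samples for r, c in sorted(mod3.items())})
```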
Whether someone will bother really depends on why someone wants to know. You can simply type “primefactors of 9876216987326578968732678968432126877” into Wolfram Alpha and get your answer. It’s not harder than typing “primefactors of 4678946132165798721321” into Wolfram Alpha.
I don’t know if this was due to an edit, but the second number in Khoth’s post is far larger than 9876216987326578968732678968432126877, and indeed Alpha won’t factor it.
To be honest I’m sort of surprised that Alpha is happy to factor 4678946132165798721321, I’d have thought that that was already too large.
The reason nobody will bother is that it’s just one 200 digit number among another 10^200 similar numbers. Even if you care about one of them enough to ask Wolfram Alpha, it’s vanishingly unlikely to be that particular one.
Technically it is harder, since there are more digits; apart from the additional work involved this also makes more opportunities for mistakes. In addition, of course, the computer at the other end is going to have to do more work.
If there’s some new hypothesis it’s likely to be proven or disproven quickly. If you look at an old one, like the Riemann hypothesis, that people have tried and failed to prove or disprove, it probably won’t be proven or disproven any time soon. Thus, it’s not hard to find something that is more likely to be proven quickly than the Riemann hypothesis, but is still less likely to be true.
Let A = “pi is normal”, and B = “pi includes in it as a contiguous block the first 2^128 digits of e”. B is more likely to be provable in ZFC, simply because A requires B but not vice versa. A is vastly more likely to be proven by 2050. Is this a valid example, or do you see it as cheating in some way?
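One can at least poke at a drastically scaled-down version of B. A minimal sketch (assuming Python with mpmath; the actual 2^128-digit block is of course far beyond any computation, which is the whole point):

```python
# How long a prefix of e's digits appears as a contiguous block in the first
# 100,000 digits of pi?  A toy version of B: each extra digit required makes a
# match roughly 10x less likely, which is why the full statement, while very
# probably true if pi is normal, is hopeless to verify directly.
from mpmath import mp

N = 100_000
mp.dps = N + 10                                  # a little slack for rounding
pi_digits = mp.nstr(mp.pi, N).replace(".", "")
e_digits = mp.nstr(mp.e, 40).replace(".", "")

k = 0
while k < len(e_digits) and e_digits[: k + 1] in pi_digits:
    k += 1
print("longest matching prefix of e:", e_digits[:k], f"({k} digits)")
```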
I’m not sure if this question is meaningful/interesting. It may be, but I’m not seeing it.
Suggested repair of your example: A= “Pi is normal” and B= “Pi includes as a contiguous block the first 2^128 digits of e within the first Ackermann(8) digits” which should do something similar.
Eliezer said in his Intelligence Explosion Microeconomics that Google is maybe the most likely candidate to start the FOOM scenario.
I’ve gotten the impression that Google doesn’t really take this Friendliness business seriously. But beyond that, what is Google’s stance towards it? On the scale of “what useless daydreaming”, “an interesting idea but we’re not willing to do anything about it”, “we may allocate some minor resources to it at some point in the future”, or something else?
Are there solid examples of people getting utility from Lesswrong?
The Less Wrong community is responsible for me learning how to relate openly to my own emotions, meeting dozens of amazing friends, building a career that’s more fun and fulfilling than I had ever imagined, and learning how to overcome my chronic bouts of depression in a matter of days instead of years.
As opposed to utility they could get from other self-help resources?
Who knows? I’m an experiment with a sample size of one, and there’s no control group. In the actual world, other things didn’t actually work for me, and this did. But people who aren’t me sometimes get similar things from other sources. It’s possible that without Less Wrong, I might still have run across the right resources and the right community at the right moment, and something else could have been equally good. Or maybe not, and I’d still be purposeless and alone, not noticing my ennui and confusion because I’d forgotten what it was like to feel anything else.
I did self help before I joined lesswrong, and had almost no results. I’d partially credit Lesswrong with changing me in ways such that I switched my major from graphic design to biology, in an effort to help people through research. I’ve also gotten involved in effective altruism in my community, starting the local THINK club for my college, which is donating money to various (effective) charities. I have a lovely group of friends from the Lesswrong study hall who have been tremendously supportive and fun to be around. There are a number of other small things, like learning about melatonin, which fixed my insomnia...etc. but those are more of a result of being around people who are knowledgeable of such things, not necessarily lesswrong-people.
True. It’s surprisingly difficult to think about the hypothetical figures since I’m not short on cash, can’t seem to make myself much happier spending more money, and still don’t know any viable alternative to LW. It also seems thinking about this in terms of a subscription fee instead of getting a cash offer changes the figures significantly, which I guess tells us something about the diminishing marginal utility of money.
This makes me wonder if there are any threads here discussing how to convert money into experiential happiness. ETA: yes there are.
how to convert money into experiential happiness. ETA: yes there are.
I am wary of such type of advice because it almost always aims itself at an average person. Someone who is not average might not find such advice useful and it could turn out the be misleading and harmful.
Also a large part of it comes from psychology papers which are, um, not an unalloyed source of truth.
Well, that depends on the person, doesn’t it? Some are sufficiently different and some are not.
Generic advice is generic. Only you can prevent wildfires.. err.. decide whether it is appropriate specifically for you or not. My point is really that you shouldn’t treat it as “scientifically established” gospel and get unhappy if you are weird enough for it not to apply.
Guessing here is a bad idea though, because it is specifically in relation to an area where people are known to be bad at predicting their own responses.
decide whether it is appropriate specifically for you or not.
Understanding if reading lesswrong is more or less a waste of time than other internet stuff I read.
I think that depends a lot on how you interact with it. You can read a post on commitment contracts and adopt the technique or you can read the post and just accept the new information. The impact on your life will be very different.
I’m not sure if TDT is available elsewhere as I gave up on self-help books many years ago.
I don’t know about self-help books, but the moral advice to choose as if you are choosing more than the immediate consequences is found in moral philosophy.
“I want to be the kind of agent that chooses X (habitually), therefore I will choose X (now)” reasoning can be found in virtue ethics, although the argument there is based on habit and character development rather than being an algorithm. Aristotle discusses the importance of practicing good decisions in the Nicomachean Ethics: “Similarly we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.”(source)
“I want to live in a world where people choose X, therefore I will choose X” is a line of reasoning I’ve heard connected to the Jewish moral idea of tikkun olam, though I don’t have a source on that.
I agree that is similar to TDT but I would say it is too vague and general for it to have been much use to me. Part of the advantage of Lesswrong—or any internet-based medium—is that people can comment and sharpen ideas.
A while back, I posted in an open thread about my new organisation of LW core posts into an introductory list. One of the commenters mentioned the usefulness of having videos at the start and suggested linking to them somehow from the welcome page.
Can I ask who runs the welcome page, and whether we can discuss here whether this is a good idea, and how perhaps to implement it?
What’s so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don’t care so much about rationality, and specifically I don’t really see why having the human-style half-assed implementation of it around is considered a good idea.
“Rationality” as used around here indicates “succeeding more often”. Or if you prefer, “Rationality is winning”.
That’s the idea. From the looks of it, most of us either suck at it, or only needed it for minor things in the first place, or are improving slowly enough that it’s indistinguishable from “I used more flashcards this month”. (Or maybe I just suck at it and fail to notice actually impressive improvements people have made; that’s possible, too.)
[Edit: CFAR seems to have a better reputation for teaching instrumental rationality than LessWrong, which seems to make sense. Too bad it’s a geographically bound organization with a price tag.]
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation. Or at least to measure winning, so we could say whether CFAR lessons contribute to winning.
Sometimes income is used as a proxy for winning. It has some problems. For our purposes I would guess a big problem is that the changes of income within a year or two (which is how long CFAR has been providing workshops) are mostly noise. (Also, for employees this metric could be more easily optimized by preparing them for job interviews, helping them to optimize their CVs, and pressuring them into doing as many interviews as possible.)
The biggest issue with using income as a metric for ‘winning’ is that some people—in fact, most people—do not really have income as their sole goal, or even as their most important one. For most people, things like having social standing, respect, and importance, are far more important.
I think the point was government handout programs. This is a massive external control on many people’s incomes, and it is part of how the world is not a meritocracy.
(Please note, I ADBOC with CellBioGuy, so don’t take my description as anything more than a summary of what I think he is trying to say.)
This is closer to what I was getting at. Above someone mentioned government assistance programs, which is also true to a point but not really what I meant (another ‘disagree connotatively’).
I was mostly going for the fact that circumstances of birth (family and status not genetics), location, and locked-in life history have far more to do with income than a lot of other factors. And those who make it REALLY big are almost without exception extremely lucky rather than extremely good.
The value of income varies pretty widely across time and place (let alone between different people), so using it as a metric for “winning” is highly problematic. For instance, I was mostly insensitive to my income before getting married (and especially having my first child) beyond being able to afford rent, internet, food, and a few other things. The problem is, I don’t know of any other single number that works better.
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation.
Since in the local vernacular rationality is winning, you need no measures: the correlation is 1 by definition :-/
Sometimes income is used as a proxy for winning.
It’s a very bad proxy as “winning” is, more or less, “achieving things you care about” and income is a rather poor measure of that. For the LW crowd, anyway.
Talk of “rationality as winning” is about instrumental rationality; when Viliam talks about the correlation between rationality and winning, it’s not clear whether it’s instrumental rationality (making the best decisions towards your goals) or epistemic rationality (having true beliefs), but the second one is more likely.
But even if it’s about instrumental rationality, I wouldn’t say that the correlation is 1 by definition: I’d say winning is a combination of luck, resources/power, and instrumental rationality.
winning is a combination of luck, resources/power, and instrumental rationality
Exactly. And the question is how much can we increase this result using the CFAR’s rationality improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? The values 1% and 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains of rationality among the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of “proofs that rationality works”, but it could also mean that 1 person in 10 gains 1000% and 9 of 10 gain nothing; in which case you would have a lot of “proofs that rationality doesn’t work” and a few exceptions that could be explained away (e.g. by saying that they were so talented that they would get the same results also without CFAR).
It would be also interesting to know the curve for increases in winning by increases in rationality. Maybe rationality gives compound interest; becoming +1 rational can give you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because by removing the non-rational parts of your life you gain additional resources. Or maybe it is actually the other way round; becoming +1 rational gives you 100% more winning, and becoming +2 and +3 rational only gives you additional 10% and 1% more winning, because you have already picked all the low-hanging fruit.
The shape of this curve, if known, could be important for CFAR’s strategy. If rationality follows the compound interest model, then CFAR should pick some of their brightest students and fully focus on optimizing them. On the other hand, if the low-hanging fruit is more likely, CFAR should focus on some easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
By the way, for the effective altruist subset of the LW crowd, income (the part of it donated to effective charity) is a good proxy for winning.
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let’s assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates values “success” and “failure”. For a typical smart but not rational person, the coin generates 90% “success” and 10% “failure”. For an x-rationalist, the coin generates 99% “success” and 1% “failure”. If your experiment consists of doing one coin flip and calculating the winners, most winners will not be x-rationalists, simply because of the base rates.
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and have 10 “successes”, then the players have total chances of 35% vs 90%. That seems like a greater difference, although the base rates would still dwarf this.
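The arithmetic behind the 35% vs. 90% figures, and the way the gap explodes once there are many failure opportunities (the point made next), as a quick sketch in Python:

```python
# Probability of an unbroken run of successes for the two coins above.
for n in (1, 10, 100):
    p_typical, p_xrational = 0.90 ** n, 0.99 ** n
    print(f"{n:3d} flips: typical {p_typical:.3g}   x-rationalist {p_xrational:.3g}")
# n = 10 gives roughly 0.35 vs 0.90; at n = 100 the typical person almost
# certainly fails somewhere, while the x-rationalist still succeeds ~37% of the time.
```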
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations which are complex in a way that makes hundreds of such failures possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure would be unlikely.
I just don’t know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
This is getting better, slowly. Workshops are going on in Melbourne sometime in early 2014 (February?), and they’re looking to do more internationals going forward.
Rationality is the process of humans getting provably better at predicting the future. Evidence based medicine is rational. “traditional” and “spiritual” medicine are not rational when their practitioners and customers don’t really care whether their impression that they work stands up to any kind of statistical analysis. Physics is rational, its hypotheses are all tested and open to retesting against experiment, against reality.
When it comes to “winning,” it needs to be pointed out that rationality when consciously practiced allows humans to meet their consciously perceived and explicitly stated goals more reliably. You need to be rational to notice that this is true, but it isn’t a lot more of a leap than “I think therefore I am.”
One could analyze things and conclude that rationality does not enhance humanity’s prospects for surviving our own sun’s supernova, or does not materially enhance your own chances of immortality, both of which I imagine strong cases could be made for. While being rational, I continue to pursue pleasure and happiness and satisfaction in ways that don’t always make sense to other rationalists, and to the extent that I find satisfaction and pleasure and happiness, I don’t much care that other rationalists do not think what I am doing makes sense. But ultimately, I look at the pieces of my life, and my decisions, through rational lenses whenever I am interested in understanding what is going on, which is not all the time.
Rationality is a great tool. It is something we can get better at, by understanding things like physics, chemistry, engineering, applied math, economics and so on, and by understanding human mind biases and ways to avoid them. It is something that sets humans apart from other life on the planet and something that sets many of us apart from many other humans on the planet, being a strength many of us have over those other humans we compete with for status and mates and so on. Rationality is generally great fun, like learning to drive fast or to fly a plane.
And if you use it right, you can get laid, and then have more data available for determining if that’s what you REALLY want.
So far, humans are life’s best bet for surviving the day our Sun goes supernova.
Not to detract from your point, but that’s pretty unlikely. Unless it becomes a part of a tight binary star several billion years down the road, when it has turned into a white dwarf. Of course, by then Earth will have been destroyed during the Sun’s red giant stage.
So far, humans are life’s best bet for surviving the day our Sun goes supernova.
This is a pedantic point in context, but our solar system almost certainly isn’t going to develop into a supernova. There’s quite a menagerie of described or proposed supernova types, but all result either from core collapse in a very massive star (more than eight or so solar masses) or from accretion of mass (usually from a giant companion) onto a white dwarf star.
A close orbit around a giant star will sterilize Earth almost as well, though, and that is developmentally likely. Though last I heard, Earth’s thought to become uninhabitable well before the Sun develops into a giant stage, as it’s growing slowly more luminous over time.
Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (they’re too resource intensive), shouldn’t we do that? Say the AI awakes, figures out that the probability of intelligence given life is very high, but that the probability of life staying around given the destructive tendencies of human intelligence is not so good. Call it an ecofascist AI if you want. Wouldn’t that be desirable iff the probabilities are as stated?
I think you’re wrong about your own preferences. In particular, can you think of any specific humans that you like? Surely the value of humanity is at least the value of those people.
Then there may, indeed, be no rational argument (or any argument) that will convince you; a fundamental disagreement on values is not a question of rationality. If the disagreement is sufficiently large—the canonical example around here being the paperclip maximiser—then it may be impossible to settle it outside of force. Now, as you are not claiming to be a clippy—what happened to Clippy, anyway? - you are presumably human at least genetically, so you’ll forgive me if I suspect a certain amount of signalling in your misanthropic statements. So your real disagreement with LW thoughts may not be so large as to require force. How about if we just set aside a planet for you, and the rest of us spread out into the universe, promising not to bother you in the future?
CAE_Jones answered the first part of your question. As for the second part, the human-style half-assed implementation of it is the best we can do in many circumstances, because bringing to bear the full machinery of mathematical logic would be prohibitively difficult for many things. However, just because it’s hard to talk about things in fully logical terms, doesn’t mean we should just throw up our hands and just pick random viewpoints. We can take steps to improve our reasoning, even with our mushy illogical biological brains.
I would like to ask for help with one thing, however.
The book is in lay terms, and tries to be as non-technical as possible, so I’ve not been able to find an answer to my question online that hasn’t assumed my having more knowledge than I do.
Can anyone give me a real-life example of a series of results where the assumption of exchangeability holds and it isn’t a Bernoulli series?
Let’s say that we have a box of weighted coins. Some are more likely to fall heads; others tails. We pull one out and flip it many times. The flips are identical, so we can switch the order. They are independent conditional on knowing which coin was chosen, but ahead of time they are dependent, the one telling us about the choice of coin and thus about the other. De Finetti’s theorem says that all exchangeable sequences take this form.
Added: Actually, de Finetti’s theorem only applies to infinite sequences. Here’s an example of a finite exchangeable sequence that doesn’t fit the theorem: draw balls from a box without replacement. This can only go on until the box is empty. And of course you can combine the two: randomly choose a box with at least n balls and then pull out n balls without replacement.
Added: A crazy model that is exchangeable is Pólya’s urn. It is not obvious that it is exchangeable, let alone that the conclusion of de Finetti’s theorem applies. Pólya’s urn contains balls of two colors, the initial numbers of which are known. Every time you draw one out, you put k of the same color back. If k=1, this is drawing with replacement; if k=0, this is drawing without replacement, both of which are exchangeable. And if k is a larger integer, it is also exchangeable. Here is an idea of how to see the exchangeability. What if we are somehow confused about the size of the balls, and think that they are r times bigger than they really are? Then each time we remove an actual ball, we’re removing 1/r part of a confused ball. That’s like removing 1 confused ball and putting back 1-1/r balls. Thus k=1-1/r is like drawing without replacement, but with this confusion. This is exchangeable. Thus the model is exchangeable for infinitely many values of k, which verifies some identities for infinitely many values of k, which is probably enough to verify it as an algebraic identity.
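Here is a minimal simulation sketch (Python, using the convention above that the drawn ball is removed and k balls of its colour are put back; the starting counts and k=2 are arbitrary choices for illustration) that checks exchangeability empirically: every ordering of the same multiset of colours comes out roughly equally frequent.

```python
import random
from collections import Counter
from itertools import permutations

def draw_sequence(n, red=2, blue=1, k=2, rng=random):
    """Draw n balls from a Polya urn: the drawn ball is removed and k balls
    of the same colour are put back (so k=1 is with replacement, k=0 without)."""
    counts = {"R": red, "B": blue}
    seq = []
    for _ in range(n):
        total = counts["R"] + counts["B"]
        colour = "R" if rng.random() < counts["R"] / total else "B"
        counts[colour] += k - 1      # remove the drawn ball, put k of that colour back
        seq.append(colour)
    return "".join(seq)

random.seed(0)
trials = 200_000
freq = Counter(draw_sequence(3) for _ in range(trials))

# Exchangeability: every ordering of two reds and one blue should be about equally likely.
for seq in sorted(set("".join(p) for p in permutations("RRB"))):
    print(seq, round(freq[seq] / trials, 4))
```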
A lot of things modern “conservatives” consider traditional are recent innovations barely a few decades or a century old. Chesterton’s fence doesn’t apply to them.
I would guess that this comment came out of the discussion about homosexuality and male to male intimacy between friends further down in the thread.
Drug prohibition is also something that’s roughly a century old, and Konkvistador wrote a post saying that he would, under some circumstances, be okay with getting rid of it.
Why doesn’t Chesterton’s fence apply to “recent innovations”? It applies to everything whose origin you don’t know—the time frame doesn’t matter much.
A stronger case for Chesterton’s fence can be made for older over recent innovations. I guess I should write an essay to explain the arguments for this; I forgot this wasn’t widely talked about outside a certain IRC channel.
A stronger case for Chesterton’s fence can be made for older over recent innovations
Hm. I would expect the reverse. The Chesterton’s Fence argument is about knowing the purpose of something and being able to understand the consequences of changing it. With older traditions both are harder. Granted, there is the offsetting factor that over the course of years (or centuries) no one was bothered enough to change it—an evolutionary argument, sort of—but an appeal to the wisdom of ancestors is not the same thing as Chesterton’s Fence.
The Chesterton’s Fence argument is about knowing the purpose of something and being able to understand the consequences of changing it. With older traditions both are harder.
This is turning the argument on its head.
The point isn’t that knowing a purpose for something is a reason to keep the thing. If we know the reason for it and judge it good, of course we shall keep it. Banal. If we know a reason for a thing, and judge it bad, then the argument isn’t an encouragement to keep it either. No, Chesterton’s Fence is the argument that us not knowing the reason behind something is a reason to keep it. Applying it to things for which we can easily learn why they are there is pretty much redundant as far as heuristics go.
Let me quote directly from his book The Thing (1929). In the chapter entitled “The Drift from Domesticity” he writes:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
What you say here is reasonable, but it is completely unrelated to your comment that started this thread. If, as in your original comment, people are mistaken about the age of their traditions, they are ignorant of the origins, and thus Chesterton’s advice to learn the origin applies.
This isn’t directly related to that argument, no; like I said, I would need an essay to explain that (and I’ve started writing one). I was correcting a misreading of the classical Chesterton’s fence.
Chesterton’s Fence is the argument that us not knowing the reason behind something is a reason to keep it.
Kinda. I actually read it as an argument for passivity unless you know what you’re doing.
Not knowing the reason for something is a “reason to keep it”—well, it’s a reason to not do anything. If that something gets destroyed by, say, a force of nature, would Chesterton’s Fence tell you to rebuild it? No, I don’t think so.
Chesterton’s Fence is primarily a warning against hubris, against pretending to contain all the reasons of the world in your head. It is, basically, an entreaty to consider unknown unknowns, especially if you have evidence of their workings in front of you.
Not knowing the reason for something is a “reason to keep it”—well, it’s a reason to not do anything. If that something gets destroyed by, say, a force of nature, would Chesterton’s Fence tell you to rebuild it? No, I don’t think so.
“Force of nature” is misleading in the context where it is likely to be applied. No social norms or institutions subsist without maintenance. But let me keep it and tweak it a bit: if you could easily prevent the force of nature from destroying the fence, would you say the argument encourages you to do so?
Chesterton’s Fence is the argument that us not knowing the reason behind something is a reason to keep it.
Here’s a Bayesian counterargument for cultural practices:
Culture is more likely to have retained the instruction “Do X!” but not retained knowledge of X’s original purpose, if that purpose is not relevant any more.
If X’s purpose is still relevant, then retaining and teaching about X’s original purpose provides greater incentive for learning and teaching X, making X more likely to be retained. But if X’s original purpose is not still relevant, then retaining knowledge of the original purpose is a disincentive to learn and teach X itself, making X less likely to be retained. So, given that X is still taught, learning that its original purpose is known is evidence that it is still relevant; whereas learning that it is not known is evidence that it is not still relevant.
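A toy numerical version of this argument (the numbers are invented purely for illustration): let $R$ be “X’s purpose is still relevant” and $K$ be “X’s original purpose is still known”, and suppose
$$P(K \mid R) = 0.8, \qquad P(K \mid \lnot R) = 0.4, \qquad P(R) = 0.5.$$
Then
$$P(R \mid K) = \frac{0.8 \cdot 0.5}{0.8 \cdot 0.5 + 0.4 \cdot 0.5} = \tfrac{2}{3}, \qquad P(R \mid \lnot K) = \frac{0.2 \cdot 0.5}{0.2 \cdot 0.5 + 0.6 \cdot 0.5} = \tfrac{1}{4}.$$
So, given that X is still taught, learning that its purpose is known raises the probability of relevance, and learning that it is not known lowers it, exactly as stated.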
If X’s purpose is still relevant, then retaining and teaching about X’s original purpose provides greater incentive for learning and teaching X, making X more likely to be retained. But if X’s original purpose is not still relevant, then retaining knowledge of the original purpose is a disincentive to learn and teach X itself, making X less likely to be retained. So, given that X is still taught, learning that its original purpose is known is evidence that it is still relevant; whereas learning that it is not known is evidence that it is not still relevant.
If you are using the model of memetic selection, then useful things X are unlikely to have the true explanations of why they are useful attached to them, but rather the most virulent ones. Sometimes they are the same, but obviously often they aren’t. After all, Robin Hanson gets a lot of low-hanging fruit showing us how, for example, school isn’t about learning, etc.
Sometimes the most persistent combination would be a behavior or practice without an explicit explanation at all.
“The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics.”—G. H. Hardy, A Mathematician’s Apology (1941)
Is LW the largest and most established online forum for discussion of AI? If yes, then we should be aware that we might be underestimating how widespread LW’s, or at least EY’s, ideas about AI are among the people that matter, like AI researchers.
I say this because I come across a lot of comments lamenting that the world’s AI researchers aren’t more aware of friendliness on the level that is discussed here. I might also just be projecting what I think is the sentiment here; in that case, just ignore this comment. Thoughts?
A full half (20/40) of the posts currently under discussion are meetup threads.
Can we please segregate these threads to another forum tab (in the vein of the Main/Discussion split)?
Edit: And only 5 or so of them actually have any comments in them.
I might as well point out my solution- I’ve set the date of the Austin meetup to be six years from now, and edit the date each week. It stays on the map, it stays on the sidebar (so I remember to edit the date- if this were automatic, then it could be correct), and it stays out of discussion.
This issue has been brought up many times, and I agree that it’s a major problem. The solution I suggested was to have all the meetup locations be brought together into a single weekly meetup thread, with all the city names in the title. This could be done either automatically with a little bit of coding, or by just having someone do the coordination. I even volunteered to be the one doing the coordinating. But no one seemed to be interested in actually agreeing to do it. I still stand by my suggestion, if it is adopted.
It seems to me that the forum format is ill suited for the subject matter in the first place. Unless the intent is to use the discussion forum raise awareness of the meetups, it seems to me that a website specifically designed for the meetups would make more sense. Especially since most people aren’t going to have much interest in a meetup more than, say, 100 miles away from where they live. If people really want to be notified about the meetups without having to go to a separate website, couldn’t that be accomplished through an RSS feed or some such solution? Granted, that would take more effort on individual LWers, but I have doubts about how much clutter should be accepted simply to make becoming aware of every meetup as effortless as possible.
Is there a way on my end to tell my computer to not include any post that includes “Meetup” in the title?
I think it works to have meet-ups on this site rather than in a separate blog, but they shouldn’t be separate posts in discussion.
Frank Adamek has done what you suggest for years. That you don’t notice it being done is a pretty bad sign about the idea. If you want to contribute, you should be trying to get people to use his system, rather than trying to introduce a new system. Or maybe you should suggest modifications. But the first step is knowing the current system.
I’m aware of Frank’s posts to main. It came up during the last discussion about this idea. What I am suggesting is to remove the individual meetup threads from discussion, to clear up the clutter. In addition, the meetup cities would be right up there in the title (to respond to objections that having a single thread would result in reduced visibility). Instead of everyone submitting to discussion and then someone gathering up everything in main, everyone would simply submit to the person doing the coordinating. The reason I proposed myself as a volunteer was that I didn’t know if Frank would be willing to do this, given that it would require daily correspondence with the people organizing the meetups.
I don’t know how typical I am, but I check Discussion a lot more often than Main.
That’s because Discussion has a lot more activity, right?
I’m not sure how much of the difference is more activity. It feels like a higher proportion of things I’m interested in, but that could just be more frequency of things I’m interested in.
Yes I think the proposed aggregated meetup threads should be in discussion.
… and of those five, for two of them the comments consist of me complaining that the meetup location hasn’t been included in the title.
That being said, personally I don’t mind the meetup posts that much, and I’m not sure that moving them to their own section would be an improvement. I find it pretty likely that nobody would ever look there.
Next iteration: meetup announcements occupy their own tab, top of Discussion starts with an “ad” line about recent announcements, in a bright color or otherwise distinguished: “Recent meetup announcements: Moscow, Tel-Avid, Boulder, London”, every city is a link.
If true, what should we infer about the policy of having them cluttering up Discussion?
That policy forces everybody to see the meetup announcements, and thus probably increases meetup attendance (and knowing your announcement will have a wide (forced) public encourages people to create meetups).
No, it doesn’t. Partially because of the meetup clutter I don’t look at the posts page at all and just go straight into comments.
And what is the cost-benefit analysis for forcing everyone to read about meetups all over the globe?
Recent work shows that it is possible to use acoustic data to break public key encryption systems. Essentially, if one can send specific encrypted plaintext then the resulting sounds the CPU makes when decrypting can reveal information about the key. The attack was successfully demonstrated to work on 4096 bit RSA encryption. While some versions of the attack require high quality microphones, some versions apparently were successful just using mobile phones.
Aside from the general interest issues, this is one more example of how a supposedly boxed AI might be able to send out detailed information to the outside. In particular, one can send data at surprisingly high bandwidth, even accidentally, through acoustic channels.
Now that’s creepy.
Eh… if an attacker has the level of physical access to the CPU that’s required to plant a microphone, you have worse problems than acoustic attacks.
For personal devices the attacker may have access to the microphone inside the device via flash/java/javascript/an app, etc.
If the attacker can run code on your device, a keylogger is a much simpler solution.
I think it is probably simpler to enable the microphone from a web or mobile application than to install a keylogger in the OS. But then if you consider acoustic keyloggers...
With an acoustic keylogger you could scoop up my KeePass master password, but not the actual passwords that I use to log into websites.
Not if it’s sandboxed, but then timing and other side-channel attacks are still easier than using the mike.
That might have been true a few years ago, but they point out that that’s not as true anymore. For example, they suggest one practical application of this technique might be to put your own server in a colocation facility, stick a microphone in it and slurp up as many keys as you can. They also were able to get a version of the technique to work 4 meters away, which is far enough that this becomes somewhat different from having direct physical access. They also point out that laser microphones could also be used with this method.
In the example they used a mobile phone. Going from having owned a microphone to being able to learn a key on a computer is a significant step.
Additionally there are other ways to get audio access. Heating pipes conduct sound waves, which an attacker with a good microphone can exploit.
Window glass vibrates in a way that can be detected from a distance.
Yesterday I noticed a mistake in my reasoning that seems to be due to a cognitive bias, and I wonder how widespread or studied it is, or if it has a name—I can’t think of an obvious candidate.
I was leaving work, and I entered the parking elevator in the lobby and pressed the button for floor −4. Three people entered after me—call them A, B and C - but because I hadn’t yet turned around to face the door, as elevator etiquette requires, I didn’t see which one of them pressed which button. As I turned around and the doors started to close, I saw that −2 and −3 were lit in addition to my −4. So, three floors and four people, means two people will come out on one of the floors, and I wondered which one it’ll be.
The elevator stopped at floor −2. A and B got out. Well, I thought, so C is headed for −3, and I for −4 alone. As the doors were closing, B rushed back and squeezed through them. I realized she hadn’t wanted −2, and had gone out of the elevator absent-mindedly. I wondered which floor she did want. The elevator went down to −3. The doors opened and B got out… and then something weird happened: C didn’t. I was surprised. Something wasn’t right in my idle deductions. I figured it out in the few seconds it took for the elevator to descend to my floor and let me out together with C.
Where did I go wrong? When I knew that B left on −2, I deduced, correctly, that C will get out on −3. But then B came back; the fact of her leaving on −2 turned out to be wrong; yet I didn’t cancel my deduction about C and didn’t return him the “freedom” of leaving either on −3 or on −4. It didn’t even occur to me to do that. Why didn’t it?
It seems important that the new information was a correction of a known fact, and not just some other fact. If I treat the new information "B does not leave at −2" purely as a fact, the consequence for C is "C may leave either on −3 or on −4", which is already clear as it is and not worth updating. No, it seems "B does not leave at −2" has a special character when it comes to correcting the previously-assumed "B left at −2". It comes as a "rollback" of existing information and I need to "roll back" everything I deduced from that information. And that seems hard to do and easy to forget. So it wasn't just a failure to update that I committed. It was a failure to "roll back".
On reflection, this mistake seems like something we might be doing often, and something to keep an eye out for. Is there a name for this mistake, has it been studied?
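One way to make the “roll back” framing concrete is to have every deduction carry a record of the facts it was derived from, so that retracting a fact automatically retracts whatever was deduced from it. A minimal Python sketch of that idea (purely illustrative, not something from the thread; the strings are hypothetical labels for the elevator facts):

```python
premises = set()
derived = {}  # conclusion -> set of premises it depends on

def assert_premise(p):
    premises.add(p)

def derive(conclusion, *needed):
    # record a conclusion together with the premises it rests on
    if all(n in premises for n in needed):
        derived[conclusion] = set(needed)

def retract(p):
    premises.discard(p)
    # roll back every conclusion that depended on the retracted premise
    for c in [c for c, deps in derived.items() if p in deps]:
        del derived[c]

assert_premise("B got out at -2")
derive("C is going to -3", "B got out at -2")
print(derived)   # {'C is going to -3': {'B got out at -2'}}

retract("B got out at -2")   # B comes back into the elevator
print(derived)   # {} -- the deduction about C is rolled back as well
```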
Seems related to the studies where people are told a fact, but it’s in red, which they’re told means it’s not true. After seeing lots of different facts in colours blue or red (blue means true) they’re asked about certain facts, and they’re more likely to remember a false fact as true than a true fact as false—we’re more likely to believe things, and don’t tend to take on contrary evidence as easily.
http://wiki.lesswrong.com/wiki/Cached_thought http://lesswrong.com/lw/k5/cached_thoughts/
Thanks. Cached thoughts seem applicable, but also too broad for what I’m describing. After all, if I failed to update on A and B exiting on −2, and continued thinking C may get out either on −3 or −4, that could also be described as a cached thought which I retained even when new evidence contradicted it. But I didn’t do that, and was in no danger of doing that. I think that it’s the necessity to roll back to the previous state, rather than just, in general, update on new evidence and get rid of the cached thought, that seems important here.
This post is very interesting. It reminds me very much of some variations of the change scam. You seem to be describing something really similar, the rollback of information you speak of is applicable to the counting of change. I also feel like this sort of mistake happens often but I might not notice it. I feel like this deserves a name like rollback deduction failure or something.
Change blindness seems related.
Seems a bit Monty Hall-ish. You updated when B got out on 2 but didn’t retract your update when B re-entered. After your update, C—or maybe you were thinking “the remainder of strangers on this elevator”—had near certain chance of getting out on 3 so when B came back in it looks like you mashed the two together as “the remainder of strangers on this elevator”.
I have no clue if this phenomenon has a name or not.
Reproduced for convenience...
On G+, John Baez wrote about the MIRI workshop he’s currently attending, in particular about Löb’s Theorem.
Timothy Gowers asked:
Qiaochu Yuan, a past MIRI workshop participant, gave a concise answer:
Qiaochu’s answer seems off. The argument that the parent AI can already prove what it wants the successor AI to prove, and therefore isn’t building a more powerful successor, isn’t very compelling, because being able to prove things is a different problem from searching for useful things to prove. It also doesn’t encompass what I understand to be the Löbian obstacle: that being able to prove “if my own mathematical system proves something, then that thing is true” implies that your system is inconsistent.
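For reference, the standard statement as I understand it (my paraphrase, for a theory $T$ satisfying the usual derivability conditions): Löb’s theorem says that for any sentence $\varphi$,
$$\text{if } T \vdash \Box_T\varphi \rightarrow \varphi, \text{ then } T \vdash \varphi.$$
The connection to the obstacle described above: a $T$ that proves “$T$ proves $\varphi$ implies $\varphi$” for arbitrary $\varphi$ in particular proves it for $\varphi = \bot$, i.e., proves its own consistency, and then Löb’s theorem (equivalently, Gödel’s second incompleteness theorem) forces $T \vdash \bot$.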
Is there more context on this?
It’s entirely possible that my understanding is incomplete, but that was my interpretation of an explanation Eliezer gave me once. Two comments: first, this toy model is ignoring the question of how to go about searching for useful things to prove; you can think of the AI and its descendants as trying to determine whether or not any action leads to goal G. Second, it’s true that the AI can’t reflectively trust itself and that this is a problem, but the AI’s action criterion doesn’t require that it reflectively trust itself to perform actions. However, it does require that it trust its descendants to construct its descendants.
As there was some interest in Soylent some time ago, I’m curious what people who have some knowledge of dietary science think of its safety and efficacy given that the recipe appears to be finalized. I don’t know much about this area, so it’s difficult for me to sort out the numerous opinions being thrown around concerning the product.
ETA: Bonus points for probabilities or general confidence levels attached to key statements.
They included vitamin D2 instead of D3. From what I read about vitamin D that seems to be a bad decision.
Given that dogfood and catfood work as far as mono-diets go, I’m pretty hopeful that personfood is going to work out as well. I don’t know enough about nutrition in general to identify any deficiencies (and you kind of have to wait 10+ years for any long-term effects), but the odds are good that it or something like it will work out in the long run. I’d go with really rough priors and say 65% safe (85% if you’re willing to have a minor nutritional deficiency), up to 95% three years from now. These numbers go up with FDA approval.
They mostly seem to, but if they cause a drop in energy or cognitive capability because of some nutrient balance problems, the animals won’t become visibly ill and humans are unlikely to notice. A persistent brain fog from eating a poor diet would be quite bad for humans on the other hand.
Most of the selective breeding has been done while these animals were on simple diets, so perhaps some genetic adaptation has happened as well. Besides, aren’t carnivore diets quite monotonous in nature anyway?
I am not so sure of that. People have been feeding cats and dogs commercial pet food only for the last 50 years or so and only in wealthy countries. Before that (and in the rest of the world, still) people fed their pets a variety of food that doesn’t come from a bag or a can.
In terms of what you kill and eat, mostly yes, but in terms of (micro)nutrients prey not only differs, but also each body contains a huge variety (compared to plants).
There’s probably seasonal variation—Farley Mowat described wolves eating a lot of mice during the summer when mice are plentiful. Also, I’m pretty sure carnivores eat the stomach contents of their prey—more seasonal variation. And in temperate-to-cold climates, prey will have the most fat in the fall and the least in the early spring.
It wouldn’t surprise me if there’s a nutritional variation for dry season/rainy season climates, but I don’t know what it would be.
I actually thought this way at first, but after reading up more on nutrition, I’m slightly skeptical that soylent would work as a mono-diet. For instance, fruits have been suggested to contain chemical complexes that assist in absorption of vitamins. These chemical complexes may not exist in soylent. In addition, there hasn’t really been any long-term study of the toxic effects of soylent. Almost all the ingredients are the result of nontrivial chemical processing, and you inevitably get some impurities. Even if your ingredient is 99.99% pure, that 0.01% impurity could nevertheless be something with extremely damaging long-term toxicity. For instance, heavy metals, or chemicals that mimic the action of hormones.
Obviously, toxic chemicals exist in ordinary food as well. This is why variety is important. Variety in what you eat is not just important for the sake of chemicals you get, but for the sake of chemicals you don’t get. If one of your food sources is tainted, having variety means you aren’t exposed to that specific chemical in levels that would be damaging.
I still think it’s promising though, and I think we’ll eventually get there. It may take a few years, but I think we’ll definitely arrive on a food substitute that has everything the body needs and nothing the body doesn’t need. Such a food substitute would be even more healthy than ‘fresh food’. I just doubt that this first iteration of Soylent has hit that mark.
I’ll be watching Soylent with interest.
It seems to me that Soylent is at least as healthy as many protein powders and mass gainers that athletes and bodybuilders have been using for quite some time. That is to say, it depends on quality manufacturing. If Soylent does a poor job picking their suppliers, then it might be actively toxic.
I’d like to see creatine included, just because most people would see mental and physical benefits from supplementation. The micronutrients otherwise look good. I’ve read things to the effect that real food is superior to supplementation (example), so I don’t think that this is a suitable replacement to a healthy diet. I do think that this will be a significant improvement over the Standard American Diet, and a step up for the majority of people.
The macronutrients also look good—especially the fish oil! 102g of protein is a solid amount for a non-athlete, and athletes can easily eat more protein if desired. Rice protein is pretty terrible to eat, I hope that they get that figured out. I’d probably prefer less carbs and more fat for myself, but I think that’s just a quirk of my own biology.
Well, my estimates for long-term consequences would probably be:
Soylent is fine to consume occasionally -- 98%
Soylent is fine to be a major (but not sole) part of your diet -- 90%
Soylent is fine to be the sole food you consume -- 10%
What are your credentials w.r.t. nutrition?
My credentials are my posts.
I don’t do arguments from authority.
Given that you didn’t mention otherwise, I assumed that you were mostly going off priors in the absence of much domain-specific knowledge, as ThrustVectoring was. I haven’t read enough of your posts to accurately gauge how heavily to weight your opinion—if my assumption is incorrect, I’d appreciate it if you would let me know.
There is no data about long-term effects of Soylent. Everyone has only priors and nothing but priors. By the way, “domain-specific knowledge” is a prior as well.
I am not sure how you are going to gauge the proper weighting for people’s opinions. This is the internet, after all. If I tell you “I’m highly credentialed. Just trust me” :-D will that satisfy you?
On a bit more serious note I prefer arguments that stand on their own, regardless of their source (and its credibility or lack thereof). In fact, nutrition is such a screwed-up field that I would probably downgrade opinions from someone who claims to be a nutritionist...
Eh; it would be medium-strength evidence. Even though I have no way to verify what you say, I don’t think that you have any real incentive or motive to deceive me (given that simple trolls are unlikely to amass >2K karma). :P
(I think we’ve exhausted the usefulness of this subthread, so I probably won’t respond to any replies—tapping out.)
What exactly do you mean by “fine”?
Um. Probably lack of noticeable health/fitness problems. But yes, it’s a vague word. On the other hand, the general level of uncertainty here is high enough to make a precise definition not worthwhile. We are not running clinical trials here.
By the way, the vagueness of “major … part of … diet” is a bigger handwave here :-/
The more I read about nutrition the more I come to the conclusion that most diets do have effects. Some advantages and some disadvantages.
I think there’s a good chance that a diet without any cholesterol might reduce some hormone levels, and some people who look hard enough might see that as an issue.
The first one sounds underconfident (at least if you don’t count people allergic or intolerant to one of the ingredients, nor set a very high bar for what to call “fine”).
The first one can be read as saying that 2% of people occasionally drinking Soylent will have problems because of that. That doesn’t sound outlandish to me.
I’d rank it below existing dietary replacements.
Thanks for your input. Are there any existing dietary replacements you recommend that are similarly easy to prepare? (Soylent Orange seems to be working well for you as a solution, but I don’t think I would actually go to the trouble to put the ingredients together.)
On a related note, do you have any new/more specific criticisms of Soylent, other than those that you presented in this post?
None that I would recommend. None of my criticisms are original; Soylent still seems a very haphazard concoction to me. I do have a bunch of specific issues with Soylent that I haven’t discussed in detail, e.g. lack of cholesterol and saturated fat not being great for hormones. But yeah, I’m not super motivated to get deep into it unless I decide to try to turn the latest variant of Soylent Orange into an actual service. I’m still working on it.
Easy as in time requirements or easy as in money? The kind of fluid food replacement that they use in hospitals is probably better than what Soylent produces.
Liquid diets are not exactly a new idea, and most of them don’t have to be prepared at all but come in portions. Since most of them have been developed for medical use, the price tag is significantly higher. Some of them have been developed for patients who can’t swallow normal food at all, so I doubt they lack anything important that Soylent contains and probably have been much more rigorously tested. If anyone knows studies that have been done on these people, I’m all ears.
Never mind its safety, I do not like its hedonics at all. Basic: if you currently are eating blandly enough that shifting to a liquid mono-diet for any reason other than dire medical necessity is not a major quality of life sacrifice, you need to reprioritize either your time or your money expenditures.
Losing one of the major pleasures of life is not a rational sacrifice. Life is supposed to be enjoyable!
Perhaps eating isn’t a major pleasure of life for everyone.
I’m imagining an analogous argument about exercise. Someone formulates (or claims to, anyway) a technique combining drugs and yoga that provides, in a sweatless ten minutes per week, equivalent health benefits to an hour of normal exercise per day. Some folks are horrified by the idea — they enjoy their workout, or their bicycle commute, or swimming laps; and they can’t imagine that anyone would want to give up the euphoria of extended physical exertion in exchange for a bland ten-minute session.
To me, that seems like a failure of imagination. People don’t all enjoy the same “pleasures of life”. Some people like physical exercise; others hate it. Some people like tasty food; others don’t care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying. And so on.
Strong agreement—I’ve read enough from people who simply don’t find food very interesting to believe that they’re part of the human range.
More generally, people’s sensoriums vary a lot.
It’s a weak analogy as humans are biologically hardwired to eat but are not hardwired to exercise.
True, but two comments. First, let’s also look at the prevalence. I’m willing to make a wild approximation that the number of people who truly don’t care (and never will care) about food is about the same as the number of true asexuals and that’s what, 1-2%?
Second, I suspect that many people don’t care about food because of a variety of childhood conditioning and other psychological issues. In such cases you can treat it as a fixable pathology. And, of course, one’s attitude towards food changes throughout life (teenagers are notoriously either picky or indifferent, adults tend to develop more discriminating tastes).
Preparing food is an annoying hassle which tends to interfere with my workflow and distract from doing something more enjoyable. Food does provide some amount of pleasure, but having to spend the time actually making food that’s good enough to actually taste good (or having to leave the house to eat out) is enough of an annoyance that my quality of life would be much improved if I could just cease to eat entirely.
Soylent’s creator argues that it increases the quality of life benefits of food, since the savings from the Soylent diet meant that when he chooses to eat out, he can afford very good quality food and preparation.
For myself, while I enjoy eating good food, I do not enjoy preparing food (good or otherwise), and in fact I enjoy eating significantly less than I dislike preparing food. So the total event (prepares good food → eats good food) has negative utility to me, other than the nutritional necessity.
Additionally, if one’s schedule is so tight that preparing simple home-made meals (nothing complicated, just stuff that can be prepared with 5 minutes of work) is out of the question, that seems like a fast route to burnout.
Here’s the one pro-Soylent friend I have discussing why he likes it (tl;dr, he’s bad at eating and figures it’ll balance him out):
http://justinsamlal.blogspot.ca/2013/06/soylent-preliminary-stuff.html
I noticed that the micro quantities appear to be very different between Soylent and Jevity. Dropped a post on the Soylent forum here if anyone’s interested.
I’m not ready for my current employer to know about this, so I’ve created a throwaway account to ask about it.
A week ago I interviewed with Google, and I just got the feedback: they’re very happy and want to move forward. They’ve sent me an email asking for various details, including my current salary.
Now it seems to me very much as if I don’t want to tell them my current salary—I suspect I’d do much better if they worked out what they felt I was worth to them and offered me that, rather than taking my current salary and adding a bit extra. The Internet is full of advice that you shouldn’t tell a prospective employer your current salary when they ask. But I’m suspicious of this advice—it seems like the sort of thing they would say whether it was true or not. What’s your guess—in real life, how offputting is it for an employer if a candidate refuses to disclose that kind of detail when you ask for it as part of your process? How likely are Google to be put off by it?
I work at Google. When I was interviewing, I was in the exact same position of suspecting I shouldn’t tell them my salary (which I knew was below market rate at the time). I read the same advice you did and had the same reservations about it. Here’s what happened: I tried to withhold my salary information. The HR person said she had to have it for the process to move forward and asked me not to worry about it. I tried to insist. She said she totally understood where I was coming from, but the system didn’t allow her flexibility on this point. I told her my salary, truthfully. I received an offer which was substantially greater than my salary and seemingly uncorrelated with it.
My optimistic reading of the situation is that Google’s offer is mostly based on the approximate market salary for the role, adjusted perhaps by how well you did at the interviews, your seniority, etc. (these are my guesses, I don’t have any internal info on how offers are calculated by HR). Your current salary is needed for future bookkeeping, statistics, or maybe in case it’s higher than what Google is prepared to offer and they want to decide if it’s worth it to up the offer a little bit. That’s my theory, but keep in mind that it’s just a bunch of guesses, and also that it’s a big company and policies may be different in different countries and offices.
I think it is worth mentioning that “the system won’t allow for flexibility on this” is just about the oldest negotiation tactic in the book. (Along with, “let me check with my boss on that...”)
In reality, there is zero reason Google, or any employer, should need to know your current or past salary information apart from that information’s ability to work as a negotiation tactic in their favor.
Google has something you want (a job that pays $) and you have something they want (skill to make them $). Sharing your salary this early in the process tips the negotiation scales (overwhelmingly) in their favor.
That said, Google is negotiating from a place of immense strength. They can choose from nearly anyone they want, while there is only one Google...
...so, if Google wants to know your salary, tell them your salary. And enjoy your career at one of the coolest companies around. You win. :)
And if salary is what matters, use them as resume points to get a higher salary somewhere else.
There are some things you may want to consider when using this strategy. For example, choose the appropriate amount of time you want to spend at Google. Too short may be suspicious, but too long would be a lost purpose if your goal is to make more money somewhere else later.
Optimize for having the most impressive CV when you leave. This means you should have an impressively sounding job description. Think about your CV items “on project X I worked as Y and my responsibilities were Z”, and try to manage your career within Google to optimize these.
Have a plausible story about why you decided to work for Google, and why you later decided to work somewhere else. This story can also be made up later, but if you prepare it in advance, you can make it more realistic.
The simplest version of this advice would be: if you choose Google with the hope of having an impressive CV and a higher salary later, don’t stay there for the next 10 years in the role of a code monkey working all the time on some completely unknown project that will be cancelled shortly after you leave.
Sidestepping the question:
Interview with other companies (Microsoft, Facebook, etc.) and get other offers. When the competition is other prospective employers, your old salary won’t much matter.
The rationale behind salary negotiations is best expanded upon by patio11′s “Salary Negotiation: Make More Money, Be More Valued” (that article’s well worth the rent).
In real life, the sort of places where employers take offense at your not disclosing your current salary (or generally, at salary negotiations; that is, they’d hire someone else if he’s available more cheaply) are not the places you want to work for: if they’re putting selection pressure toward downscaling salaries, all your future coworkers are going to be, well, cheap.
This is anecdotally not true for Google; they can afford truckloads, if they really want to have you onboard. So this is much more likely to come from standardized processes. Also note that in Google’s case, decisions are delegated to a board of stakeholders, so there isn’t really one person who can be put off due to salary (and they probably handle the hire/no-hire decisions entirely separately from the salary negotiations).
Also, the company will probably be less likely to buy you a decent computer for work, install a new server when your department needs it, or hire new people when there is more work than you can handle. Even if you somehow don’t care about money for yourself, you probably do at least care about having decent working conditions. Maybe the just-world hypothesis makes you believe that lower salary will somehow be balanced by better working conditions, but it’s probably the other way round.
I’m a manager at a financial firm and I’ve hired people. I’d consider it pretty normal not to want to say. “Everyone” knows that trying to get the other person to name a number first is a common negotiating tactic, no real grownup is going to take it personally or get upset about this.
I don’t know how “normal” a company Google is in this way, but I’d guess it’s pretty normal.
If you are challenged on this, you can try stating it as a rule: “I’m not prepared to discuss my current salary, I’m here to talk about working for Google.” Or, “As a policy I don’t disclose my current salary. I’m sure you understand.” Or make up some blah about how that’s proprietary information for your current employer and you don’t feel comfortable disclosing it.
If they absolutely refuse to process your application without this (which is a bad move on their part if they really want you, but some companies are stubborn that way), other options are to fudge your number upwards somehow, though personally I wouldn’t try the ones that actually involve telling a literal lie:
Give them a wide range of expectations instead of your current salary. Say that of course it depends on the other details of the offer, any other offers you might get, etc.
Roll in as much stuff as you plausibly can (adding in bonuses or other moneylike benefits, and making an adjustment if the cost of living in Googleland is higher than where you live now). Example: I could add my salary of $80k, my last year’s bonus of $5k (or next year’s bonus, or my average bonus in percentage terms, whichever is highest), and my $2k transit benefits for a total of $87k.
Round up to the nearest $10k and say it’s an approximate figure. So if I make $87k I might say I’m in the ballpark of $90k.
State a range (e.g. if I made $69k I might say I make something in the high 5 figures, or somewhere near the $70-80k range)
Lie outright, but plausibly.
My guess is that a mild refusal would be acceptable to Google. They are unlikely to be put off by a change of subject.
A hard refusal might annoy them, if they persist in asking.
I suggest naming a very high figure first, to gain benefit from the anchoring effect. Otherwise, mentioning your current salary will make it the anchor for the negotiation. Google has a reputation for paying high salaries.
If you are looking for advice on negotiation, I suggest searching for ‘anchoring’ as well as ‘negotiation’, to get more evidence-based advice.
Good luck.
If it’s helpful to know what other Google employees make my compensation details are here.
This reminded me to ask about a similar question: I am currently interviewing. Assuming I get an in-person interview, that will involve a long flight. I feel like I shouldn’t tell my current employer that I’m interviewing until I have an offer, but in order to hide it I presumably will have to take holiday on fairly short notice, have a plausible reason for why I’m taking it, and generally act like I’m not taking a long flight to an interview. There’s a chance that I’ll have to do this multiple times. (Though ideally I’d take multiple in-person interviews in the same trip.)
I don’t particularly like the idea of doing this. It feels deceitful and stressful. How bad an idea would it be to just let my employer know what’s going on?
Extremely bad. People have been fired or denied promotion because of this. Don’t even tell any of your colleagues.
I am not discussing the legal aspects of this, but you will probably be perceived as not worth investing in the long term. Imagine that your interview fails and you decide to stay. Your current employer is not going to trust you with anything important anymore, because they will be expecting you to leave soon anyway.
Okay, this may sound irrational, because you are not your employer’s slave, and technically you are (and anyone else is) free to leave sooner or later. But people still make estimates. It is in your best interest to pretend to be a loyal and motivated employee, until the day you are 100% ready to leave.
This is part of the human nature; what we have evolved to do. Even your dislike for deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can’t find the link now.
Here is the post by Robin Hanson.
I found it by searching site:overcomingbias.com social network. The key was generalizing from the specific LinkedIn to “social network,” though I can’t say why I thought to do that.
Thank you! This was probably the one I remembered.
It is only deceitful if you haven’t made an honest effort to improve your situation in your current company. It is just as deceitful to stay silent and not give your employer a chance to increase your salary or improve your position.
It depends on the type of company, of course. There are those that see you as an exchangeable human resource, where it may be appropriate to see the company as a slave owner from whom the truth of your escape has to be hidden.
But there are companies where honesty about work situations is seen as interest in the company, and critique is used as feedback to improve the environment.
Salary negotiations will always be tough, though. Strictly comparing offers is the only reliable way to sell yourself. Everything else is falling prey to the salary negotiation tricks of the business world.
EDIT: I’m from Germany, so my view may be country-specific.
In this case, I don’t want to leave; it’s just that there are things that I want more than I want to stay. Not that it couldn’t be improved, but they probably can’t offer anything to change my mind.
If you are in such company, that’s great! Try to improve things; provide the feedback.
But don’t mention the fact that you are doing interviews with another company.
I can’t find it either. Nothing on OB, nothing in Google for ‘Linkedin “Robin Hanson”’ or sharpened to add ‘hypocrisy’. Sure it was Linkedin he was talking about?
I think I ADBOC. It’s not like the “disliking to be deceitful” gene evolved to make its bearers lose the game.
Certainly there are risks to being honest, but there are also benefits. Admittedly, the most salient one to me right now is “I don’t want to treat my current employer poorly”, and I’m not sure that lying about going to interviews is actually significantly worse than merely not telling them about interviews.
For people who properly compartmentalize, it helps to win the game. By signalling dislike of deceit, they gain other people’s trust… and then at the right moment they do something deceitful and (if they are well-calibrated) most likely profit from it.
It’s the people in the valley of bad rationality who may lose the game when they realize all the consequences and connections, and try to tune their honesty up to eleven. (For example by telling their boss that they would be willing to quit the company if they had a better offer from somewhere else, and that they actually look at the available information about other companies.)
Disliking deception also makes people more cautious and frugal about it, which is probably beneficial too.
It depends a lot on your company, so I think your inside view will be better than our outside view. I told my employer when I went out to do a tryout with CFAR, and that went well. One reason I told my boss was that, if I were hired, I’d need to scramble to get all my projects annotated well enough to be able to pass off seamlessly, and I didn’t want her to be left in the lurch or to make any plans that hinged on having her quant around for the next month. (Hiring sometimes took a while at my old company.)
My boss really appreciated my being forthright and it saved me a lot of tsuris. I think it also worked better because it was expected that people in my role (Research Associate) wouldn’t stick around forever.
Yes, not leaving my employer in the lurch is important to me, but I do feel like they expect me to be around for a while. I’m glad to hear of your positive experience.
Almost everyone finds an explicit refusal to answer offputting. Don’t do it. But that doesn’t mean that you should actually answer. Usually a good choice is to answer a different question, such as to make them an offer.
I would advise googling to find average salaries for similar positions, especially with google.
I don’t feel qualified to answer your question, though if I were to make a guess, I wouldn’t expect them to be put off by refusal. Assuming Google behaves at least somewhat rationally, they should at this point have an estimate of your value as an employee and it doesn’t seem like your current salary would provide much additional information on that.
So, the question is, to what extent Google behaves rationally. This ties to something that I always wonder whenever I read salary negotiation advice. What is the specific mechanism by which disclosing current salary can hurt you? Yes, anchoring, obviously. But who does it? Is the danger that the potential employer isn’t behaving rationally after all and will anchor to the current salary, lowering the upper bound on what they’re willing to offer? Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn’t hurt you at all)?
I would guess this one. It can make you ask less, with almost zero effort on the employer’s side; they don’t even have to read your answer. So the cost:benefit ratio of asking you this question is huge. And even if it doesn’t work on some people, it most likely does on average, so it can save a lot of money.
The mechanism that seems most important to me doesn’t really involve any sort of cognitive bias much. It goes like this. You are on (say) $50k/year. You are good enough that you’d be good value at $150k/year, but you’d be willing to move if offered $60k/year, if that were all you could get. You apply for a new job and have to disclose your current salary to every prospective employer. So you get offers in (say) the $60k-80k range because everyone knows, or at least guesses, that that’s enough to tempt you and that no one else will be offering much more. You might get a lot more if you successfully start a bidding war, but otherwise you’re going to end up paid way less than you could be.
Note that everyone in this scenario acts rationally, arguably at least. Your prospective employer offers you (say) $75k. This would be irrational if you’d turn that down but accept a higher offer. But actually you’ll take it. This would be irrational if you could get more elsewhere. But actually you can’t because no one else will offer you much more than your current salary.
(You could try telling them that you have a strict policy of not taking offers that are way below what you think you’re worth, in the hope that it’ll stop them thinking you’d accept an offer of $75k. But you might not like the inference they’d draw from that and your current salary.)
Obvious note: Of course people care about lots of other things besides money, your value to one employer isn’t the same as your value to another, etc. This has only limited effect on the considerations above.
I was recently in a similar position, but I nonetheless managed to negotiate a large salary increase by taking a job in a different city, quoting the salary level that I wanted, and pleading cost-of-living increases when I was asked to justify it. They did negotiate me down by about $5000, and I wouldn’t say I’m quite at market rates yet for my level of experience, but it did seem to successfully anchor the negotiations on my asking price rather than my previous salary.
The new city actually did have a higher cost of living than the old one, but I get the impression that the hiring manager didn’t care about the actual rate so much as he cared about having a rationale that looked good on paper.
Well, assuming your example numbers, if my work would bring $150k+$x/year and the company didn’t hire me because I refused to take $60k/year, instead demanding, say, $120k/year (over twice the current salary, how greedy), then they just let $30k+$something/year walk out the door. Would they really do that (assuming rational behavior blah blah)?
I don’t see how they would benefit from playing the game of salary-negotiating chicken to the bitter end. Having a reputation for not offering market salaries for people with unfortunate work history? That actually sounds like it could be harmful.
The company doesn’t really know your true value. If you are really worth $150k, it raises the question why you can’t get your present employer to pay you that wage. Your present employer has a lot more information about your skills than they do.
Something’s brewing in my brain lately, and I don’t know what. I know that it centers around:
-People were probably born during the Crimean War/US Civil War/The Boxer Rebellion who then died of a heart attack in a skyscraper/passenger plane crash/being caught up in, say WWII.
-Accurate descriptions of people from a decade or two ago tend to seem tasteless. (Casual homophobia) Accurate descriptions of people several decades ago seem awful and bizarre. (Hitting your wife, blatant racism) Accurate descriptions of people from centuries ago seem alien in their flat-out implausible awfulness. (Royalty shitting on the floor at Versailles, the Albigensian Crusade, etc...)
-We seem no less shocked now by social changes and technological developments, and no less convinced that everything major under the sun has been done and only tweaks and refinements remain, than people of past eras were.
I guess what I’m saying is that the Singularity seems a lot more likely, in terms of factual support, than it otherwise might have been, but we won’t realize we’re going through it until it’s well underway, because our perception of such things will also wind up going faster for most of it.
I do expect the future to be different.
I could imagine a future where people see illegalizing LSD as strange as illegalizing homosexuality.
I can imagine that Google’s rent-an-AI-car service will completely remove personal ownership of cars in a few decades. This removes cars as status symbols, which means that they will be built on other design criteria like energy efficiency.
I can imagine a constructed language possibly overtaking English.
There are a lot of other things that are more vague.
Drug prohibition laws introduced in the 20th century are a nice counterexample to reactionaries’ claim that Cthulhu only swims left, BTW.
(Edited to replace “always” with “only”—I misremembered that quote.)
Cthulhu always swims left isn’t an observation that on every single issue society will settle on the left’s preferences, but that the general trend is leftward movement. If you interpreted it as that, the fall of the Soviet Union and the move away from planned economies should be a far more important counterexample.
Before continuing I should define how I’m using left and right. I think them real in the sense that they are the coalitions that tend to form under current socioeconomic conditions when, due to the adversarial nature of politics, you compress very complicated preferences into as few dimensions (one) as possible. Principal component analysis makes for a nice metaphor for this.
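(To make the metaphor concrete, here is a toy Python sketch with made-up data, purely illustrative: project high-dimensional “preference vectors” onto the single direction of greatest variance and read that off as the one compressed axis.)

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 hypothetical voters, 6 hypothetical policy dimensions
prefs = rng.normal(size=(200, 6))
# make a couple of issues correlated, as real political preferences tend to be
prefs[:, 1] = 0.9 * prefs[:, 0] + rng.normal(scale=0.3, size=200)

centered = prefs - prefs.mean(axis=0)
# first principal component = the direction capturing the most variance
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                 # the single compressed "left-right" dimension
positions = centered @ axis  # each voter's position along it
print(positions[:5])
```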
Back to Cthulhu. As someone whose preferences can be described as right wing, I would be quite happy with returning to 1950s levels of state intervention, welfare and relative economic equality in exchange for that period’s social capital and cultural norms. Controlling for technological progress, obviously. Some on the far right of mainstream conservatives might accept the same trade. This isn’t to say I would find it a perfect fit, not by a long shot, but it would be a net improvement. I believe most Western far right people would accept this trade; most Western far left people would not accept this trade. And in America at least, centrists would be uncomfortable both with that level of state intervention and the social norms of the 1950s.
Now that we have this claim about revealed preferences, let’s invoke a very simple heuristic. Imagine you have two players playing a zero-sum game of politics, and they are offered the chance to move the game back to the position it was in 50 moves ago. One player accepts, the other refuses. Ceteris paribus, which one do you think is winning?
The “who would prefer to return 50 years back?” argument is interesting, but I think the meaning of “winning” has to be defined more precisely. Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful than you. Would you describe this situation as your victory?
In some sense, yes, it is an improvement of your relative power. In other sense, no, I am still more powerful. You may be hopeful about the future, because the first derivative seems to work for you. On the other hand, maybe the second derivative works for me; and generally, predicting the future is a tricky business.
But it is interesting to think about how the time dimension is related to politics. I was thinking that maybe it’s the other way round; that “the right” is the side which self-identifies with the past, so in some sense it is losing by definition—if your goal is to be “more like yesterday than like today”, then tautologically today is worse according to this metric than yesterday was. And there is a strong element of returning to the past in some right-wing movements.
But then I realized that some left-wing movements have this component too. I remember communists emphasising that millennia ago humans lived in perfectly egalitarian hunter-gatherer societies (before the surplus value was taken by the evil slavers / feudal lords / capitalists), so when the true communism comes, this ancient harmony will be restored. Similarly, some feminists (maybe just a small minority of them, I don’t know) have stories about how exactly the ancient matriarchal societies were organized, so overthrowing patriarchy would kinda restore this ancient order.
At this moment my working hypothesis is that “returning to the perfect past” is simply a universal human bias, and the main political difference is where exactly your Golden Age is located. Then it would seem that the right wing puts the Golden Age in the more recent past, while the left wing prefers prehistoric societies.
That makes it pretty likely that in its heart, the left is about returning towards our hunter-gatherer instincts and abandoning as much as possible of our disappointing civilization, while the right is about insisting on some specific adaptations to scarcity. Something like what Yvain said, with the connotational objection that the category of danger does not include only zombies, but also criminals or dysfunctional bureaucracies, which are everyday reality for some people. Generally, as we improve economically, we can afford to remove some of the adaptations to scarcity; the trade-offs that are no longer necessary. But sometimes while doing so we fuck up things horribly and the scarcity returns; often in a way that university professors don’t notice, simply because it does not happen to them.
I would describe it as me playing very well for the past 50 years and the game going my way.
Dropping the “always” might lead to less confusion on this point.
I only used “Cthulhu always swims left” because that is how army1987 termed it. Moldbug says “Cthulhu may swim slowly. But he only swims left.”
That formulation has the same problem. Like “always swims left”, “only swims left” suggests that every observed movement is leftwards.
Unless there is an implicit distinction between purposeful movement, analogous to swimming, and some drift due to chance.
I doubt most people who read the phrase “Cthulhu only swims left” would pick up on that unspoken distinction, though I could be wrong.
(I’ve corrected it now.)
Why does politics compress issues into a single linear dimension?
I suppose it makes sense in a two-party system, but why do parliamentary systems with several small parties mainly have parties on the extremes, rather than mainly having parties with orthogonal preferences that could ally with either major party? In principle, a Green party ought not to care much about the left-right axis, but in practice it cares very much.
I don’t know, it might be an artifact of representative systems in the West where the all important thing is getting the majority vote. “Left” vs. “Right” being a strong signal of who your allies tend to be seems to work pretty well descriptively for most people’s political identities and preferences.
A slight nitpick: I wouldn’t describe politics as anything remotely near zero-sum. The actions of the players of a country’s game of politics have very far-reaching effects on the citizens and residents of that country, and in some cases of the residents of the entire world.
The actions of the players definitely do affect the world at large in ways outside the scope of the game, which makes it about as far from zero-sum as it could possibly be. I’m pretty sure that this changes the outcome of your thought experiment dramatically.
That depends on the definition of “left.” What is your definition?
(I am skeptical that “left” is a useful concept.)
Moldbug sometimes seems to define it as Puritan or Protestant more generally. But at other times he seems to say that the two are the same, but not by definition.
In the early 19th century, the Temperance movement was made up of the same people seeking the abolition of slavery and women’s suffrage. Surely this counts as left? A century later it won by having a broader base, but it was still Protestant. Indeed, much of the appeal was as a way to attack Catholic immigrants.
Marijuana and cocaine were banned at about the same time as alcohol. One interpretation is that this was a side effect of the temperance movement. (This is clearer in the case of cocaine, which was a dry run for prohibition; less clear with marijuana, which was banned later.) Another interpretation is that the bans were a race war (just like alcohol). That sounds right-wing, but what is your definition? The prohibition of LSD was much later and seems much more clearly right-wing.
I am talking only of America. I believe that prohibitions elsewhere were imported. If you think America is right-wing, that makes them seem right-wing.
Added: I forgot opium. It was nationally banned with cocaine, but it was banned in SF much earlier, when the Temperance movement was a lot weaker. I think most local Prohibitions were left-wing, but opium in SF might not fit that.
I think the ban of marijuana in 1937 was a win for DuPont business interests. From a right/left perspective of the 19th century that’s difficult to parse as either left or right.
Yes, the commercial aspects probably pushed it over the line despite it not being banned earlier, but the fact that lots of things were banned suggests that there is probably a common cause and that the commercial aspects were only secondary. Whether the common cause is that one group opposed everything, or that one group opposed alcohol and moved the Overton window, is harder to decide.
I would consider legalizing drugs, legalizing prostitution and taking away women’s votes to be well worth voting for. If I believed in voting, that is.
What’s your issue with women’s voting rights?
If I had to guess, I’d say that as Konkvistador is against democracy and voting in general, he wants voting rights to be denied to everyone, and as such, starting with 51% of the population is a good step in that direction.
Am I correct, or is there something more?
Sure, but the process would likely have hysteresis depending on which group you remove first, and “women” doesn’t seem like the best possible choice to me—even “people without a university degree” would likely be better IMO.
Maybe it is because of our instincts that scream at us that every woman is precious (for long-term survival of the tribe), but the males are expendable. Taking the votes away from the expendable males could perhaps get popular support even today, if done properly. The difficult part in dismantling democracy are the votes of women.
(Disclaimer: I am not advocating dismantling democracy by this comment; just describing the technical problems.)
If you stop thinking of democracy as sacred and start seeing letting various groups vote as a utility calculation, you start looking at questions like how various groups vote, how politicians attempt to appeal to them, and what effect this has on the way the country winds up being governed.
Don’t forget to consider what sorts of political expression are available to those who are not allowed the vote.
Sure, but I’d guess voting patterns vary much more with age, education, and income than with gender.
It’s not just a question of whether they vary, it’s whether they vary in a way that systematically correlates with better (or worse) decisions. Also there are Campbell’s law considerations.
I think my point still stands.
Well, education is subject to Campbell’s law, but I suspect Konkvistador wouldn’t object to raising the voting age, or imposing income requirements.
Another strike against utilitarianism! One person’s modus ponens is another person’s modus tollens.
I chose those examples in particular because in the United States the movements behind prohibition, making prostitution illegal, and expanding the franchise to women were basically one and the same.
I disapprove of voting, obviously. I chose these examples because in the US the same movements argued for making these three things, among others, the way they are in the first place.
But these are not, seemingly, as different as, say, the discovery of LSD. Or psychotropics. Or the establishment of homosexuality as relatively innate. Or the invention of the car, or the very first creation of a constructed language.
The invention of the car wasn’t that big a deal. At the beginning it wasn’t clear that cars are all that great. It took time for people to figure out that cars are much more awesome than horse carts.
I think you underrate the effects of legalizing LSD. If you say you legalize all drugs, you have to ask yourself questions such as why pharma companies pay a lot of money for clinical trials when all substances can be sold legally. As a society you have to answer those questions.
As for the claim that homosexuality is relatively innate, I think you have to keep in mind how vague the term homosexuality happens to be. At the moment homosexuality seems to be an identity label. To me it’s not clear that this will be the case in 200 years.
A lot of men who fuck other men in prisons don’t see themselves as homosexual. Plenty of people who report that they had pleasurable sex with a person of the same sex don’t label themselves as homosexual.
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind, and that doesn’t mean just hugging a person for a minute. I can imagine a society in which casual touches between people are a lot more intimate than they are nowadays, and in which behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
If you run twin studies you find that being overweight has a strong genetic factor. The same goes for height. Yet the averages of both changed a lot during the last two hundred years. The notion of something being innate might even be a remnant of what Nietzsche called the God in the grammar. It might not be around in 100 years in the form it exists in nowadays.
This futuristic society of casual male intimacy was known as the 19th century.
In it, in the Russia of the 1950s, and in the modern Middle East, you could observe men dancing together, holding hands, cuddling, sleeping together and kissing.
More generally, ISTM that displays of affection between heterosexual men correlate negatively with homophobia within each society but positively across societies. (That’s because the higher your prior probability for X is, the more evidence I need to provide to convince you that not-X.)
If I look at that description it seems to me that the current way of seeing homosexuality won’t be permanent.
It seems being homosexual became a separate identity to the extent that people focused on not engaging in certain kinds of intimacy to signal that they aren’t gay.
If the stigma against homosexuality disappears, homosexuality as identity might disappear the same way.
The word homosexuality is even in decline in Google Ngrams.
There’s a distinction occasionally drawn between homosexual and gay; homosexual is the sexual preference, gay is the cultural lump/stereotype populated mainly by homosexuals. So the ‘metrosexual’ thing in the early 00s was a kind of fad for heterosexual men adopting gay culture.
This distinction is mainly drawn to point out that the political right’s objection is largely to ‘gay’ rather than to ‘homosexual’.
What does “sexual preference” mean exactly?
Do you mean that the criminals in prisons who rape other criminals are gay but not homosexual?
Are you implying that neither of the terms is actually about whether a man has sex with another man?
Under this distinction: Men who prefer to have sex with men rather than women are homosexual. Men who prefer to have sex with women rather than men are heterosexual.
Prison sex may be homosexual (that’s a matter of fuzzy definitions), but (under this distinction) definitely isn’t gay.
No, the political right’s objection is to people engaging in homosexual sex and to popular culture telling people this is a normal and healthy thing to do. The subtler objection is to it telling people that if they find 19th century style male bonding appealing, it means that they’re “gay” and should thus engage in homosexual sex.
I see no reason to believe that is the case; gay culture, by its nature of growing out of highly liberal communities during the 60s and 70s, is highly hedonistic and permissive, both things the political right objects to already. That they strongly dislike (perceived) core attributes of this culture and the associated homosexuality looks like a strictly simpler hypothesis than that they dislike (perceived) core attributes of this culture, and also homosexuality.
In short: Occam appears to be on my side, so you’ll need some evidence for that.
Read what traditionalists actually write for one thing. They’re against hedonistic behaviors and that includes homosexual sex (this is not the only reason they’re against it). Notice that this was true long before the current cultural concept of what it means to “act gay”.
Taboo that word. Is being left-handed normal?
ISTM the point of that word is often to sneak connotations in.
What? ISTM it’s right-wingers who say things like that. EDIT: I guess I had misread that (I had read “should” as ‘are likely to’ rather than ‘had better’), in which case… what??? I can’t remember anyone ever suggesting anything remotely like that with a straight face, and I know plenty of left-wingers; are you sure you aren’t attacking a straw man?
They tend to phrase it as encouraging people to “find out if they’re gay”, i.e., encourage people to declare themselves “gay” if what amounts to 19th century style male bonding appeals to them. Furthermore, once someone has been declared “gay” it’s considered a horrendous hate crime to discourage him from engaging in homosexual sex.
Never heard that either.
And once someone has been declared “straight” it’s considered a horrendous hate crime to discourage him from engaging in heterosexual sex (except by fundamentalist Christians and the like, but that also applies to gay sex), so what’s your point?
Encouraging “gays” to become “straight” is considered a hate crime, encouraging “straights” to become “gay” is framed as encouraging them to “find out if they’re gay” and considered commendable.
Also, at least in the US, encouraging “straights” to hold off until marriage is considered old-fashioned but not nearly as bad as attempting to deconvert “gays”. The latter has in fact been made illegal in California.
What the hell are you talking about? AFAICT nearly all straight people I know would find such an, ahem, encouragement quite annoying at the very best, and most of them would be utterly disgusted by it. “I’m flattered, but I’m straight” said with a poker face is about as positive a reaction as I’d ever anticipate seeing.
You and Eugine seem to be talking past one another;
He’s saying that society tends to see it as (at worst) a bit of a faux pas for a gay man to try to get a straight to switch teams whereas a gay converter is one step off from an SS officer in terms of the hatred they get.
You, on the other hand, seem to be talking about how annoyed straight guys get when being harassed by gays trying to convert them, and presumably vice versa. That people get pissed off, with good reason, when people try to dictate terms to them on whom they desire.
Oddly enough, both of you are right. It is much more acceptable for gay men to be “straight chasers” and try to get straight guys to “come out” than it is for Christians to be “deconverters” and try to get gay guys to “find Jesus,” at least everywhere I’ve lived (admittedly, my favorite cities tend to be pretty deep blue). People confronted with this kind of obnoxious behavior don’t appreciate it in either case, but the straight guy has to be a lot more careful not to say anything “offensive” to the guy grabbing him (God forbid throwing a punch) than the gay guy who can tell the pastor to go to hell and walk off with the full force of the law / media behind him.
There seems to be a pretty big asymmetry here that you’re ignoring. Christian “deconverters” aren’t simply saying “Hey, why don’t you try straight sex? You might end up enjoying it.” They’re saying “There is something deeply wrong with your sexual orientation and you will suffer eternally unless you sincerely attempt to change it.” I doubt that attempts to convert straight men result in higher rates of depression or suicide among them.
The appropriate analog of the gay “straight chasers” you’re talking about would be a straight woman who attempts to “convert” gay guys by, say, trying to convince them to sleep with her, maybe because she likes the challenge. Do you think such a person would also be seen as one step off from an SS officer?
BTW, IME straight men who manage to convince lesbians to sleep with them usually inspire awe, not disgust. (I can’t think of any concrete examples of the gender-reversed situation, which you described.)
Actually he said it is “considered commendable”, but I see your point.
When did this actually happen? All the arguments I’ve seen boil down to either the “it shows up on brain scans and is thus innate” fallacy, or “if you don’t agree it’s innate you must be an EVIL HOMOPHOBE!!!11!!”
What? I can’t see why knowing that genetics (assuming that’s what’s meant by “innate”) affects how likely people are to commit violent crimes would make me dislike violent criminals any less, nor why knowing that (say) the concentration of lead in the air also affects that would make me dislike them any more.
Well, there are a lot of people arguing that we should go easy on violent criminals since “it’s not their fault”. I don’t agree with this argument, but a lot of people seem to be convinced by it.
Twin studies. (Though by that standard lots of things are relatively innate.)
Relative to what? If “lots of things” are “relatively” something, your standards are probably too low.
Yes, twin studies give a simple upper bound to the genetic component of male homosexuality, but it is very low. As an exercise, you might try to name 10 things with a lower genetic contribution. But I think defining “innate” as “genetic” is a serious error, endemic in all discussions of human variety.
Added, months later: Cochran and Ewald suggest as a benchmark leprosy, generally considered an infection, not at all innate. Yet it has (MZ/DZ) twin concordance of 70⁄20. For something less exotic, TB is 50⁄20. That’s higher than any reputable measure of the concordance of homosexuality. The best studies I know are surveys of twin registries: in Australia, there is a concordance of 40⁄10 for Kinsey 1+ and 20⁄0 for Kinsey 2+; in Sweden, 20⁄10 and 5⁄0.
Since everybody in this subthread is talking about the numbers without mentioning them, from Wikipedia:
Numbers like “.34–.39” imply great precision. In fact, that is not a confidence interval, but two point estimates based on different definitions. The 95% confidence interval does not exclude 0 genetic contribution. I’m getting this from the paper, table 1, on page 3 (77), but I find implausible the transformation of that raw data into those conclusions.
Ok, taboo “relatively innate”. The common analogy used in the ‘civil rights’ arguments is to things like skin color. By that standard homosexuality is not innate.
I can’t speak for Bayeslisk, but I’d say it means that things other than what happens to you after your birth have a non-negligible effect (by which standard your accent is hardly innate). But I agree it’s not a terribly important distinction.
I probably agree. (But of course it’s a continuum, not two separate classes. Skin colour also depends on how long you sunbathe and how much carotene you eat, yadda yadda yadda.)
The problem is that human social mores seem to change on the order of 20-40 years which is consistent with the amount of time it takes a new generation of people to take the helm and for the old generation to die out. I have personally seen extreme societal change within my own country of origin, change that happened in only the span of 30 years. In comparison, Western culture over this same time has seemed almost stagnant (despite the fact that it, too, has undergone massive changes such as acceptance of homosexuality).
However, by some estimates, we are already just 20-40 years away from the singularity (2035-2055). This seems like too short a time for human culture to adapt to the massive level that is required. For instance, consider a simple thing like food. Right now, the idea of eating meat that has been grown in a lab seems unsettling and strange to many people. Now consider what future technology will enable, step-by-step:
Food produced by nanotech with simple feedstock, with no slow and laborious cell growth required.
Food produced by nanotech with household waste, including urine and feces (possibly the feces of other people as well), thus creating a self-contained system.
Changing human biochemistry so that waste is simply recycled inside our bodies, requiring no food at all, and just an energy source plus some occasional supplements.
Uploading brains. Food becomes an archaic concept.
There is likely to not be a very large span of time between each of these steps.
Absent mass mind uploading, I doubt that food in some relatively recognizable form will ever die out, or that we will ever find it economically feasible to eat food known to be made from human waste. Sunlight and feedstock are cheap, people get squicked easily, and stuff that’s stuck around for a long time is likely to continue sticking around. You may as well say we’ll outgrow a need for fire, language, or tools; indeed, I’d believe any of those over the total abandonment of food.
Fecal implants do seem to have some health benefits.
There are people who do drink their own urine. Spirulina can be grown on urine.
Algae also have the advantage that they are single-cell organisms, which means that it’s easy to introduce new genes into them via DIY-bio efforts.
That means you can easily change the way the stuff tastes and have it produce vitamins and other substances. If you want a cheap source of THC, you can transplant the relevant genes needed to produce THC into an alga and grow it at home in a way that isn’t as easily discovered as growing hemp.
You can trade different algae species and get more interesting compounds than THC.
What are fecal implants?
Few people do, and I doubt that it will catch on; spirulina can also be grown on runoff fertilizer, which will probably sound more appealing to most people.
I think the parent post means fecal transplants which are a way to reseed the gut biota with something hopefully more suitable.
Oh, makes sense. That’s not food, though; that’s a very easy organ(?) transplant.
You don’t transplant the organ but the feces. They get processed in the intestine. Stuff that enters the body to be processed in the intestine is food for some definition of “food”.
But once you accept the goal to get feces into the gut, the way is only a detail that’s open to change.
By the time that stuff is in the colon—which is what gets transplanted—it’s not food any more. At least not for humans.
No, I know that the colon is not transplanted; the flora is. Hence the (?). Also, it hopefully doesn’t get processed but rather survives to colonize the gut. Further, an enema would probably be far more effective, given its lack of strong acid and pepsin designed to kill the flora.
Sorry, typo. Should be fecal implants or stool transplants.
Sounding appealing is a question of marketing. Plenty of people prefer organic food that is grown with animal feces over food grown with “chemical” fertilizer. They even pay more money for the product.
I also think you underrate the cost of fertilizer for some poor biohacker in Nairobi who has plenty of access to empty bottles. Human urine should also be pretty cheap to buy in third world megacities.
Access to cheap natural gas and oil is also central to the current way of doing agriculture. Without access to those resources at cheap prices, resource reuse might be a bigger deal.
Good point. I doubt that that extends to abandoning food altogether, though.
If there’s a causal link here, then it’s possible the biggest problem with social change and technological advances would be due to increased longevity, in which case it might not matter how long the time span is… even if there were decades, it wouldn’t be enough.
In some sci-fi settings they have rules where people above a certain ‘age’ can’t directly enter politics anymore. Although I’m not sure exactly how effective that would be, since they would still hold power and influence, and human nature seems to be that we allow more power and influence to the elderly than to the young.
Vinge said something of the sort—that the Singularity would be unimaginable from its past, but after the Singularity (he’s assuming one which includes humans), the path to the Singularity will be known, and it will seem quite plausible.
That’s something a little different—I think that’s already talked about here. Maybe under the Hindsight Bias? At any rate, I’m not talking about looking back; I’m talking about looking from within. The march of history is almost always too slow to see, and even with a significant speedup it’d still probably seem “normal”. Only right at the end would it be clear that a Singularity is occurring.
I want my family to be around in the far future, but they aren’t interested. Is that selfish? I’m not sure what I should do, or if I should even do anything.
I don’t think the odds are good. Getting serious about cryonics will break a whole bunch of implicit assumptions about the order of life, and people who haven’t signed out from the norms and conventions layer of mainstream society to the degree of your average outcast LessWronger are going to be keenly aware of the unspoken rules that are being broken.
Telling people that there’s reason to think cryonics is a valid option and that you support it is good, but trying to get to the bottom of all disagreements beyond that seems like taking it on yourself to make a religious fundamentalist relative accept evolution. It’s probably not going to happen, because the surface level argument is tied up to a head full of invisible machinery that won’t respond to reasoning about technical feasibility.
I wonder how you framed it.
Do you think that any resuscitation technology, including defibrillation is a sin (or use their favorite objection against cryonics)?
How about one that enables resuscitation on a longer time frame? How long is still OK? Hours? Days? Years?
Would you take a treatment that makes one feel younger and live longer?
Would you approve of being cooled down for a day or two until a life-saving liver/heart/kidney transplant is available? What if it requires cooling deep enough that your heart stops beating?
Their replies, if any, might give you a hint of their true objections. If they are truly religious in nature, and your family attends church regularly, consider having a talk with your local pastor (or whatever religious authority figure they look up to). To paraphrase Ender’s Game and HPMoR, children’s opinions have zero weight, so try to engage someone actually being listened to.
They weren’t arguing that it wouldn’t work. They think that being revived is selfish, that spending money on having your head frozen is selfish, and my mom says she wants to die. The old death=good cached thought seems to be one of the main driving factors. She also said there’d be no place for her in the future, that the world might be inconceivably different and strange, and that she would be unable to deal with it.
When I explained that some thousand people have done it, and a lot more are signed up, she said that was only “insane rich eccentrics” and when I explained that ordinary people do it, she said some nasty things about those people, along the lines of calling them nuts.
My main question was related towards figuring out if I should keep pursuing it, and try to change their minds, or if I should respect their wishes. I don’t know what the right thing to do in this situation is- because saving lives is very important, but respecting others’ rights is also pretty important. But the difficulty of this situation is compounded, because I’m angry with her and I don’t want to give up because I’m angry.
First, there is no objectively right thing to do. At this point you are expending effort on an essentially selfish goal: saving your mother’s life against her current wishes. Not that “selfish” is in any sense bad or negative. But if you actually cared about saving lives in general, you would apply your effort where it is more likely to pay off. Your current position is no more defensible than hers: you selfishly want her to have a chance to live in some far future with you, she selfishly disregards your wishes and wants to expire when it’s her time. Certainly telling her that her wishes are less valid than yours is not likely to convince her. You can certainly point out that by deciding to forgo cryo she behaves just as selfishly as you do by wanting her to sign for cryo. Maybe then you and her can discuss what “selfish” means to each of you, and maybe have some progress from there. Of course, you should be fully prepared to change your mind and do your best to steelman her arguments. Can you make them better than she does, have her agree and then discuss potential weaknesses in them?
I already am. This is in addition to that.
It is definitely a good idea to talk to her about what selfish means, because my mother and I have differing views on what is selfish and what is not.
I’m interested to know what comes out of these discussions and if you guys manage to converge. Keep us posted.
As a first step I would recommend that you stop being angry with her.
Also keep in mind that for a true-believer Christian cryonics is basically trying to cheat oneself out of heaven—not a very appealing idea :-/
Have you read this? Might give you some useful tools to speak against that idea.
Would you rather act on your own preferences, or some lesswrongian’s?
Anger is temporary, so not a great basis for long term decisions. Also, anger will affect your tone and therefore make you less convincing.
I’ve read it.
I feel my own judgement is suspect on this occasion. I don’t know. I want to help her and she’s alternating between being incredibly blase and being furious with me. It’s not like I can just point her at some books to read, because she and my dad don’t like to read. And the things that convinced me, my parents regard as rubbish or nonsense and get-your-head-out-of-space-go-get-married-and-be-normal-goddamnit!
If I continue to pursue this, either the relationship between my parents and me will suffer and they won’t choose to freeze themselves, or they’ll choose to freeze themselves and our relationship won’t suffer. Large risk, large benefit.
My other consideration is to attempt to be subtle, plant the seeds in their heads that give them the sense that maybe the world doesn’t work how they think it does (I managed to convince my dad that the earth was old and that dinosaurs did not roam the earth with humans this way, so it has some merit.)
This aside’s quite important; it sounds like the inferential distance between you and your parents is huge. Trying to bridge it in one fell swoop is quite ambitious, so I’d err towards a slow & subtle approach. (Not that I have much experience with this problem!)
I think subtlety usually works the best with stubborn individuals, but might easily backfire now that you’ve been in their face. If you were to use that strategy, I’d recommend you let the issue settle for a while so that they don’t immediately see what you’re trying to do. If they realize you’re manipulating them, that might make them even less susceptible to your ideas. Planning is the key, unless it’s an emergency.
Don’t try to push an idea in a way that costs you something.
When it comes to convincing others it helps to understand the other person. Nobody gets angry if you show genuine interest in how they think the world works. Listen a lot.
It might also help to reduce the number of things that make her furious with you. If those didn’t exist, it might be easier to convince her on other questions.
If someone really believes in going to heaven after they die, then being locked up in some state between being alive and dead is an issue.
Various religious people do believe that proper burials are important for letting the soul pass on.
I hope MIRI is thinking about how to stop Johnny Depp.
http://trailers.apple.com/trailers/wb/transcendence/
(YouTube version for people who don’t want to download QuickTime.)
Huh, a major Hollywood movie about superintelligence, uploading and the Singularity whose creators seem like they might actually have a mild clue about what they’re talking about. Trailers can always be misleading, of course, but I have to say that this looks very promising—I expect to enjoy this one a lot.
I’m not sure how to react to that. While the trailer does get some points correct (an intelligence explosion is dangerous, and much smarter than you things can likely do stuff that you can’t even imagine) it looks like it is essentially from the technological-progress-is-bad-because-hubris end of science fiction, akin to the rebooted Outer Limits. And this seems to ignore the implicit issue that uploads are one of the safer results, not only because they would be near us in mindspace, but because the incredible kludge that is the human brain makes recursive self-improvement less likely.
Given that the film probably doesn’t end up with all of humanity being dead, it probably rather overstates than understates the safety of uploads.
I didn’t get that vibe: it looked like the terrorists blowing up AI labs were being depicted as being bad (or at least not-good) guys, whereas some of the main characters seemed genuinely conflicted and torn about whether to try to upload their friend in an attempt to save him, and whether to even keep him running after he’d uploaded. If they had been going for the hubris angle, I would have expected a lot more of a gung-ho attitude towards building potential superintelligences.
And maybe I’m reading too much into it, but I get the feeling that this has a lot more of a shades-of-gray morality than is normal for Hollywood: e.g. it’s not entirely clear whether the terrorists really are bad guys, nor whether the main character should have been uploaded, etc.
Well, there’s only so much that you can pack into a two-hour movie while still keeping it broadly accessible. If it manages to communicate even a couple of major concepts even semi-accurately, while potentially getting a lot of people interested in the field in general, that’s still a big win. A movie doesn’t need to communicate every subtlety of a topic if it regardless gets people to read up on the topic on their own. (Supposedly science fiction has historically inspired a lot of people to pursue scientific careers, particularly related to e.g. space exploration, though I don’t know how accurate this common-within-the-scifi-community belief is.)
And you can put even less in a two-and-a-half-minute trailer.
If that were it (a couple of major concepts conveyed semi-accurately, the rest entertainment/drama), I’d agree. However, “imagine a machine with a full range of human emotion” (quote from the trailer) and the invariable AI-stopped-by-stupid-gimmicks ending (there’s gonna be a happy ending) are more likely to create yet another Terminator-style distortion/caricature to fight. The false concepts that get planted along with the semi-accurate ones can do large net harm by muddling the issue using powerful visual saliency cheats (how can “boring forum posts” measure up against flashy Hollywood movies?).
“Oh, you’re into AI safety? Yea, just like Terminator! Oh, not like that? Like Transcendence, then?” anticipatory facepalm
I expect that any people whose concepts get hopelessly distorted by this movie would be a lost cause anyway. Reasoning correctly about AI risk already requires the ability to accept a number of concepts that initially seem counterintuitive: if you can’t manage “this doesn’t work the way it does in movies”, you probably wouldn’t have managed “an AI doesn’t work the way all of my experience about minds says a mind should work” either.
“hopelessly” probably not. But that doesn’t mean that the distortion is insignificant.
Granted. Still, the general public is never going to have an accurate understanding of any complex concept, be that concept evolution, climate change, or the Singularity. The understanding of non-specialists in any domain is always going to be more or less distorted. The best we can hope for is that the popularizations that make the biggest splash are even semi-accurate so that the popular understanding won’t be too badly distorted: and considering everything that Hollywood could have done with this movie, this looks pretty promising.
There’s been some prior discussion here about the problem of uncertainty of mathematical statements. Since most standard priors (e.g. Solomonoff) assume that one can do a large amount of unbounded arithmetic, issues of assigning confidence to, say, 53 being prime are difficult, as are issues connected to open mathematical problems (e.g. how does one estimate how likely the Riemann hypothesis is to be true in ZFC?). The problem of bounded rationality here seems serious.
I’ve run across something that may be related, and at minimum seems hard to formalize. For a mathematical statement A, let F(A) be “A is provable in ZFC” (you could use some other axiomatic system, but this seems fine for now). Let G(A) be “A will be proven in ZFC by 2050”. Then one can give examples of statements A and B where it seems like P(F(A)) is larger than P(F(B)) but the reverse holds for P(G(A)) and P(G(B)).
The example that originally came to mind is technical: let A be the statement “ZPP is contained in P^X where X is an oracle for graph isomorphism” and let B be the statement “ZPP is contained in P^Y where Y is an oracle that answers whether Ackermann(n)+1 has an even or odd number of distinct prime factors.” The intuition here is that one expects Ackermann(n)+1 to be essentially random in the parity of its number of distinct prime factors, and a strong source of pseudorandom bits forces collapse of ZPP. However, actually proving that Ackermann(n)+1 acts this way looks completely intractable. In contrast, there’s no strong prior reason to think graph isomorphism has anything to do with making ZPP type problems easier (aside from some very minor aspects) but there’s a lot of machinery out there that involves graph isomorphism and people thinking about it.
So, is this sort of thing meaningful? And are there other more straightforward, less complicated or less technical examples? I do have an analog involving not math but space exploration. P(Life on Mars) might be lower than P(Life on Europa) even though P(We discover life on Mars in the next 20 years) might be higher than P(We discover life on Europa in the next 20 years) simply because we send so many more probes to Mars. Is this a helpful analog or is it completely different?
How about the statements:
A: “The number of prime factors of 4678946132165798721321 is divisible by 3”
B: “The number of prime factors of 987621698732657896873267896843212687784984154654685432159878453213659873198765416416341587498736741457481265896813218268787921687651685765164549687962165468765632132185913574684613213557 is divisible by 2”
P(F(A)) is about 1⁄3 and P(F(B)) is about 1⁄2.
But it’s far more likely that someone will bother to prove A, just because the number is much smaller.
ETA: To clarify, I don’t expect it to be particularly hard to prove or disprove, I just don’t think anyone will bother.
Whether someone will bother really depends on why someone wants to know. You can simply type “primefactors of 9876216987326578968732678968432126877” into Wolfram Alpha and get your answer. It’s not harder than typing “primefactors of 4678946132165798721321” into Wolfram Alpha.
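For what it’s worth, here is a minimal local alternative to the Wolfram Alpha query (my own sketch using sympy; it assumes the 22-digit number factors in a reasonable amount of time on your machine):

    from sympy import factorint

    n = 4678946132165798721321
    factors = factorint(n)              # {prime: exponent}

    with_multiplicity = sum(factors.values())
    distinct = len(factors)

    print(factors)
    print(with_multiplicity % 3 == 0)   # statement A, counting prime factors with multiplicity
    print(distinct % 3 == 0)            # statement A, counting distinct primes

Both counts are shown because “the number of prime factors” in the original statement is ambiguous between the two.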
I don’t know if this was due to an edit, but the second number in Khoth’s post is far larger than 9876216987326578968732678968432126877, and indeed Alpha won’t factor it.
To be honest I’m sort of surprised that Alpha is happy to factor 4678946132165798721321, I’d have thought that that was already too large.
The reason nobody will bother is that it’s just one 200-digit number among 10^200 other similar numbers. Even if you care about one of them enough to ask Wolfram Alpha, it’s vanishingly unlikely to be that particular one.
Technically it is harder, since there are more digits; apart from the additional work involved this also makes more opportunities for mistakes. In addition, of course, the computer at the other end is going to have to do more work.
If there’s some new hypothesis, it’s likely to be proven or disproven quickly. If you look at an old one, like the Riemann hypothesis, that people have tried and failed to prove or disprove, it probably won’t be proven or disproven any time soon. Thus, it’s not hard to find something that is more likely to be proven quickly than the Riemann hypothesis but is still less likely to be true.
Let A = “pi is normal”, and B = “pi includes in it as a contiguous block the first 2^128 digits of e”. B is more likely to be provable in ZFC, simply because A requires B but not vice versa. A is vastly more likely to be proven by 2050. Is this a valid example, or do you see it as cheating in some way?
I’m not sure if this question is meaningful/interesting. It may be, but I’m not seeing it.
Suggested repair of your example: A= “Pi is normal” and B= “Pi includes as a contiguous block the first 2^128 digits of e within the first Ackermann(8) digits” which should do something similar.
Doesn’t the fact that A implies B mean that it’s very easy to prove B once you’ve proved A?
You’re right, I blundered and this example is no good.
Eliezer said in his Intelligence Explosion Microeconomics that Google is maybe the most likely candidate to start the FOOM scenario.
I’ve gotten the impression that Google doesn’t really take this Friendliness business seriously. But beyond that, what is Google’s stance towards it? On the scale of “what a useless daydreaming”, “an interesting idea but we’re not willing to do anything about it”, “we may allocate some minor resources to it at some point in the future”, or something else?
http://lesswrong.com/lw/4rx/agi_and_friendly_ai_in_the_dominant_ai_textbook/
This book’s second author is Peter Norvig, director of research at Google.
What if they just want to appear unbiased, neutral and give every view point their fair share of the book? Doesn’t mean they endorse it in any way.
It’s difficult to know from the outside how Google spends its money on undisclosed projects.
I just got a mail that SciCast is live and ready.
Are there solid examples of people getting utility from Lesswrong? As opposed to utility they could get from other self-help resources?
The Less Wrong community is responsible for me learning how to relate openly to my own emotions, meeting dozens of amazing friends, building a career that’s more fun and fulfilling than I had ever imagined, and learning how to overcome my chronic bouts of depression in a matter of days instead of years.
Who knows? I’m an experiment with a sample size of one, and there’s no control group. In the actual world, other things didn’t actually work for me, and this did. But people who aren’t me sometimes get similar things from other sources. It’s possible that without Less Wrong, I might still have run across the right resources and the right community at the right moment, and something else could have been equally good. Or maybe not, and I’d still be purposeless and alone, not noticing my ennui and confusion because I’d forgotten what it was like to feel anything else.
I did self-help before I joined LessWrong, and had almost no results. I’d partially credit LessWrong with changing me in ways such that I switched my major from graphic design to biology, in an effort to help people through research. I’ve also gotten involved in effective altruism in my community, starting the local THINK club for my college, which is donating money to various (effective) charities. I have a lovely group of friends from the LessWrong study hall who have been tremendously supportive and fun to be around. There are a number of other small things, like learning about melatonin, which fixed my insomnia, etc., but those are more a result of being around people who are knowledgeable about such things, not necessarily LessWrong people.
In short, yes, it is helpful.
What would solid examples look like? Are there solid examples of people getting utility from other self-help sources? Can you think of any?
Less Wrong isn’t just a self-help resource. I enjoy the conversational norms and topics here, and that’s utility for me, but can you measure it?
I can make you cash offers to abandon it until you take one. This is leaky but workable.
True. It’s surprisingly difficult to think about the hypothetical figures since I’m not short on cash, can’t seem to make myself much happier spending more money, and still don’t know any viable alternative to LW. It also seems thinking about this in terms of a subscription fee instead of getting a cash offer changes the figures significantly, which I guess tells us something about the diminishing marginal utility of money.
This makes me wonder if there are any threads here discussing how to convert money into experiential happiness. ETA: yes there are.
I am wary of this type of advice because it almost always aims itself at the average person. Someone who is not average might not find such advice useful, and it could turn out to be misleading and harmful.
Also a large part of it comes from psychology papers which are, um, not an unalloyed source of truth.
Yes, but in the absence of significant countervailing evidence one should not assume that they are so different as to render the advice useless.
Well, that depends on the person, doesn’t it? Some are sufficiently different and some are not.
Generic advice is generic. Only you can prevent wildfires.. err.. decide whether it is appropriate specifically for you or not. My point is really that you shouldn’t treat it as “scientifically established” gospel and get unhappy if you are weird enough for it not to apply.
Guessing here is a bad idea though, because it is specifically in relation to an area where people are known to be bad at predicting their own responses.
with a big dose of empiricism.
Solid as in empirical, no. But I feel like I get a lot out of LW. It’s a good source for finding other resources. What do you want help in, if anything?
Understanding if reading lesswrong is more or less a waste of time than other internet stuff I read.
I think that depends a lot on how you interact with it. You can read a post on commitment contracts and adopt the technique, or you can read the post and just accept the new information. The impact on your life will be very different.
It prob’ly depends on what the other Internet stuff you read is.
That’s a pretty broad counterfactual.
I used TDT to get in the habit of flossing my teeth every night—it worked beautifully.
I’m not sure if TDT is available elsewhere as I gave up on self-help books many years ago.
Also I’m not sure of the health benefits of flossing, but still.
I don’t know about self-help books, but the moral advice to choose as if you are choosing more than the immediate consequences is found in moral philosophy.
“I want to be the kind of agent that chooses X (habitually), therefore I will choose X (now)” reasoning can be found in virtue ethics, although the argument there is based on habit and character development rather than being an algorithm. Aristotle discusses the importance of practicing good decisions in the Nichomachean Ethics: “Similarly we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.” (source)
“I want to live in a world where people choose X, therefore I will choose X” is a line of reasoning I’ve heard connected to the Jewish moral idea of tikkun olam, though I don’t have a source on that.
I agree that is similar to TDT, but I would say it is too vague and general for it to have been much use to me. Part of the advantage of LessWrong—or any internet-based medium—is that people can comment and sharpen ideas.
Where do I find local Bitcoin discussion here?
There’s some on IRC at least.
I know about that obviously, but others may not so it was nice of you to mention it.
What did you want to discuss?
http://www.google.com/search?q=bitcoin+site:lesswrong.com
(SCNR.)
A while back, I posted in an open thread about my new organisation of LW core posts into an introductory list. One of the commenters mentioned the usefulness of having videos at the start and suggested linking to them somehow from the welcome page.
Can I ask who runs the welcome page, and whether we can discuss here whether this is a good idea, and how perhaps to implement it?
What’s so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don’t care so much about rationality, and specifically I don’t really see why having the human-style half-assed implementation of it around is considered a good idea.
“Rationality” as used around here indicates “succeeding more often”. Or if you prefer, “Rationality is winning”.
That’s the idea. From the looks of it, most of us either suck at it, or only needed it for minor things in the first place, or are improving slowly enough that it’s indistinguishable from “I used more flashcards this month”. (Or maybe I just suck at it and fail to notice actually impressive improvements people have made; that’s possible, too.)
[Edit: CFAR seems to have a better reputation for teaching instrumental rationality than LessWrong, which seems to make sense. Too bad it’s a geographically bound organization with a price tag.]
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation. Or at least to measure winning, so we could say whether CFAR lessons contribute to winning.
Sometimes income is used as a proxy for winning. It has some problems. For our purposes I would guess a big problem is that changes of income within a year or two (the span over which CFAR has been providing workshops) are mostly noise. (Also, for employees this metric could be more easily optimized by preparing them for job interviews, helping them optimize their CVs, and pressuring them into doing as many interviews as possible.)
The biggest issue with using income as a metric for ‘winning’ is that some people—in fact, most people—do not really have income as their sole goal, or even as their most important one. For most people, things like having social standing, respect, and importance, are far more important.
That, and income being massively externally controlled for the majority of people. The world, contrary to reports, is not a meritocracy.
Huh?
If you mean that people don’t necessarily get the income they want, well, duh...
No, it isn’t, but I don’t see the relevance to the previous point.
I think the point was government handout programs. This is a massive external control on many people’s incomes, and it is part of how the world is not a meritocracy.
(Please note, I ADBOC with CellBioGuy, so don’t take my description as anything more than a summary of what I think he is trying to say.)
He might also be saying that most people don’t have an obvious path for marginal increases to their income.
This is closer to what I was getting at. Above someone mentioned government assistance programs, which is also true to a point but not really what I meant (another ‘disagree connotatively’).
I was mostly going for the fact that circumstances of birth (family and status not genetics), location, and locked-in life history have far more to do with income than a lot of other factors. And those who make it REALLY big are almost without exception extremely lucky rather than extremely good.
You what with CellBioGuy..?
Should be “ADBOC”—“agree denotationally, but object connotatively”. (ygert is probably thinking of “disagree” instead of “object”.)
Ah, thanks. I usually think of such things as “technically correct but misleading”—that’s more or less the same thing, right?
Yes.
Yes, my mistake. I was in a rush, and didn’t have time to double check what the acronym was. Edited now.
I think I could make an argument that “object” has a semantic advantage over “disagree” but one advantage is that “adboc” can be pronounced as a two-syllable word.
Yes, this is true. You cannot meaningfully compare incomes between people that, say, live in developed vs. developing countries.
The value of income varies pretty widely across time and place (let alone between different people), so using it as a metric for “winning” is highly problematic. For instance, I was mostly insensitive to my income before getting married (and especially having my first child) beyond being able to afford rent, internet, food, and a few other things. The problem is, I don’t know of any other single number that works better.
Since in the local vernacular rationality is winning, you need no measures: the correlation is 1 by definition :-/
It’s a very bad proxy as “winning” is, more or less, “achieving things you care about” and income is a rather poor measure of that. For the LW crowd, anyway.
Talk of “rationality as winning” is about instrumental rationality; when Viliam talks about the correlation between rationality and winning, it’s not clear whether he means instrumental rationality (taking the best decisions towards your goals) or epistemic rationality (having true beliefs), but the second one is more likely.
But even if it’s about instrumental rationality, I wouldn’t say that the correlation is 1 by definition: I’d say winning is a combination of luck, resources/power, and instrumental rationality.
Exactly. And the question is how much can we increase this result using the CFAR’s rationality improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? The values 1% and 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains of rationality among the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of “proofs that rationality works”, but it could also mean that 1 person in 10 gains 1000% and 9 of 10 gain nothing; in which case you would have a lot of “proofs that rationality doesn’t work” and a few exceptions that could be explained away (e.g. by saying that they were so talented that they would get the same results also without CFAR).
It would be also interesting to know the curve for increases in winning by increases in rationality. Maybe rationality gives compound interest; becoming +1 rational can give you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because by removing the non-rational parts of your life you gain additional resources. Or maybe it is actually the other way round; becoming +1 rational gives you 100% more winning, and becoming +2 and +3 rational only gives you additional 10% and 1% more winning, because you have already picked all the low-hanging fruit.
The shape of this curve, if known, could be important for CFAR’s strategy. If rationality follows the compound interest model, then CFAR should pick some of their brightest students and fully focus on optimizing them. On the other hand, if the low-hanging fruit is more likely, CFAR should focus on some easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
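As a toy illustration of the two shapes just described (the percentages are the hypothetical ones from this comment, not data, and each step’s gain is treated as multiplicative over the previous level):

    # Hypothetical incremental gains in "winning" at rationality levels +1, +2, +3.
    compound_model = [0.10, 0.30, 1.00]       # gains grow as techniques combine
    low_hanging_model = [1.00, 0.10, 0.01]    # gains shrink once the easy wins are taken

    def cumulative_multiplier(gains):
        total = 1.0
        out = []
        for g in gains:
            total *= 1 + g
            out.append(round(total, 2))
        return out

    print(cumulative_multiplier(compound_model))     # [1.1, 1.43, 2.86]
    print(cumulative_multiplier(low_hanging_model))  # [2.0, 2.2, 2.22]

Which of the two output lines looks more like reality is exactly the strategic question above.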
By the way, for the efficient altruist subset of LW crowd, income (its part donated to effective charity) is a good proxy for winning.
Also, rationality might mostly work by making disaster less common—it’s not so much that the victories are bigger as that fewer of them are lost.
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let’s assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates values “success” and “failure”. For a typical smart but not rational person, the coin generates 90% “success” and 10% “failure”. For an x-rationalist, the coin generates 99% “success” and 1% “failure”. If your experiment consists of doing one coin flip and calculating the winners, most winners will not be x-rationalists, simply because of the base rates.
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and get 10 “successes”, then the players have overall success chances of about 35% vs. 90%. That seems like a greater difference, although the base rates would still dwarf this.
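A minimal sketch of that arithmetic in Python (the 90%/99% per-flip rates and the game lengths are just the assumed numbers from the example above):

```python
# Toy model: per-flip success rates compound when the game
# requires every flip to succeed.
def overall_success(per_flip_success: float, n_flips: int) -> float:
    """Probability of getting 'success' on all n independent flips."""
    return per_flip_success ** n_flips

for p in (0.90, 0.99):       # smart-but-not-rational vs. x-rationalist
    for n in (1, 10, 100):   # length of the game
        print(f"p={p}, n={n}: {overall_success(p, n):.4%}")
```

At n=10 this reproduces the roughly 35% vs. 90% figures; at n=100 the gap widens to roughly 0.003% vs. 37%, which is the effect the rest of this comment points at.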
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations which are complex in a way that makes hundreds of such failures possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure would be unlikely.
I just don’t know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
I suspect rationality does a lot to prevent likely failures as well as unlikely failures.
This is getting better, slowly. Workshops are going on in Melbourne sometime in early 2014 (February?), and they’re looking to do more internationals going forward.
Try this. Do you care about achieving your values?
Rationality is the process of humans getting provably better at predicting the future. Evidence-based medicine is rational. “Traditional” and “spiritual” medicine are not rational when their practitioners and customers don’t really care whether their impression that they work stands up to any kind of statistical analysis. Physics is rational: its hypotheses are all tested and open to retesting against experiment, against reality.
When it comes to “winning,” it needs to be pointed out that rationality, when consciously practiced, allows humans to meet their consciously perceived and explicitly stated goals more reliably. You need to be rational to notice that this is true, but it isn’t a lot more of a leap than “I think therefore I am.”
One could analyze things and conclude that rationality does not enhance humanity’s prospects for surviving our own sun’s supernova, or does not materially enhance your own chances of immortality; I imagine strong cases could be made for both. While being rational, I continue to pursue pleasure and happiness and satisfaction in ways that don’t always make sense to other rationalists, and to the extent that I find satisfaction and pleasure and happiness, I don’t much care that other rationalists do not think what I am doing makes sense. But ultimately, I look at the pieces of my life, and my decisions, through rational lenses whenever I am interested in understanding what is going on, which is not all the time.
Rationality is a great tool. It is something we can get better at, by understanding things like physics, chemistry, engineering, applied math, economics and so on, and by understanding human mind biases and ways to avoid them. It is something that sets humans apart from other life on the planet, and something that sets many of us apart from many other humans on the planet, being a strength many of us have over those other humans we compete with for status and mates and so on. Rationality is generally great fun, like learning to drive fast or to fly a plane.
And if you use it right, you can get laid, and then have more data available for determining if that’s what you REALLY want.
So far, humans are life’s best bet for surviving the day our Sun goes supernova.
Because we don’t have a better one (yet?).
Not to detract from your point, but that’s pretty unlikely. Unless it becomes a part of a tight binary star several billion years down the road, when it has turned into a white dwarf. Of course, by then Earth will have been destroyed during the Sun’s red giant stage.
This is a pedantic point in context, but our solar system almost certainly isn’t going to develop into a supernova. There’s quite a menagerie of described or proposed supernova types, but all result either from core collapse in a very massive star (more than eight or so solar masses) or from accretion of mass (usually from a giant companion) onto a white dwarf star.
A close orbit around a giant star will sterilize Earth almost as well, though, and that is developmentally likely. Though last I heard, Earth’s thought to become uninhabitable well before the Sun develops into a giant stage, as it’s growing slowly more luminous over time.
Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (they’re too resource intensive), shouldn’t we do that? Say the AI awakes, figures out that the probability of intelligence given life is very high, but that the probability of life staying around given the destructive tendencies of human intelligence is not so good. Call it an ecofascist AI if you want. Wouldn’t that be desirable iff the probabilities are as stated?
As a human, I find solutions that destroy all humans to be less than ideal. I’d prefer a solution that curbs our “destructive tendencies”, instead.
But is there a rational argument for that? Because on a gut level, I just don’t like humans all that much.
I think you’re wrong about your own preferences. In particular, can you think of any specific humans that you like? Surely the value of humanity is at least the value of those people.
Then there may, indeed, be no rational argument (or any argument) that will convince you; a fundamental disagreement on values is not a question of rationality. If the disagreement is sufficiently large—the canonical example around here being the paperclip maximiser—then it may be impossible to settle it outside of force. Now, as you are not claiming to be a clippy—what happened to Clippy, anyway? - you are presumably human at least genetically, so you’ll forgive me if I suspect a certain amount of signalling in your misanthropic statements. So your real disagreement with LW thoughts may not be so large as to require force. How about if we just set aside a planet for you, and the rest of us spread out into the universe, promising not to bother you in the future?
CAE_Jones answered the first part of your question. As for the second part, the human-style half-assed implementation of it is the best we can do in many circumstances, because bringing to bear the full machinery of mathematical logic would be prohibitively difficult for many things. However, just because it’s hard to talk about things in fully logical terms, doesn’t mean we should just throw up our hands and just pick random viewpoints. We can take steps to improve our reasoning, even with our mushy illogical biological brains.
Following up on my post in the last open thread, I’m reading Understanding Uncertainty which I think is excellent.
I would like to ask for help with one thing, however.
The book is in lay terms, and tries to be as non-technical as possible, so I’ve not been able to find an answer to my question online that hasn’t assumed my having more knowledge than I do.
Can anyone give me a real-life example of a series of results where the assumption of exchangeability holds and it isn’t a Bernoulli series?
Let’s say that we have a box of weighted coins. Some are more likely to fall heads; others tails. We pull one out and flip it many times. The flips are identically distributed, so we can switch the order. They are independent conditional on knowing which coin was chosen, but ahead of time they are dependent: each flip tells us something about the choice of coin and thus about the other flips. De Finetti’s theorem says that all exchangeable sequences take this form.
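A minimal numerical sketch of this mixture-of-coins model (the two bias values and the uniform prior over coins are invented for illustration): the probability of a heads/tails sequence depends only on how many heads and tails it contains, so the flips are exchangeable, yet they are not independent, so the sequence is not a Bernoulli series.

```python
# Box with two weighted coins, drawn with equal probability (assumed values).
coins = [0.2, 0.8]     # probability of heads for each coin
prior = [0.5, 0.5]     # probability of drawing each coin

def seq_prob(seq: str) -> float:
    """Unconditional probability of a sequence like 'HT',
    marginalizing over which coin was drawn from the box."""
    total = 0.0
    for p_coin, p_heads in zip(prior, coins):
        p_seq = 1.0
        for flip in seq:
            p_seq *= p_heads if flip == "H" else 1 - p_heads
        total += p_coin * p_seq
    return total

print(seq_prob("HT"), seq_prob("TH"))      # equal: exchangeable
print(seq_prob("HH"), seq_prob("H") ** 2)  # 0.34 vs 0.25: dependent, not Bernoulli
```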
Added: Actually, de Finetti’s theorem only applies to infinite sequences. Here’s an example of a finite exchangeable sequence that doesn’t fit the theorem: draw balls from a box without replacement. This can only go on until the box is empty. And of course you can combine the two: randomly choose a box with at least n balls and then pull out n balls without replacement.
Added: A crazy model that is exchangeable is Pólya’s urn. It is not obvious that it is exchangeable, let alone that the conclusion of de Finetti’s theorem applies. Pólya’s urn contains balls of two colors, the initial numbers of which are known. Every time you draw one out, you put k of the same color back. If k=1, this is drawing with replacement; if k=0, this is drawing without replacement, both of which are exchangeable. And if k is a larger integer, it is also exchangeable.
Here is an idea of how to prove the exchangeability. What if we are somehow confused about the size of the balls, and think that they are r times bigger than they really are? Then each time we remove an actual ball, we’re removing 1/r part of a confused ball. That’s like removing 1 confused ball and putting back 1-1/r balls. Thus k=1-1/r is like drawing without replacement, but with this confusion. This is exchangeable. Thus the model is exchangeable for infinitely many values of k, which verifies some identities for infinitely many values of k, which is probably enough to verify it as an algebraic identity.
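A quick sanity check of the exchangeability claim, as a sketch (the starting urn contents and the particular values of k below are arbitrary choices): for each k, every ordering of the same multiset of colors gets exactly the same probability.

```python
from fractions import Fraction
from itertools import permutations

def polya_seq_prob(seq: str, init_counts, k: int) -> Fraction:
    """Exact probability of drawing the color sequence `seq` (e.g. 'RRB')
    from an urn starting with init_counts = (reds, blacks), where each
    drawn ball is removed and k balls of its color are put back."""
    reds, blacks = init_counts
    prob = Fraction(1)
    for color in seq:
        total = reds + blacks
        if color == "R":
            prob *= Fraction(reds, total)
            reds += k - 1      # net change: -1 drawn, +k returned
        else:
            prob *= Fraction(blacks, total)
            blacks += k - 1
    return prob

# k=0: without replacement; k=1: with replacement; k>=2: Pólya's urn.
for k in (0, 1, 2, 5):
    orderings = {"".join(p) for p in permutations("RRB")}
    probs = {polya_seq_prob(s, (3, 2), k) for s in orderings}
    print(k, probs)   # one probability per k: all orderings agree
```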
A lot of things modern “conservatives” consider traditional are recent innovations barely a few decades or a century old. Chesterton’s fence doesn’t apply to them.
Examples?
I would guess that this comment came out of the discussion about homosexuality and male to male intimacy between friends further down in the thread.
Drug prohibition is also something that’s roughly a century old, and Konkvistador wrote a post saying that he would, under some circumstances, be okay with getting rid of it.
So? For most people “traditional” means “what my grandparents used to do”. Very very few people have a sense of history that extends far back.
This was an observation of when the argument of Chesterton’s fence applies.
Why doesn’t Chesterton’s fence apply to “recent innovations”? It applies to everything whose origin you don’t know; the time frame doesn’t matter much.
Most recent (social) innovations would be more comparable to coming to a field and observing the trampled remains of a fence.
A stronger case for Chesterton’s fence can be made for older innovations than for recent ones. I guess I should write an essay to explain the arguments for this; I forgot this wasn’t widely talked about outside a certain IRC channel.
Hm. I would expect the reverse. The Chesterton’s Fence argument is about knowing the purpose of something and being able to understand the consequences of changing it. With older traditions both are harder. Granted, there is the offsetting factor that over the course of years (or centuries) no one was bothered enough to change it—an evolutionary argument, sort of—but an appeal to the wisdom of ancestors is not the same thing as Chesterton’s Fence.
This is turning the argument on its head.
The point isn’t that knowing a purpose for something is a reason to keep the thing. If we know the reason for it and judge it good, of course we shall keep it. Banal. If we know a reason for a thing, and judge it bad, then the argument isn’t an encouragement to keep it either. No: Chesterton’s Fence is the argument that our not knowing the reason behind something is a reason to keep it. Applying it to things for which we easily learn why they are there is pretty much redundant as far as heuristics go.
Let me quote directly from his book The Thing (1929). In the chapter entitled “The Drift from Domesticity” he writes:
What you say here is reasonable, but it is completely unrelated to your comment that started this thread. If, as in your original comment, people are mistaken about the age of their traditions, they are ignorant of the origins, and thus Chesterton’s advice to learn the origin applies.
This isn’t directly related to that argument, no; like I said, I would need an essay to explain that (and I’ve started writing one). Here I was correcting a misreading of the classical Chesterton’s fence.
Kinda. I actually read it as an argument for passivity unless you know what you’re doing.
Not knowing the reason for something is a “reason to keep it”—well, it’s a reason to not do anything. If that something gets destroyed by, say, a force of nature, would Chesterton’s Fence tell you to rebuild it? No, I don’t think so.
Chesterton’s Fence is primarily a warning against hubris, against pretending to contain all the reasons of the world in your head. It is, basically, an entreaty to consider unknown unknowns, especially if you have evidence of their workings in front of you.
“Force of nature” is misleading in the context where the argument is likely to be applied: no social norms or institutions subsist without maintenance. But let me keep your example and tweak it a bit: if you could easily prevent the force of nature from destroying the fence, would you say the argument encourages you to do so?
Here’s a Bayesian counterargument for cultural practices:
Culture is more likely to have retained the instruction “Do X!” but not retained knowledge of X’s original purpose, if that purpose is not relevant any more.
If X’s purpose is still relevant, then retaining and teaching about X’s original purpose provides greater incentive for learning and teaching X, making X more likely to be retained. But if X’s original purpose is not still relevant, then retaining knowledge of the original purpose is a disincentive to learn and teach X itself, making X less likely to be retained. So, given that X is still taught, learning that its original purpose is known is evidence that it is still relevant; whereas learning that it is not known is evidence that it is not still relevant.
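A toy Bayes calculation of that argument, as a sketch (all of the probabilities below are invented purely for illustration): conditioning on “X is still taught”, learning that X’s original purpose is known pushes the posterior toward “still relevant”, while learning that the purpose is forgotten pushes it the other way.

```python
# R = "original purpose still relevant", K = "original purpose is known",
# T = "the practice X is still taught". All numbers are assumed.
p_R = 0.5                      # prior that the purpose is still relevant
p_T_given = {                  # retention probabilities P(T | R, K)
    (True, True): 0.9,         # relevant and purpose known: easiest to keep
    (True, False): 0.6,        # relevant but purpose forgotten
    (False, True): 0.2,        # visibly obsolete: likely to be dropped
    (False, False): 0.4,       # obsolete, but nobody knows why it's there
}

def posterior_relevant(purpose_known: bool) -> float:
    """P(R | X is still taught, K = purpose_known)."""
    num = p_R * p_T_given[(True, purpose_known)]
    den = num + (1 - p_R) * p_T_given[(False, purpose_known)]
    return num / den

print(posterior_relevant(True))    # ~0.82: known purpose, probably still relevant
print(posterior_relevant(False))   # ~0.60: forgotten purpose, weaker evidence
```

With these made-up numbers (and a 50/50 prior on whether the purpose was recorded at all), P(relevant | still taught) alone is about 0.71, so a known purpose is evidence for relevance and a forgotten one is evidence against, matching the comment above.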
If you are using the model of memetic selection, then useful things X are unlikely to have the true explanations of why they are useful attached to them, but rather the most virulent explanations. Sometimes they are the same, but obviously often they aren’t. After all, Robin Hanson gets a lot of low-hanging fruit showing us how, for example, school isn’t about learning, etc.
Sometimes the most persistent combination would be a behavior or practice without an explicit explanation at all.
“The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics.”—G. H. Hardy, A Mathematician’s Apology (1941)
Just heard this quoted on The Infinite Monkey Cage.
Isn’t the place for this the Rationality Quotes thread?
It’s more to do with mathematics than rationality.
I think the quotes thread is pretty general and even mathematics quotes fit better at that place than here.
Still, I’d think that as a quote, it should be in the thread that we have that is intended specifically for quotes.
Does anyone have any recommended “didactic fiction”? Here are a couple of examples:
1) Lauren Ipsum (http://www.amazon.com/Lauren-Ipsum-Carlos-Bueno/dp/1461178185)
2) HPMoR
There’s a thread in the rationalist fiction subreddit for brainstorming rationalist story ideas which might interest people here.
Is LW the largest and most established online forum for discussion of AI? If yes, then we should be aware that we might be underestimating how widespread LW’s, or at least EY’s, ideas about AI are among the people that matter, like AI researchers.
I say this because I come across a lot of comments with the sentiment of lamenting the world’s AI researchers aren’t more aware of friendliness on the level that is discussed here. I might also just be projecting what I think is the sediment here, in that case, just ignore this comment. Thoughts?
Edit spelling
typo(s): sediment->sentiment