Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, “rationalist diaspora.” It seemed to me that people had run out of things to say, or that the booster-rocket thing had played itself out. This is probably because I don’t read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As “meet up in area X” took over the stream of content, I unsubscribed from my RSS reader. Over the past few years the feeling of a community completely evaporated for me. Good to hear that there is something going on somewhere, but it still isn’t clear where that is. So archiving LW and embracing the diaspora to me means so long and thanks for all the fish.
BrandonReinhart
When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.
Aside 1: I run into many developers who aren’t able to separate their idea from their identity. It tends to make them worse at customer and product oriented thinking. In a high bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea, because they think the group will shoot it down and they will be perceived as a generator of poor ideas. Or they might not relinquish an idea that the group wants to modify, or work on an alternative to, because they feel that, too, is failure. Or they might not critically evaluate their own idea to the standard they would evaluate any other idea that didn’t come from their mind. Over time it can lead to selective sidelining of that person in a way that needs a deliberate effort to undo.
The most effective collaborators are able to generate many ideas with varying degrees of initial quality and then work with the group to refine those ideas or reject the ones that are problematic. They are able to do this without taking collateral damage to their egos. These collaborators see the ideas they generate as products separate from themselves, products meant to be improved by iteration by the group.
I’ve seen many cases where this entanglement of ego with idea generation gets fixed (through involvement of someone who identifies the problem and works with that person) and some cases where it doesn’t get fixed (after several attempts, with bad outcomes).
I know this isn’t directly related to the post, but it occurred to me when I read the quoted part above.
Aside 2: I have similar mood swings when I think about the rationalist community. “Less Wrong seems dead, there is no one to talk to.” Then: “Oh look, Anna has a new post, the world is great for rationalists.” I think it’s different from the work-related swings, but it was also brought to mind by the post.
I’ve always thought that “if I were to give, I should maximize the effectiveness of that giving,” but I did not give much, nor did I consider myself an EA. I had a slight tinge of “not sure if EA is a thing I should advocate or adopt.” I had the impression that my set of beliefs probably didn’t overlap much with EAs’, and that I needed to learn more about where those gaps were and why they existed.
Recently, through Robert Wiblin’s Facebook, I have encountered more interesting arguments and content in EA. I had no concrete beliefs about EA, only vague impressions (not having had much time to research it in depth in the past). I had developed an impression that EA was about people maximizing giving to a self-sacrificial degree that I found uncomfortable. I have also repeatedly bounced off the animal activism—I have a hard time separating my pleasure in eating meat from my understanding of the ethical arguments. (So I figured I would be considered a lawful evil person by the average EA.)
However, now having read a few more things even just today, I feel like these are misplaced perceptions of the movement. Reading the 2014 summary, posted in a comment here from Tog, makes me think that:
EAs give in a pattern similar to what I would give. However, I personally weight the x-risk and teaching-rationality stuff probably a bit higher than the mean.
EAs give about as much as I’d be willing to give before I run into egoist problems (where it becomes painful in a stupid way I need to work to correct). So 10% seems very reasonable to me. For whatever reason, I had thought that “EA” meant “works to give away most of what they earn and lives a spartan life.” I think this comes from not knowing any EAs and instead reading 80,000 Hours and other resources without completely processing the message. Probably some selective reading going on, and I need to review how that happened.
The “donate to one charity” argument is so much easier for me to plan around.
Overall, I should have read the 2014 results much sooner; they helped me realize that my perspective is probably a lot closer to the average LWer’s than I had thought. This makes me feel like taking further steps to learn more about EA and making concrete plans to give some specific amount from an EA perspective are things I should do. Which is weird, because I could have done all of that anyway, but I was letting myself bounce off the unpleasant conclusions of giving up meat eating or giving a large portion of my income away. Neither of which I have to do in the short term to give effectively or participate in the EA community. Derp.
I’m curious about the same thing as [deleted].
Furthermore, a hard-to-use text may be significantly less hard to use in the classroom, where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as the recommendations you would give to someone taking a particular class at Berkeley or in some other environment where those resources are available.
A flat-out bad textbook might seem really good when it is actually something else (the teacher, the method, or the support) that makes the book work.
“A directed search of the space of diet configurations” just doesn’t have the same ring to it.
Steam Greenlight
Consider a robot vacuum.
Thanks for this. I hadn’t seen someone pseudocode this out before. It helps illustrate that interesting problems lie in the scope above (callers of tdt_utility(), etc.) and below (the implementation of tdt(), etc.).
I wonder if there is a rationality exercise in ‘write pseudocode for problem descriptions, explore the callers and implementations’.
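A minimal sketch of what that exercise might look like, in Python (the function names and placeholder logic are my own invention for illustration, not the pseudocode from the comment I’m replying to):

    # Hypothetical stubs for the exercise. The point: the interesting problems
    # live above tdt_utility() (who calls it, and with what inputs) and below
    # it (how it would actually be implemented).
    def tdt_utility(action, world_model):
        # "Below" scope: the real implementation is the open problem.
        return world_model.get(action, 0.0)  # stand-in scoring, not real decision theory

    def choose_action(actions, world_model):
        # "Above" scope: where do the action set and world model come from?
        return max(actions, key=lambda a: tdt_utility(a, world_model))

    # Exercising the stubs:
    print(choose_action(["cooperate", "defect"], {"cooperate": 1.0, "defect": 0.5}))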
Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.
Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like ‘characteristic of a metaphysical belief system.’ The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn’t consistent with my more fundamental, empirical beliefs.
So in my mind I have ‘WARNING!’ tags (intentionally) attached to mysticism. So when I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or to save time or avoid the risk of being eaten I often don’t approach at all.
If I find that I have a metaphysical belief or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick.
If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing—slightly reduce the size of the tag or create a branch through more specific concept definition and separation.
So I don’t really see value in attaching the mysticism tag to things that don’t directly warrant it. What you call a mystical litany I’d call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts.
So no, I won’t consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of “warning sign.” It’s not the only warning sign, but it’s a useful one.
As an aside, what are IFS and NVC?
Edit: Ah, found links.
IFS: http://en.wikipedia.org/wiki/Internal_Family_Systems_Model
NVC: http://en.wikipedia.org/wiki/Nonviolent_Communication
I had a dim view of meditation because my only prior exposure to it was in mystic contexts. Here I saw people talk about it separately from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out whether it was bullshit or not. It doesn’t seem unreasonable to me that folks who are interested could explore it and see what turns up.
Would I choose to do so? No. I have plenty of other low hanging fruit and the amount of non-mystic guidance around meditation seems really minimal, so I’d be paying opportunity cost to cover unknown territory with unknown payoffs.
I don’t feel oddly attached to any beliefs here. Maybe I’ll go search for some research. Right now I feel if I found some good papers providing evidence for or against meditation I would shift appropriately.
I don’t see myself updating my beliefs about meditation (which are weak) unduly because of an argument from authority. They changed because the arguments were reasoned from principles, or with a process, that I accept as sound. Reasoning like: “Fairly credible sources like Feynman claim they can learn to shift the perception of the center of self-awareness to the left. (Feynman was also a bullshitter, but let’s take this as an example...) What do we think he meant? Is what we think he meant possible? What is possible? Is that reproducible? Would it be useful to be able to do that? Should we spend time trying to figure out if we can do that?” That would be what I consider a discussion in the space of meditation-like stuff that is non-mystical and enjoyable. It isn’t going to turn me into a mystic any more than Curzi’s anecdotes about his buddy’s nootropics overdoses will turn me into a juicer.
I didn’t take away the message “meditation is super-useful.” I took away the message “meditation is something some people are messing with to see what works.” I’m less worried about that than I would be if someone said “eating McDonald’s every day for every meal is something some people are messing with to see what works,” because my priors tell me the latter is really harmful, whereas my priors tell me meditating every day is probably just a waste of time. A possibly non-mystical waste of time.
Now I’m worried comment-readers will think I’m a blind supporter of meditation. It is more accurate to say I went from immediate dismissal of meditation to a position of seeing the act of meditating as separable from a mystic context.
Now my wife is telling me I should actually be MORE curious about meditation and go do some research.
To address your second point first, the attendees were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW; a very small number were strong x-risk types; several were there only because of recent exposure to things like Harry Potter and were curious; many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team—and I was actively looking for them!
Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryonics than you’d find in any random sample of 30 people. I.e.: some amount >2 vs some amount close to 0. So we weren’t a wildly heterogeneous group.
As for the instructors—Anna and Luke were both very open about the fact that the rationality-education process is in its infancy and that among the various SIAI members there is discussion about how to proceed. I could be wrong, but I interpreted Eliezer as being somewhat skeptical of the minicamp process. When he visited, he said he had almost no involvement related to the minicamp; I believe he said he was mainly a sounding board for some of the ideas. I’m interpreting his involvement in this thread and related threads/topics as a belief shift on his part toward the minicamp being valuable.
I think your order-of-magnitude increases describe a bad conceivable scenario well, but they poorly describe the scenario I actually witnessed.
Now, for cost, I don’t know. I’m attending a guitar camp in August that will be 7 days and cost me $2000. I would put the value of minicamp a fair amount above the value of the guitar camp, but I wouldn’t necessarily pay $3000 to attend minicamp. To answer the price question I would ask:
1) What else do I plan to spend the $1500 on? What plans or goals suffer setbacks? What would I otherwise buy?
2) How much do I value the information gained from attending? I can see how it would be easier to measure the value of information from a guitar camp than from one about something that feels more abstract. So maybe the first step is to find the concrete value you’ve already gotten out of LW. If you’ve read the sequences and you think there are useful tools there, you might start with “what would be the estimated value of being able to clarify the things I’m unsure about?” Then take some measurement of the value you’ve already gotten from LW and do some back-of-the-napkin math with that.
3) Consider your level of risk aversion versus the value of minicamp now vs. later. If these new minicamps are successful, more people will post about them. Attendees will validate or negate past attendees’ experiences. If $1500 is too much for you when measured against your estimate of the payoff discounted by risks, you can simply wait. Either the camps will be shown to be valuable or they will be shown to be low value.
4) Consider some of the broad possible future worlds that follow from attending minicamp. In A you attend and things go great; you come out with new rationality tools. In B you attend, your reaction is neutral, and you don’t gain anything useful. In C you attend and have poor experiences or, worse, suffer some kind of self-damage (e.g., your beliefs shift in measurably harmful ways that your prior self would not have agreed to submit to ahead of time). Most attendees are suggesting you’ll find yourself in worlds like A. We could be lying because we all exist in worlds like C, or we’re in B but feel an obligation to justify attending the camp, or whatever. Weigh your estimate of our veracity against your risk aversion and update the connected values. (A rough sketch of this kind of estimate appears after this list.)
I would suggest it is unlikely that the SIAI is so skilled at manipulation that they’ve succeeded in subverting an entire group of people from diverse backgrounds with some predisposition to be skeptical. Look for evidence that some people exist in B or C (probably from direct posts stating as much—people would probably want to prevent other people from being harmed).
There are other things to put into a set of considerations around whether to spend the money, but these are some.
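A back-of-the-napkin version of points 2–4 might look like this in Python (the probabilities and dollar values below are placeholders for illustration, not my actual estimates):

    # Illustrative numbers only: substitute your own probabilities and values.
    cost = 1500.0
    scenarios = {
        "A_useful_tools": (0.6, 4000.0),   # attend and come out with new tools
        "B_neutral":      (0.3, 0.0),      # attend and gain nothing useful
        "C_harmful":      (0.1, -2000.0),  # attend and come out worse off
    }
    expected_value = sum(p * v for p, v in scenarios.values()) - cost
    print(f"Expected value of attending: ${expected_value:+.2f}")

If the result is negative, or positive but not by enough to beat whatever else you’d do with the $1500 (point 1), then waiting for more attendee reports (point 3) is the cheap option.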
I feel like most of the value I got out of the minicamp in terms of techniques came early. This is probably due to a combination of effects:
1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.
2) I was not well mentally organized—my rationality concepts were all individual floating bits, not well sewn together—so I reached a point where new concepts didn’t fit into my map very easily.
I agree things got more disorganized; in fact, I remember on a couple of occasions seeing the “this isn’t the outcome I expected” look on Anna’s face, and the attempt to update and try a different approach, or go with the flow and see where things were leading. I marked this responsiveness as a good thing.
As for your “ugly,” it’s important to note that that was a casual discussion among attendees. I suppose this highlights the risk that new ideas gain credibility simply by close temporal association with other ideas you’re giving credibility to. Example: I talked to a lot of curious people that week about how Valve’s internal structure works, but no one should run off and establish a Valve-like company without understanding Valve’s initial conditions, goals, employee make-up, and other institutions, and comparing them with their own.
I attended the 2011 minicamp.
It’s been almost a year since I attended. The minicamp has greatly improved me along several dimensions.
I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I’m more aware of how I’m expressing my body language. It’s not perfect control and I’ve not magically become an extrovert, but I’m better able to interact in random social situations successfully. Concretely: I’m able to sit and stand around people I don’t know and feel and present myself as relaxed. I dress better and people have noticed and I’ve received multiple comments to that effect. I’ve chosen particular ways to present myself and now I get comments like ‘you must play the guitar’ (this has happened five times since minicamp haha). This is good since it loads the initial assumptions I want the person to load.
I’ve intentionally hacked my affectation towards various things to better reach my goals. For years I never wanted to have children. My wife said (earlier this year, after minicamp) that she wanted to have kids. I was surprised and realized that given various beliefs (love for wife, more kids good for society, etc) I needed to bring my emotions and affectations in line with those goals. I did this by maximizing positive exposure to kids and focusing on the good experiences...and it worked. I’m sure nature helped, but I came to a change of emotional reaction that feels very stable. TMI: I had my vasectomy reversed and am actively working on building kid version 1.0
Minicamp helped me develop a better mental language for reasoning around rationalist principles. I’ve got tools for establishing mental breakpoints (recognizing states of surprise, rationalization, etc) and a sense for how to improve on weak areas in my reasoning. I have a LOT of things I still need to improve. Many of my actions still don’t match my beliefs. The up side is that I’m aware of many of the gaps and can make progress toward solving them. There seems to be only so much I can change at once, so I’ve been prioritizing everything out.
I’ve used the more concise, direct reasoning around rationality at my job at Valve Software. I use it to help make better decisions. Concretely: when making decisions around features to add to DOTA 2, I’ve worked particularly hard at quickly relinquishing failed ideas that I generated. I have developed litanies like “my ideas are a product, not a component of my identity.” Before I enter into interactions I pause and think “what is my goal for this interaction?” The reasoning tools from minicamp have helped me better teach and interpret the values of my company (which are very similar). I helped write a new employee guide that captures Valve values but uses tools such as Anna Salamon’s “Litany for Simplified Bayes” to cut straight to the core concepts: “If X is true, what would the world look like?” “If X is not true, what would the world look like?” “What does the world look like?” I’ve been influential in instituting prediction meetings before we launch new features.
I’ve been better able to manage my time, because I’m more aware of the biases and pitfalls that lie before me. I think more about what ‘BrandonReinhart2020’ wants than what the current me wants. (Or at least, my best guess as to what I think he would want...like not being dead, and being a bad ass guitar shredder, etc). This has manifested itself concretely in my self-education around the guitar. When I went to minicamp I had only just started learning guitar. Since then I’ve practiced 415 hours (I work full time, so this is all in my spare time) and have developed entirely new skills. I can improv, write songs, etc. Minicamp provided some inspiration, yes, but there were also real tools that I’ve employed. A big one was coming home and doing research on human learning and practice. This helped me realize that my goals were achievable. Luke gave sessions on how to do efficient research. Critch gave a session on hacking your affectations. I used this to make practice something I really, really like doing (I listened to music I liked before practicing, I would put objects like role-playing books or miniatures that I liked around my practice area—nerdy yes, but it worked for me—and I would drink a frosty beer after practicing three hours in a row. Okay so that last one shows that my health beliefs and goals may not be entirely in line, but it served an objective here). Now I can easily practice for 3 hours and enjoy every moment of it. (This is important, before I would use that time for World of Warcraft and other pursuits that just wasted time and didn’t improve me.)
I’ve been in the Less Wrong orbit for a long time and have had the goal of improving my rationality for a long time. I’ve read Yudkowsky’s writing since the old SL4 days. I followed Overcoming Bias from the beginning. I can’t say that I had a really good grasp on which concepts were the most important until after minicamp. There’s huge value in being able to ask questions, debate a point, and just clarify your confusion quickly.
I have also been an SIAI skeptic. John Salvatier and I both thought that SIAI might be a little religion-like. Our mistake. The minicamp was a meeting of really smart people who wanted to help each other win more. It was genuinely about mental and social development and the mastery of concepts that seem to lead to a better ability to navigate complex decision trees toward desired outcomes.
While we did talk about existential risk, the SIAI never went deep into high-shock-level concepts that might alienate attendees. It wasn’t an SIAI funding pitch. It wasn’t an AGI pitch. In fact, I thought they almost went too light on the subject (but I came to modern rationality from trans/posthumanism, and most people in the future will probably get to trans/posthumanism from modern rationality, so discussions about AGI and such feel normal to me). The point is that if you have concerns about this, you’ll feel a lot better as you attend.
I would say the thing that most discomforted me during the event was the attitude toward meditation. I realized, though, that this was an indicator of my preconceptions about meditation and not necessarily of facts about meditation. After talking to several people about it, I learned that there isn’t any funky mysticism inherent to meditation, just mysticism closely associated with it. Some people are trying to figure out whether it can be used as a tool and are trying to figure out ways to experiment around it, etc. I updated away from “meditation is a scary religious thing” toward “meditation might be another trick for the bag.” I decided to let other people bear the burden/risk of doing the research there, though. :)
Some other belief shifts related to minicamp: I have greatly updated toward the Less Wrong style rationality process as being legitimate tools for making better decisions. I have updated a great deal toward the SIAI being a net good for humanity. I have updated a great deal toward the SIAI being led by the right group of people (after personal interactions with Luke, Anna, and Eliezer).
Comparing minicamp to a religious retreat seems odd to me. There is something exciting about spending time with a bunch of very smart people, but it’s more like the kind of experience you’d have at a domain-specific research summit. The experience isn’t designed to manipulate through repeated and intense appeals to emotion, guilt, etc. (I was a Wesleyan Christian when I was younger and went to retreats like Emmaus, and I still remember them pressing a nail sharply into my palm as I went to the altar to pray for forgiveness.) It’s more accurate to think of minicamp as a rationality summit, with the instructors presenting findings, sharing techniques for the replication of those findings, and an ongoing open discussion of the findings and the process used to generate them. And like any good summit, there are parties.
If you’re still in doubt, go anyway. I put the probability of self-damage due to attending minicamp at extremely low, compared to self-damage from attending your standard college level economics lecture or a managerial business skills improvement workshop. It doesn’t even blip on a radar calibrated to the kind of self-damage you could do speculatively attending religious retreats.
If you’re a game developer, you would probably improve your ability to make good decisions around products more by attending SIAI Minicamp than you would by attending GDC (of course, GDC is still valuable for building a social network within the industry).
What we know about cosmic eschatology makes true immortality seem unlikely, but there’s plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:
Cirkovic “Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings”
Adams “Long-term astrophysical processes”
for excellent overviews of the current best estimates of how long a human-complexity mind might hope to survive.
Just about everything Cirkovic writes on the subject is really engaging.
More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.
(This is one place where the practical reasoning around cryonics hits ugh fields...)
Small-tissue cryonics has been more advanced than whole-body cryonics. This may not be the case anymore, but it certainly was, say, four years ago. So storing your brain alone gave you a better bet at good information retention than storing the whole body. I believe that whole-body methods have improved somewhat in the past few years, but they still have a ways to go. Part of the problem lies in efficient perfusion of cryoprotectants through the body.
If you place credence on the possibility of ems, then you might consider investing in neuro-preservation. In that case, you wouldn’t need revival, only good scanning and emulation tech.
Edit: Also, I highly recommend the Alcor site. The resources there span the gamut from high level to detailed and there’s good coverage of the small tissue and cryoprotectant problems among other topics. http://www.alcor.org/sciencefaq.htm
Your company plan sounds very much like how Valve is structured. You may find it challenging to maintain your desired organizational structure, given that you also plan to be dependent on external investment. Also, starting a company with the express goal of selling it as quickly as possible conflicts with several ways you might operate your company to achieve a high degree of success. Many of the recent small studios that have gone on to generate large amounts of revenue relative to their size (Terraria, Minecraft, etc.) are independently owned and build “service-based” software that seeks to keep the community engaged.
Alexei, I would suggest an alternative and encourage you to apply to Valve. 1) It wouldn’t take much of your time to try. 2) It may help you reach your goals more quickly. 3) You don’t have to invest in building a rational organization (which is costly and hard), since one already exists.
It would be a career-oriented decision (few people leave Valve once they start there), and I know you are interested in applying yourself to existential risk as completely as you can, but you should consider the path of getting really good at satisfying the needs of customers and then, through that, directing resources toward the existential-risk problem. It may feel like you are less engaged, because you aren’t there solving the hard problems yourself—and if you have a high degree of confidence that you are the one to solve those problems, then maybe you should pursue a direct approach—but it is a path you should give serious thought to.
I wouldn’t advise you to go work at any random company. Most game companies—particularly large ones—are structured in a way that doesn’t give you a good chance of individual success (versus working anywhere else or doing something else).
Valve has one of the highest profits per employee in the world and is wholly owned by its employees. The company compensates very well. So my advice is specific to considering an application to Valve.
Donation sent.
I’ve been very impressed with MIRI’s output this year, to the extent that I am able to judge it. I don’t have the domain-specific ability to evaluate the papers, but there is a sustained frequency of material being produced. I’ve also read much of the thinking around VAT, related open problems, and definitions of concepts like foreseen difficulties… the language and framework for carving up the AI safety problem have really moved forward.