Hell, it’s not even just the Bay Area; Seattle has two explicitly rationalist group houses and plenty of other people who live in more “normal” situations but with other rationalists (I found my current flatmate, when my old one moved out, through the community). Certainly the Bay Area rationalist community is large and this sort of living situation is far from universal, but I’ve heard of several such houses even though I’ve never actually visited any.
Gah, thank you, edited. Markdown is my nemesis.
Agreed that the above won’t work for all people, not even all people who say
“I haven’t and probably can’t internalize it on a very deep, systematic level, no matter how many times I re-read the articles”
Nonetheless, I find it a useful thing to consider, both because it’s a lot easier (even if there isn’t yet such a group in your area) than writing an entire LW-inspired rationality textbook, and because it’s something that a person can arrange without needing to have already internalized everything (which might be a prerequisite for the “write the textbook” approach). It also provides a lot of benefits that go well beyond solving the specific problem of internalizing the material (I have also discovered new material I would not have found as early, if at all; I have engaged in discussions related to the readings that caused me to update other beliefs; I have formed a new social circle of people with whom I can discuss topics in a manner that none of my other circles support; etc.).
For what it’s worth, I got relatively little[1] out of reading the Sequences solo, in any form (and RAZ is worse than LW in this regard, because the comments were worth something even on really old and inactive threads, and surprisingly many threads were still active when I first joined the site in 2014).
What really did the job for me was the reading group started by another then-Seattleite[2]. We started as a small group (I forget how many people the first meetings had, but it was a while before we broke 10 and longer before we did it regularly) that simply worked through the core sequences—Map & Territory, then How to Actually Change Your Mind—in order (as determined by the posts on the sequences themselves at first, and later by the order of Rationality: From AI to Zombies chapters). Each week, we’d read the next 4-6 posts (generally adjusted for length) and then meet for roughly 90 minutes to talk about them in groups of 4-8 (as more people started coming, we began splitting up for the discussions). Then we’d (mostly) all go to dinner together, at which we’d talk about anything—the reading topics, other Rationality-esque things, or anything else a group of smart mostly-20-somethings might chat about—and the next week we’d do it again.
If there’s such a group near you, go to it! If not, try to get it started. Starting one of these groups is non-trivial. I was already considering the idea before I met the person who actually made it happen (and I met her through OKCupid, not LessWrong or the local rationality/EA community), but I wouldn’t have done it anywhere near as well as she did. On the other hand, maybe you have the skills and connections (she did) and just need the encouragement. Or maybe you know somebody else who has what it takes, and need to go encourage them.
[1] Reading the Sequences by myself, the concepts were very “slippery”; I might have technically remembered them, but I didn’t internalize them. If there was anything I disagreed with or that seemed unrealistic—and this wasn’t so very uncommon—it made me discount the whole post to effectively nothing. Even when something seemed totally, brilliantly true, it also felt untested to me, because I hadn’t talked about it with anybody. Going to the group fixed all of that. While it’s not really what you’re asking for, you may find it does the trick.
[2] She has since moved to (of course) the Bay Area. Nonetheless, the group continues (and is now roughly two years running, hitting nearly every Monday evening). We regularly break 20 attendees now, occasionally break 30, and the “get dinner together” follow-up has grown into a regularly-scheduled weekly event in its own right at one of the local rationalist houses.
I don’t think “converse” is the word you’re looking for here—possibly “complement” or “negation” in the sense that (A || ~A) is true for all A—but I get what you’re saying. Converse might even be the right word for that; vocabulary is not my forte.
If you take the statement “most beliefs are false” as given, then “the negation of most beliefs is true” is trivially true but adds no new information. You’re treating positive and negative beliefs as though they’re the same, and that’s absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define “anything except that one specific experience” as “an experience”, then you can define a negative belief as a belief, but at that point I think you’re actually falling into exactly the trap expressed here.
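To put rough numbers on that asymmetry, here’s a minimal sketch (the space of 1024 equally likely, mutually incompatible candidate beliefs is a made-up assumption for illustration):

```python
import math

# Assumed: N equally likely, mutually incompatible candidate beliefs.
N = 1024

positive_bits = math.log2(N)            # "it is X": pins down 1 option of N
negative_bits = math.log2(N / (N - 1))  # "it is not X": rules out 1 of N

print(f"positive belief: {positive_bits:.2f} bits")  # 10.00
print(f"negative belief: {negative_bits:.4f} bits")  # ~0.0014
```

The positive belief tells you exactly which experience to anticipate; the matching negative belief barely narrows the space at all.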
If you replace “belief” with “statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category” (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then “true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category” is something that I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible “statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category” being true.
As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence—enough to rule out half the options—to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I’ve narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it’s only half of the necessary evidence. I’d need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
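The same arithmetic in code, as a sketch (it assumes all hypotheses start out equally likely; `bits_of_evidence` is just a name I made up for the log-ratio):

```python
import math

def bits_of_evidence(options_before: int, options_after: int) -> float:
    """Bits gained by narrowing a set of equally likely hypotheses."""
    return math.log2(options_before / options_after)

print(bits_of_evidence(1024, 1))   # 10.0  -- needed to single out one belief
print(bits_of_evidence(1024, 33))  # ~4.96 -- the "216 +/- 16" experiment
print(bits_of_evidence(1024, 32))  # 5.0   -- after ruling out one more option
```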
Replying loooong after the fact (as you did, for that matter) but I think that’s exactly the problem that the post is talking about. In logical terms, one can define a category “human” such that it carries an implication “mortal”, but if one does that, one can’t add things to this category until determining that they conform to the implication.
The problem is, the vast majority of people don’t think that way. They automatically recognize “natural” categories (including, sometimes, of unnatural things that appear similar), and they assign properties to the members of those categories, and then they assume things about objects purely on the basis of their appearing to belong to that category.
Suppose you encountered a divine manifestation, or an android with a fully-redundant remote copy of its “brain”, or a really excellent hologram, or some other entity that presented as human but was by no conventional definition of the word “mortal”. You would expect that, if shot in the head with a high-caliber rifle, it would die; that’s what happens to humans. You would even, after seeing it get shot, fall over, stop breathing, cease to have a visible pulse, and so forth, conclude that it is dead. You probably wouldn’t ask this seeming corpse “are you dead?”, nor would you attempt to scan its head for brain activity (medically defining “dead” today is a little tricky, but “no brain activity at all” seems like a reasonable bar).
All of this is reasonable; you have no reason to expect immortal beings to walk among us, or non-breathing headshot victims to be capable of speech, or anything else of that nature. These assumptions go so deep that it is hard to even say where they come from, other than “I’ve never heard of that outside of fiction” (which is an imperfect heuristic; I learn of things I’d never heard about every day, and I even encountered some of the concepts in fiction before learning they really exist). Nobody acknowledges that it’s a heuristic, though, and that can lead to making incorrect assumptions that should be consciously avoided when there’s time to consider the situation.
@Caledonian2 said “If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.”, but this statement is self-contradictory unless the implication “human” → “mortal” is logically false. If the implication holds by definition, then mortality itself is part of “the necessary criteria for identification as human”, and would have to be observed after all.
Agreed. “Torture” as a concept doesn’t describe any particular experience, so you can’t put a specific pain level to it. Waterboarding puts somebody in fear for their life and evokes very well-ingrained terror triggers in our brain, but doesn’t really involve pain (to the best of my knowledge). Branding somebody with a glowing metal rod would cause a large amount of pain, but I don’t know how much—it probably depends on the size, location, and so on anyhow—and something very like this can be done on a small scale as a medical operation, to sterilize a wound or similar. Tearing off somebody’s finger- and toenails is said to be an effective torture, and I can believe it, but it can also happen fairly painlessly in the ordinary turn of events; I once lost a toenail and didn’t even notice until something touched where it should have been (though I’d been exercising, which suppresses pain to a degree).
If you want to know how painful it is to, say, endure the rack, I can only say I hope nobody alive today knows. Same if you want to know the pain level where an average person loses the ability to effectively defy a questioner, or anything like that...
I haven’t investigated selling it, but up to a certain multiple of my annual salary it’s included in my benefits, and there’s no point in setting it lower than that; I wouldn’t get any extra money.
This is a fairly standard benefit from tech companies (and others that have good benefits packages in the US), apparently. It feels odd but it’s been like this at the last few companies I worked for, differing only in the insurance provider whose policy is used and the actual limit before you’d need to pay extra.
Nitpick: the article talks about a rabbit kidney, not a mouse one.
It also isn’t entirely clear how cold the kidney got, or how long it was stored. It’s evidence in favor of “at death” cryonics, but I’m not sure how strong that evidence is. Also, it’s possible to survive with substantially more kidney damage than you would even want to incur as brain damage.
Many employers provide life insurance. I’ve always thought that was kind of weird (but then, all of life insurance is weird; it’s more properly “death insurance” anyhow) but it’s a thing. My current employer provides (at no cost to me) a life insurance policy sufficient to pay for cryonics. The payout would currently go to charity—I have no dependents and my family is reasonably well off—but I’ve considered changing that.
Speaking as a SCUBA diver, the equipment is not designed to handle high airflow (such as you need when working hard on a bicycle), so even if the air tank itself wasn’t a problem you’d need, at a minimum, a heavily-adjusted second-stage (the one with the mouthpiece) regulator. Possibly a different regulator set altogether. On the other hand, one of the design considerations of a second-stage reg is that the purge valve needs to resist water pressure, including the pressure of swimming; air would generally not have that problem (and you probably wouldn’t have any need for a purge anyhow).
Even basic filter masks can cut down on particulate air pollution by a lot. I’ve spent some time in places with truly horrific air quality—the kind that makes LA seem clear and fresh-smelling—and a lot of people wear something over their face, even just a strip of cloth, when they go out (and sometimes also at night or even all the time). I don’t know how practical they’d be at filtering out anything likely to cause headaches in traffic, and they’re not terribly comfortable to wear, but it might be an option. Of course, in the US, the most common reason you see everyday people wearing something like that is if they’re sick and don’t wish to spread germs from their breath / sneezes, so people may be reluctant to shake your hand...
How many technologies are you aware of that don’t have a harmful potential application? I mean, (electronic) computers were invented for military purposes and can enable all kinds of mischief on the Internet. Refrigeration makes military logistics a lot easier. Hell, internal combustion drives tanks and other military vehicles. GPS makes cruise missiles easier, but pre-GPS ICBMs just used inertial targeting; that’s close enough for thermonuclear bombs.
In HPMOR, Harry figures out a large number of ways to make weapons out of the materials present in a low-tech classroom. I doubt anything short of reducing the world to subsistence farming (and no more than that) is sufficient to bring about a state in which
“… no nation will be in a position to commit an act of physical aggression against any neighbor”
The social stigma of something like that seems like you’re basically throwing away any hope of rehabilitation, but it’s hardly as if the US is much good at that anyhow.
Orion requires quite a few detonations, though; even with a massive craft (much of which is pusher plate and shock absorbers) to absorb the impact, you have to use fairly low-yield bombs and each only provides a relatively short period of thrust. You could possibly design something that takes higher yields (especially higher relative to the vehicle mass) that would survive reaching orbit on one detonation, but it would be subjected to extreme acceleration—the kind that would crush any satellite launched thus far—and I suspect there might be too much risk of tumbling given the non-uniformity of the atmosphere.
That’s worth checking (both in terms of what Apple claims, and in terms of what any relevant legal precedents claim; a hardware warranty certainly shouldn’t be at risk from a software modification). On the other hand, it should be easy to “un-jailbreak” a device; just restore an un-jailbroken image onto it (for example, from a backup made before jailbreaking), and you can do so before sending the device in for warranty service. If the device is “bricked” to the point that you can’t restore it, then Apple probably can’t tell that it was jailbroken, either.
Creating a post in Discussion only requires “a few” points of karma; creating one in Main requires 20. I believe 20 is also required for creating a Meetup post; in most ways those appear to be treated as posts to Main (for example, up- and down-votes on them count for 10x the usual amount of poster karma).
Source: The LW FAQ, specifically http://wiki.lesswrong.com/wiki/FAQ#Why_do_I_want_high_karma.3F
5+ years later, I’m curious: have you attempted this? If so, how did the attempt go? If not, is there a clear reason?
It’s probably a lot more effective to draw the water from ~10m down: the infrastructure costs are far lower, you probably won’t need to insulate the water quite so much for coastal regions (to keep it from warming en route to the surface), and you won’t need to pump so hard (you won’t have a vertical kilometer of negative buoyancy to fight, since deeper water is denser than the shallower water around your pipe).
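For what it’s worth, the buoyancy penalty alone is small; here’s a back-of-envelope sketch (the seawater densities are assumed typical values, and friction losses, which probably dominate over a kilometer of pipe, are ignored):

```python
RHO_SURFACE = 1024.0  # kg/m^3, warm near-surface seawater (assumed value)
RHO_DEEP = 1027.0     # kg/m^3, cold ~1 km-deep seawater (assumed value)

def extra_head_m(intake_density: float, depth_m: float) -> float:
    """Extra pump head, in metres of surface water, needed because the
    water filling the pipe is denser than the column it displaces."""
    return (intake_density - RHO_SURFACE) * depth_m / RHO_SURFACE

print(f"~{extra_head_m(RHO_DEEP, 1000):.1f} m extra head from 1 km down")  # ~2.9
print(f"~{extra_head_m(RHO_SURFACE, 10):.1f} m extra head from 10 m down")  # ~0.0
```

By this rough estimate the buoyancy term is only a few metres of head even in the kilometer-deep case, so the shorter pipe run and reduced warming en route are probably the larger savings.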
For coastal regions, this might actually work, though those tend to be relatively moderate to start with (courtesy of the water). It would take a ton of infrastructure to get it installed in more than a small, clustered set of buildings / public property, though. For inland regions, you then need to pump cold (it can’t be allowed to warm up much) corrosive (seawater is a pain) water over a long distance in a hot part of the world. Upon its arrival, you still need to get it into the heat exchangers that you have installed wherever financially practical. Then you have to get rid of the resulting slightly-warmer corrosive seawater.
Disclaimer: I don’t use iThings except occasionally for work, and those ones are always jailbroken. My knowledge of what Apple does and does not permit the nominal owners of their devices to do is limited.
You may be able to save a backup of your iPad’s current state to your computer, with the possibility of future restoration. This would back up both the apps and their data. You could then delete the apps (which deletes their data). If you wanted to play the apps again, you could take a new backup and then restore the old one. Obvious downside here: if you ever do want to revert, you’ll have to (at least temporarily) do without the progress you made since the initial backup.
Alternatively, delete only those games which sync their progress to an external service, after you perform the aforementioned synchronization. I don’t know which games those are, but they exist. Cross-platform ones, and those from major dev houses, are more likely to offer this feature.
… or you could jailbreak. There was a new one just released. You don’t have to do much with it except back up your own data, if you want. That’s one of the major reasons I rooted my phone.
Agreed. If you’re not willing to say “Nope, you crossed the line. See you next time, I’ll decide when that is, goodbye” (or similar) and leave (cut them off to whatever degree is needed to stop the harmful behavior), then you need to not give them an opportunity to start again. If you are willing to do so, though, or some other approach to ensuring your boundaries are respected, go ahead.
For the record, while I have a pretty good relationship with both my parents, I do not buy the line that a person always has an obligation to their parents. Sure, there usually is one, but your parent(s) put a finite amount of utility into your life, and negative utility is a thing. Parents trying to run the lives of their adult offspring drives me up a wall. Unless there’s something unusual about your capacity for self-reliance, at 25 you should not be living under anybody’s thumb to the degree described even without the negatives such as undesired/inappropriate criticism.
Moderately true of Seattle as well (two group houses, plus some people living as housemates or whatever but not explicitly in a Rationalist Group House). I’m not sure our community is big enough for something like this, but I love the idea, and it would be a point in favor of moving to the Bay Area if there were one there (that I had a chance to move into) but not one here.