Last week, after a lot of thought and help from LessWrong, I finally stopped believing in god and dropped my last remnants of Catholicism. It turned out to be a huge relief, though coping with some of the consequences and realizations that come with atheism has been a little difficult.
Do any of you have tips based on things you noticed about yourselves or others just after leaving religion? I’ve noticed a few small habits I need to get rid of, but I am worried I’m missing larger, more important ones.
Are there any particular posts I should skip ahead and read? I am currently at the beginning of reductionism.
Are there any beliefs you’ve noticed ex-Catholics holding that they don’t realize are obviously part of their religion?
I do not have anyone immediately around me I can ask, so I am very grateful for any input.
Thank you!
Unlike religion, here no one claims to be all-knowing or infallible. Which, from my point of view at least, is why LessWrong is so effective. Reading the arguments in the comments of the sequences was almost as important as reading the sequences themselves.
I wouldn’t mind the paradise part or the living forever part though.
Yes, of course. I was mostly just trying to be funny. One could keep the joke going and compare the monthly meetups, Winter Solstice meetup, the Effective Altruist movement, the Singularity, and so on to their complements in Christianity.
Speaking from experience: don’t kneejerk too hard. It’s all too easy to react against everything even implicitly associated with a religion or philosophy whose truth-claims you now reject, and to distort parts of your personality, day-to-day life, emotions, or symbolic thought that have nothing to do with what you have rejected.
Thank you.
Last week was full of “Is this religious? Yes? No? I can’t tell!”
My brain has thankfully returned to normal function, and I will avoid intently analyzing every thought for religious connotations.
The lack of guilt is nice, and I don’t want to bring it back by stressing about the opposite.
Similarly, there’s no need to be scared of responding positively to art or other ideas because they originated from a religious perspective; if atheism required us to do that, it would be almost as bleak a worldview as it’s accused of being. Adeste Fideles doesn’t stop being a beautiful song when you realize its symbols don’t have referents. I think of the Christian mythology as one of my primary fantasy influences—like The Lord of the Rings, Discworld, The Chronicles of Thomas Covenant or Doctor Who—so, if I find myself reacting emotionally to a Christian meme, I don’t have to worry that I’m having a conversion experience (or that God exists and is sneakily trying to win me over!): it’s perfectly normal, and lawful, for works of fiction to have emotional impact.
The religious allusions seem even more blatant now, but there is no way I’m getting rid of my copy of The Chronicles of Narnia. I still feel the urge to look in the back of wardrobes.
Thank you. I had a religious song stuck in my head yesterday, but I remembered reading your comment and was able to bypass the feeling of guilt.
What others already said: Don’t try to reverse stupidity by avoiding everything connected to Catholicism. You are allowed to pick the good pieces and ignore the bad pieces, instead of buying or rejecting the whole package. Catholics also took some good parts from other traditions; which, by the way, means you don’t even have to credit them for inventing the good pieces you decide to take.
If you talk with other religious people, they will probably try the following trick on you: give you a huge book, saying that it actually answers all your questions, and that you should at least read this one book and consider it seriously before you abandon religion completely. Of course, if you read the whole book and it doesn’t convince you, they will give you another huge book. And another. And another. The whole strategy is to surround you with religious memes (even more strongly than most religious people are surrounded), hoping that sooner or later something will “trigger” your religious feelings. And no matter how many books you read, if at some moment you refuse to read yet another book, you will be accused of leaving the religion only because of your ignorance and stubbornness, because this one specific book certainly did contain all the answers to your questions and perfectly convincing counterarguments to your arguments, and you just refused to even look at it. This is a game you cannot win: there is no “I have honestly considered all your arguments and found them unconvincing” exit node; the only options given to you are either to give up, or to do something that will allow your opponents to accuse you of being willfully ignorant. (So you might as well do the “ignorant” thing now, and save yourself a lot of time.)
Don’t try to convince other people, at least not during the first months after deconversion. First, you need to sort out things for yourself (you don’t have a convincing success story yet). Second, by the law of reciprocation, if the other people were willing to listen to your explanations, this in turn gives them the moral right to give you a huge book of religious arguments and ask you to read it, which leads to the game described above.
Basically, realize that you have a right to spend most of your time without thinking about Catholicism, either positively or negatively. That is what most atheists really do. If you were born on another planet, where religion wasn’t invented, you wouldn’t spend your time arguing against religion. Instead, you would just do what you want to do. So do it now.
It reminds me of Transactional Analysis saying the best way to keep people in mental traps is to provide them two scripts: “this is what you should do if you are a good person”, but also “this is what you will do if you become a bad person (i.e. if you refuse the former script)”. So even if you decide to rebel, you usually rebel in the prescribed way, because you were taught to only consider these two options as opposites… while in reality there are many other options available.
The real challenge is to avoid both the “good script” and the “bad script”.
Thank you for the advice. I’ve started by rereading the scientific explanations of the big bang, evolution, and basically most general scientific principles. Looking at it without constant justification going on in my mind is quite refreshing.
So far I’ve been able to avoid most of the arguments, though I was surprised by how genuinely sad some people were. I’m going to keep quiet about religion for a while, and figure out what other pieces of my worldview I need to take a rational, honest look at.
I find I have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.
What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven’t seen much discussion on the most effective ways to fund their prevention. Have I missed something?
Biotech x-risk is a tricky subject, since research into how to prevent it is also likely to provide more information on how to engineer biothreats. It’s far from trivial to know which lines of research will decrease the risk, and which will increase it. One doesn’t want a 28 Days Later type situation, where a lab doing research into viruses ends up being the source of a pandemic.
Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren’t AI risk. If you anticipate an intelligence explosion but aren’t worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).
So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren’t AI risk.
You’re ignoring time. If you expect a sufficiently powerful FAI to arise, say, not earlier than a hundred years from now, and you think that the coming century has significant x-risks, focusing all the resources on the FAI might not be a good idea.
Not to mention that if your P(AI) isn’t close to one, you probably want to be prepared for the situation in which an AI never materializes.
As far as I remember from LW census data, the median predicted date for an AGI intelligence explosion didn’t fall in this century, and more people considered bioengineered pandemics the most probable X-risk in this century than UFAI.
Close. Bioengineered pandemics were the GCR (global catastrophic risk — not necessarily as bad as a full-blown X-risk) most often (23% of responses) considered most likely. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.
“At the time of rejection, the player, not the respondent, should be in a position of vulnerability. The player should be sensitive to the feelings of the person being asked.”
How does one implement this? One of my barriers to social interaction is its ethical aspect; I feel uncomfortable imposing on others or making them uncomfortable. Using other people for one’s own therapy seems a bit questionable. Does anyone have anything to share about how to deal with guilt-type feelings and avoid imposing on others during rejection therapy?
I used to have the same, to the extent that I wouldn’t even ask teachers, people paid to help me, for help. I hated the feeling that I was a burden somehow. But I got over it in the space of a couple of months by getting into a position where people were asking me for help all the time, and that made me realize it wasn’t an unpleasant or annoying experience: I actually liked it, and others were probably the same. In most cases you’re doing people a favor by giving them a chance to get warm-fuzzies for what’s (usually, in the case of rejection therapy) a relatively simple request to fulfill.
Of course, there are still certain requests that might be uncomfortable to reject, and my thoughts on those are that they’re usually the ones where you feel like you’ve left someone out who really needed your help. So to get over this, don’t choose things to ask for that are going to go badly if you don’t get them; for instance, asking for a ride when it’s pouring out, or telling someone you need some money to call your kids at home so they don’t worry (instead of just ‘I need to make a call’). As long as what you ask is casual and you don’t seem desperate, people should have no problem rejecting it without feeling bad, and to lessen any impact even more you can smile and say ‘no problem, thanks anyway’ or something similar to show you’re alright without it.
Also, use your sense: if you ask and they look uncomfortable, going ‘oh, umm, well...’, you should be the one to jump in and say ‘hey, it’s no problem, you look busy so I’ll check with someone else’ or something like that, rather than waiting for them to have to say ‘no’ outright. Some people don’t mind just saying no outright, some people do, so be attuned to that and no one should be uncomfortable. Good luck!
In general, people in a public space are to an extent consenting to interact with other humans. If they aren’t, we have a system of recognized signals for it: Walking fast, looking downward, listening to music, reading, etc. I don’t think you should feel too guilty about imposing a brief few seconds of interaction on people out and about in public.
It’s argued there’s a risk that in the event of a global catastrophe, humanity would be unable to recover to our current level of capacity because all the easily accessible fossil fuels that we used to get here last time are already burned. Is there a standard, easily Googlable name for this risk/issue/debate?
Can’t help you out with an easy moniker, but I remember that problem being brought up as early as in Olaf Stapledon’s novel Last and First Men, published 1930.
I remember a short story posted on LW a few years ago about this. It was told from the perspective of people in a society of pre-industrial tech, wondering how (or even if) their mythical ancestors did these magical feats like riding around in steel carriages faster than any horse and things like that. The moral being that society hadn’t reached the required “escape velocity” to develop large-scale space travel and instead had declined once the fossil fuels ran out, never to return.
It’s also argued that, fossil fuels being literally the most energy-dense (per unit of infrastructure applied) energy source in the solar system, our societal complexity is likely to decrease in the future as the hard-to-get deposits are themselves drawn down and there is no longer any way to keep drawing on the sheer levels of energy per capita that the wealthier nations have become accustomed to over the last 200 years.
I recommend Tom Murphy’s “Do the Math” blog for a frank discussion of energy densities and quantities, and of the inability of growth, or likely even stasis, in energy use to continue.
At any level of technology. Where else in the solar system do you have that much highly reduced matter next to so much highly oxidized gas with a thin layer of rock between them, and something as simple as a drill and a furnace needed to extract the coal energy and a little fractional distillation to get at the oil? Everything else is more difficult.
“Unit of infrastructure” ~= amount of energy and effort and capital needed to get at it.
I am not going to believe that. Both because at the caveman level the fossil fuels are pretty much useless and because your imagination with respect to future technology seems severely limited.
“Unit of infrastructure” ~= amount of energy and effort and capital needed to get at it.
This entirely depends on the technology level. And how are you applying concepts like “energy-dense” to, say, sunlight or geothermal?
how are you applying concepts like “energy-dense” to, say, sunlight or geothermal?
Energy density refers only to fuels and energy storage media and doesn’t have much to do with grid-scale investment, although it’s important for things like transport where you have to move your power source along with you. (Short version: hydrocarbons beat everything else, although batteries are getting better.)
The usual framework for comparing things like solar or geothermal energy to fossil fuels, from a development or policy standpoint, is energy return on investment. (Short version: coal beats everything but hydroelectric, but nuclear and renewables are competitive with oil and gas. Also, ethanol and biodiesel suck.)
at the caveman level the fossil fuels are pretty much useless
Coal was used as fuel before the Roman Empire. It didn’t lead to an industrial revolution until someone figured out a way to turn it into mechanical energy substituting for human labor, instead of just a heat source, in a society where that could be made profitable due to a scarcity of labor. That was the easiest, surface-exposed deposits, yes, but you hardly need any infrastructure at all to extract the energy, and even mechanical energy extraction just needs a boiler and some pistons and valves. This was also true of peat in what is now the Netherlands during the early second millennium.
your imagination with respect to future technology seems severely limited.
…
This entirely depends on the technology level.
What does ‘technology level’ even mean? There are just things people have figured out how to do and things people haven’t. And technology is not energy, and you cannot just substitute technology for easy energy; it is not a question of technology level but instead the energy gradients that can be fed into technology.
And how are you applying concepts like “energy-dense” to, say, sunlight or geothermal?
Mostly in terms of true costs and capital (not just dollars) needed to access it, combined with how much you can concentrate the energy at the point of extraction infrastructure. For coal or oil you can get fantastic wattages through small devices. For solar you can get high wattages per square meter in direct sunlight, which you don’t get on much of the earth’s surface for long and you never get for more than a few hours at a time. Incredibly useful, letting you run information technology and some lights at night and modest food refrigeration off a personal footprint, but not providing the constant torrent of cheap energy we have grown accustomed to. Geothermal energy flux is often high in particular areas where it makes great sense (imagine Iceland as a future industrial powerhouse due to all that cheap thermal energy gradient), over most of the earth not so much.
Sunlight is probably our best bet for large chunks of the future of technological civilization over most of the earth’s surface. It is still not dense. It’s still damn useful.
but you hardly need any infrastructure at all to extract the energy
You don’t need ANY infrastructure to gather dry sticks in the forest and burn them. Guess that makes the energy density per unit of infrastructure infinite, then…
it is not a question of technology level but instead the energy gradients that can be fed into technology.
There are lots of energy gradients around. Imagine technology that allows you to sink a borehole into the mantle—that’s a nice energy gradient there, isn’t it? Tides provide the energy gradient of megatons of ocean water moving. Or, let’s say, technology provides a cheap and effective fusion reactor—what’s the energy gradient there?
You’ve been reading too much environmentalist propaganda which loves to extrapolate trends far into the future while making the hidden assumption that the level of technology will stay the same forever and ever.
You don’t need ANY infrastructure to gather dry sticks in the forest and burn them. Guess that makes the energy density per unit of infrastructure infinite, then...
Pretty much, until you need to invest in the societal costs to replant and regrow woods after you have cleared them, or you want more concentrated energy at which point you use a different source, or unless you value your time.
There are lots of energy gradients around
Yes. Some are easier to capture than others and some are denser than others. Fusion would be a great energy gradient if you could run it at rates massively exceeding those in stars, but everything I’ve seen suggests that the technology required for such a thing is either not forthcoming or, if it is, so complicated that it’s probably not worth the effort.
the hidden assumption that the level of technology will stay the same forever and ever.
It won’t, but there are some things that technology doesn’t change. To use the nuclear example, you always need to perform the same chemical and other steps on nuclear fuels, which requires an extremely complicated underlying infrastructure and supply chain and concentrated capital. Technology isn’t a generic term for things-that-make-everything-easier; some things can be done and some things can’t, and other things can be done but aren’t worth the effort, and we will see what some of those boundaries are over time. I hope to at least make it to 2060, so I bet I will get to see the outcome of some of the experiments being performed!
Solar energy used to halve in price every 7 years. In the last 7 it more than halved. Battery performance also has a nice exponential improvement curve.
Various forms of solar are probably one of our better bets, though I’m not convinced that large chunks of the recent gains don’t come from massive effective subsidy from China, and eventually the cost of the materials themselves could become insignificant compared to complexity and maintenance and end-of-life recycling costs, which are not likely to decrease much. Though battery performance… I haven’t seen anything about it that even looks vaguely exponential.
To spell out a few things: the price of lithium batteries is decreasing. Since they are the most energy-dense batteries, this is great for the cost of electric cars, and maybe for the introduction of new portable devices, but it isn’t relevant to much else. In particular, performance is not improving. Moreover, there is no reason to expect them to ever be cheaper than existing less dense batteries. In particular, there is no reason to expect that the cost of storing electricity in batteries will ever be cheaper than the cost of the electricity, so they are worthless for smoothing out erratic sources of power, like wind.
though I’m not convinced that large chunks of the recent gains
I get the impression that most of the “recent gains” consist of forcing the utilities to take it and either subsidizing the price difference or passing the cost on to the customer. At least, the parties involved act like they believe this while attempting to deny it.
Various forms of solar are probably one of our better bets, though I’m not convinced that large chunks of the recent gains don’t come from massive effective subsidy from China, and eventually the cost of the materials themselves could become insignificant compared to complexity and maintenance and end-of-life recycling costs, which are not likely to decrease much.
But even if some of the cost is subsidies and the real speed is only halving in price every 7 years that’s still good enough.
I don’t see why there shouldn’t be any way to optimise end of life costs and maintenance.
Yes. No nuclear power plant has ever been built without massive subsidies and insurance guarantees; it only works right now because we externalize the costs of dealing with its waste to the future rather than actually paying them, and nuclear power is fantastically more complicated and prone to drastically expensive failures than simply burning things. Concentrating the fuel to the point that it is useful is an incredible chore as well.
Are you claiming nuclear energy has higher cost in $ per joule than burning fossil fuels? If so, can you back it up? If true, how do you know it’s going to remain true in the future? What happens when we reach a level of technology in which energy production is completely automatic? What about nuclear fusion?
The only reason the costs per joule in dollars are near each other (true factor of about 1.5-3x the cost in dollars between nuclear and the coal everyone knows and loves, according to the EIA) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have been for the most part completely un-dealt-with in the hopes that something will come along, nuclear power plants are almost literally uninsurable to sufficient levels in the market such that governments have to guarantee them substandard insurance by legal fiat (this is also true of very large hydroelectric dams which are probably also a very bad idea), and power plants that were supposed to be retired long ago have had their lifetimes extended threefold by regulators who don’t want to incur the cost of their planned replacements and refurbishments. And the whole thing was rushed forwards in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.
If true, how do you know it’s going to remain true in the future?
How do you know it won’t? More to the point, it’s not a question of technology. It’s a question of how much you have to concentrate rare radionuclides in expensive gas centrifuge equipment and how heavily you have to contain the reaction and how long you have to isolate the resultant stuff. Technology does not trump thermodynamics and complexity and fragility.
What happens when we reach a level of technology in which energy production is completely automatic?
What does this mean and why is it relevant?
What about nuclear fusion?
Near as I can tell, all the research on it so far has shown that it is indeed possible without star-style gravitational confinement, very difficult, and completely uneconomic. We have all the materials you need to fuse readily available, if it were easy to do it economically we would’ve after fifty years of work. It should be noted that the average energy output of the sun itself is about 1⁄3 of a watt per cubic meter—fusion is trying to produce conditions and reactions of the sort you don’t even see in the largest stars in the universe. (And don’t start talking about helium three on the moon, I point to a throwaway line in http://physics.ucsd.edu/do-the-math/2011/10/stranded-resources/ regarding that pipe dream.)
Is it possible I’m wrong? Yes. But literally any future other than a future of rather less (But not zero!) concentrated energy available to humanity requires some deus ex machina to swoop down upon us. Should we really bank on that?
The only reason the costs per joule in dollars are near each other (true factor of about 1.5-3x the cost in dollars between nuclear and the coal everyone knows and loves, according to the EIA) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have been for the most part completely un-dealt-with in the hopes that something will come along
That’s quite an unfair comparison. The way we deal with coal waste kills tens of thousands or even hundreds of thousands of people per year. The way we deal with nuclear waste might cost more money, but it doesn’t kill as many people.
Simply dumping all nuclear waste in the ocean would probably be a safer way of disposing of waste than the way we deal with coal.
Even tunnels created by coal mining can collapse and do damage.
Coal isn’t a picnic either and I have my own rants about it too. But dealing with coal waste (safely or unsafely) is a question of trucking it, not running complicated chemical and isotopic purification or locking it up so thoroughly.
And the whole thing was rushed forwards in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.
The obvious explanation of the timing is Three Mile Island and Chernobyl.
Do you believe that Japan and Germany built nuclear plants for the purpose of eventually building weapons?
Japan and Germany are interesting cases, both for the same reason: rich nations with little or declining fossil fuels. Germany’s buildout of nuclear power corresponds to the timing of the beginning of the decline in the production of high-quality coal in that country, and Japan has no fossil fuels of its own so nuclear was far more competitive. With plentiful fossil fuels around nobody does nuclear since it’s harder, though even the nations which use nuclear invariably have quite a lot of fossil fuel use which I would wager ‘subsidizes’ it.
What do you mean by “competitive”? Shipping coal adds very little to its cost, so the economic calculation is hardly different for countries that have it and countries that don’t. Perhaps national governments view domestic industries very differently than economists, but you haven’t said how to take this into account. I think Japan explicitly invoked “self-sufficiency” in its decision, perhaps meaning concerns about wartime.
Fifty years of waste have been for the most part completely un-dealt-with in the hopes that something will come along...
What do you mean by “un-dealt-with”? What cost do you think it will incur in the future?
...nuclear power plants are almost literally uninsurable to sufficient levels in the market such that governments have to guarantee them substandard insurance by legal fiat...
Interesting point. However the correct cost of insurance has to take into account probability of various failures and I see no such probability assessment in the article. Also, what about Thorium power?
And the whole thing was rushed forwards in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.
Are you sure the problem is with lack of desire for nuclear weapons rather than with anti-nuclear paranoia?
If true, how do you know it’s going to remain true in the future?
More to the point, it’s not a question of technology. It’s a question of how much you have to concentrate rare radionuclides in expensive gas centrifuge equipment and how heavily you have to contain the reaction and how long you have to isolate the resultant stuff. Technology does not trump thermodynamics and complexity and fragility.
But the ratio between the physical requisites and dollars (i.e. labor) depends on technology very strongly.
What happens when we reach a level of technology in which energy production is completely automatic?
What does this mean and why is it relevant?
At some point we are likely to have sufficient automation so that little human labor is required for most things, including energy production. In these conditions, energy (and most other things) will cost much less than today, with fossil fuels or without them.
What about nuclear fusion?
We have all the materials you need to fuse readily available, if it were easy to do it economically we would’ve after fifty years of work.
Obviously it’s not easy, but it doesn’t mean it’s impossible. We have ITER.
..fusion is trying to produce conditions and reactions of the sort you don’t even see in the largest stars in the universe...
So what? We already can create temperatures lower than anywhere in the universe and nuclear species that don’t exist anywhere in the universe, why not better fusion conditions?
...literally any future other than a future of rather less (But not zero!) concentrated energy available to humanity requires some deus ex machina to swoop down upon us.
I don’t think scientific and technological progress is “deus ex machina”. Given historical record and known physical limits, it is expected there is a lot of progress still waiting to happen. Imagine the energy per capita available to a civilization that builds Dyson spheres.
Mostly sitting around full of transuranic elements with half-lives in the tens of thousands of years in facilities that were meant to be quite temporary, without much in the way of functional or economically competitive breeder reactors even where they have been tried. They will eventually incur one of three costs: reprocessing, geological storage, or release.
what about Thorium power?
Near as I can tell it’s a way to boost the amount of fertile fuel for breeder reactors by about a factor of five. The technology is similar, with advantages and disadvantages. No matter what you have to run refined material through very complicated and capital-intensive and energy-intensive things, keep things contained, and dispose of waste.
These fuel cycles do work and they do produce energy, and if done right some technologies of the suite promoted for the purpose might reduce the waste quite a bit. My gripe is that they only work well (not to mention safely) in stable civilizations with lots of capital and concentrated wealth to put towards them that isn’t being applied to more basic infrastructure. Given the vagaries of history moving wealth and power around, and the massive cheap energy and wealth subsidy from fossil fuels that will go away, I’m not convinced that they can be run for long periods of time at a level that can compensate for the torrents of cheap wealth you get from burning the black rocks. I wouldn’t be terribly surprised at some nuclear power plants being around in a few thousand years, but I would be surprised at them providing anything like as much per capita as fossil fuels do now, due to the complexity and wealth-concentration issues.
sufficient automation… energy (and most other things) will cost much less than today, with fossil fuels or without them.
I don’t understand how automation changes the energy, material, or complexity costs (think supply chains or fuel flows) associated with a technology.
We have ITER.
Yes, and fusion research is fascinating. But consider: understanding of nuclear physics has been pretty well constant for decades while more and more money goes into more and more expensive fusion facilities, whereas fission power (which does work, I’m not disputing that, just the cornucopian claims about it) was taken advantage of pretty much as soon as it was understood. That suggests to me that the sheer difficulty of fusion is such that the sort of technology that makes it possible is likely to be completely uneconomic. Technology is not an all-powerful force; it is just an accumulation of knowledge about how to make things that are possible happen. Some things will turn out to not be possible, or require too much effort to be worthwhile.
Imagine the energy per capita available to a civilization that builds Dyson spheres.
Except that when we look out into the universe we don’t see Dyson spheres, or evidence of replicators from elsewhere having passed our way, and we would be able to see Dyson spheres from quite a distance. It doesn’t happen. I’ve never understood why so few people look at the Fermi paradox and consider the possibility that it doesn’t mean we are a special snowflake or that we are doomed, but instead that intelligent life just doesn’t have a grand destiny among the stars and never has.
...They will eventually incur one of three costs: reprocessing, geological storage, or release.
How much does it cost to maintain the current facilities? By what factor does it make nuclear energy more expensive?
I don’t understand how automation changes the energy, material, or complexity costs (think supply chains or fuel flows) associated with a technology.
The most important component of economic cost is human labor. We have plenty of energy and materials in the universe left. “complexity” is not a limited resource so I don’t understand what “complexity cost” is.
Some things will turn out to not be possible...
Yes, but I think that current technology is very far from the limits of the possible.
Except that when we look out into the universe we don’t see Dyson spheres, or evidence of replicators from elsewhere having passed our way, and we would be able to see Dyson spheres from quite a distance.
Sure, because we are the only intelligent life in the universe. What’s so surprising about that?
To anyone out there embedded in a corporate environment, any tips and tricks to getting ahead? I’m a developer embedded within the business part of a tech organization. I’ve only been there a little while though. I’m wondering how I can foster medium-term career growth (and shorter-term, optimize performance reviews).
Of course “Do your job and do it well” tops the list, but I wouldn’t be asking here if I wanted the advice I could read in WSJ.
“Do your job and do it well”

most emphatically does not top the list. Certainly you have to do an adequate job, but your success in a corporate environment depends on your interpersonal skills more than on anything else. You depend on other people to get noticed and promoted, so you need to be good at playing the game. If you haven’t taken a Dale Carnegie course or similar, do so. Toastmasters is useful, too. In general, if you learn to project a bit more status and competence than you think you merit, people will likely go along with it.
Just to give an example, I have seen a few competent but unexceptional engineers become CEOs and CTOs over a few short years in a growing company, while other, better engineers never advanced beyond a team lead, if that.
If you are an above average engineer/programmer etc. but not a natural at playing politics, consider exploring your own projects. If you haven’t read Patrick McKenzie’s blog about it, do so. On the other hand, if striking out on your own is not your dream, and you already have enough drive, social skills and charisma to get noticed, you are not likely to benefit from whatever people on this site can tell you.
Perhaps we could be more specific about the social / political skills. I am probably not good at these skills, but here are a few things I have noticed:
Some of your colleagues have a connection between them unrelated to the work, usually preceding it. (Former classmates. Relatives; not necessarily having the same surname. Dating each other. Dating the other person’s family member. Members of the same religious group. Etc.) This can be a strong emotional bond which may override their judgement of the other person’s competence. So for example, if one of them is your superior, and the other is your incompetent colleague you have to cooperate with, that’s a dangerous situation, and you may not even be aware of it. -- I wish I knew the recommended solution. My approach is to pay attention to company gossip, and to be careful around people who are clearly incompetent and yet not fired. And then I try to take roles where I don’t need their outputs as inputs for my work (which can be difficult, because incompetent people are very likely to be in positions where they don’t deliver the final product, as if either they or the company were aware of the situation on some level).
If someone complains about everything, that is a red flag; this person probably causes the problems, or at least contributes to them. On the other hand, if someone says everything is great and seems like they mean it, that’s possibly also a red flag; it could be a person whose mistakes have to be fixed by someone else (e.g. because of the reasons mentioned in the previous paragraph), and that someone else could become you.
An extra red flag is a person who makes a lot of decisions and yet refuses to provide any of them in written form. (Here, “written form” includes a company e-mail, or generally anything that you could later show to a third party. For example, in the case when the person insists on something really stupid, things go horribly wrong, and then suddenly the person says it was actually your idea.) One nice trick is to send them an e-mail containing the decisions they gave you, saying something like “here is the summary of our meeting; please confirm that it’s correct, or correct me if it’s not”.
Sometimes a person becomes an informational bottleneck between two parts of the company. That could happen naturally, or could be a strategy on their part. In such a case, try to find some informal parallel channels to the other part of the graph. Do it especially if that person discourages you from doing so, for example by saying the other part is stupid and blames your part for all its troubles. (Guess what: he is probably telling them the same thing about your part. So now he is the only person either part trusts to fight for its best interests against the other, stupid part.)
Okay, this was all the dark side. On the light side, being nice to people and making small talk with them is generally useful. Remember facts about them; make notes if necessary (not in front of them). Make sure you connect with everyone at least once in a while, instead of staying within your small circle of comfort.
I’d beware conflating “interpersonal skills” with “playing politics.” For CEO at least (and probably CTO as well), there are other important factors in job performance than raw engineering talent. The subtext of your comment is that the companies you mention were somehow duped into promoting these bad engineers to executive roles, but they might have just decided that their CEO/CTO needed to be good at managing or recruiting or negotiating, and the star engineer team lead didn’t have those skills.
Second, I think that the “playing politics” part is true at some organizations but not at others. Perhaps this is an instance of All Debates are Bravery Debates.
My model is something like: having passable interpersonal/communication skills is pretty much a no-brainer, but beyond that there are firms where it just doesn’t make that much of a difference, because they’re sufficiently good at figuring out who actually deserves credit for what that they can select harder for engineering ability than for politics. However, there are other organizations where this is definitely not the case.
I’d beware conflating “interpersonal skills” with “playing politics.”
Certainly there is a spectrum there.
The subtext of your comment is that the companies you mention were somehow duped into promoting these bad engineers to executive roles
I did not mean it that way in general, but in one particular case both ran the company into the ground, one by picking a wrong (dying) market, the other by picking a poor acquisition target (the code base hiding behind a flashy facade sucked). I am not claiming that if the company promoted someone else they would have done a better job.
Second, I think that the “playing politics” part is true at some organizations but not at others.
If we define “playing politics” as “using interpersonal relationships to one’s own advantage and others’ detriment”, then I have yet to see a company with more than a dozen employees where this wasn’t commonplace.
If we define “interpersonal skills” as “the art of presenting oneself in the best possible light”, then some people are naturally more skilled at it than others and techies rarely top the list.
As for trusting the management to accurately figure out who actually deserves credit, I am not as optimistic. Dilbert workplaces are contagious and so very common. I’m glad that you managed to avoid getting stuck in one.
Yes, definitely agree that politicians can dupe people into hiring them. Just wanted to raise the point that it’s very workplace-dependent. The takeaway is probably “investigate your own corporate environment and figure out whether doing your job well is actually rewarded, because it may not be”.
Dilbert workplaces are contagious and so very common.
I have a working hypothesis that it is, to a large degree, a function of size. Pretty much all huge companies are Dilbertian, very few tiny ones are. It’s more complicated than just that because in large companies people often manage to create small semi-isolated islands or enclaves with culture different from the surroundings, but I think the general rule that the concentration of PHBs is correlated with the company size holds.
I worked mostly for small companies, and Dilbert resonates with me strongly.
It probably depends on power differences and communication taboos, which in turn correlate with the company size. In a large company, having a power structure is almost unavoidable; but you can also have a dictator making stupid decisions in a small company.
Just to give an example, I have seen a few competent but unexceptional engineers become CEOs and CTOs over a few short years in a growing company, while other, better engineers never advanced beyond a team lead, if that.
Being a manager is a radically different job from being an engineer. In fact, I think that (generalization warning!) good engineers make bad managers. Different attitudes, different personalities, different skill sets.
One particularly simple and easy-to-follow tip, to add to the advice about Toastmasters and taking leadership-type courses, is that you should also signal your interest in these things to those around you. Some of the other advice here can take time and be hard to achieve; you don’t just flip a switch and become charismatic or a great public speaker. So in the meantime, while you work on all those awesome skills, don’t forget to simply let others know about your drive, ambitions, and competence.
This is easier to pull off than the fake-it-till-you-make-it trick. It’s more about show-your-ambition-till-you-make-it. It’s easy to do because you don’t have to fake anything. It reminds me of this seduction advice I read from Mystery’s first book that went something along the lines of, you don’t have to be already rich to seduce somebody, you just have to let them know you have ambition and desire to one day be rich/successful.
I recently read this piece on meritocracy; it rang quite true to me from personal experience. I work with a guy of similar ability to me, but I think I would beat him on most technical and simple people skills. However, he still gets ahead by being more ambitious and upfront than I am, and while he’s only a bit more qualified on paper, he uses it to far better effect. (No bitterness; he’s still a good guy to work with and I know it’s up to me to be better. Also, I’m in kind of mid-level finance rather than coding.)
I think that article is a bit bitter. It probably applies to some organizations, but I think most places at least manage to consider competence as a substantial part of the mix in promotion decisions.
Which is not to say signaling ambition isn’t valuable (I absolutely believe it is). Just that the article is bitter.
Here’s an idea for enterprising web-devs with a lot more free time than me: an online service that manages a person’s ongoing education with contemporary project management tools.
Once signed up to this service, I would like to be able to define educational projects with tasks, milestones, deliverables, etc., against which I can record and monitor my progress. If I specify dependencies and priorities, it can carry out whizzy critical path analysis and tell me what I should be working on and in what order. It can send me encouraging/harassing emails if I don’t update it regularly.
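To make the “tell me what to work on” part concrete, here is a toy sketch of the ordering logic such a service might start from. The task names and priorities are invented, and this is not full critical-path analysis, just the simplest dependency-respecting ordering, which Python’s standard library can already do:

```python
# Toy sketch only: order study tasks so prerequisites come first,
# breaking ties by a user-assigned priority. Task names and priorities
# are invented for illustration; a real service would also track
# durations, deadlines, and proper critical-path slack.
from graphlib import TopologicalSorter  # Python 3.9+

tasks = {
    # name: (priority, prerequisites)
    "read chapter 1": (3, []),
    "read chapter 2": (3, ["read chapter 1"]),
    "problem set 1":  (5, ["read chapter 1"]),
    "practice exam":  (4, ["read chapter 2", "problem set 1"]),
}

sorter = TopologicalSorter({name: set(deps) for name, (_, deps) in tasks.items()})
sorter.prepare()

study_order = []
while sorter.is_active():
    # Among tasks whose prerequisites are done, do the highest-priority one first.
    for task in sorted(sorter.get_ready(), key=lambda t: -tasks[t][0]):
        study_order.append(task)
        sorter.done(task)

print(study_order)
# ['read chapter 1', 'problem set 1', 'read chapter 2', 'practice exam']
```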
Some use cases:
I have enrolled in a formal course of study such as an undergrad degree. I can specify my subjects, texts, deadlines, tests and the like. It will tell me what I should be studying in what order, what areas I’m neglecting, and what I really need to get done before the coming weekend.
I have recently started a new job, and have a package of technologies and skills to learn. Some are more important than others, or have much longer time horizons. If I have x hours a week to develop these skills, it will tell me what I should be doing with those x hours.
Conversely, I am an employer or educator (or world-saving organisation) who wishes oversight of another person’s study. I can assign them a prefabricated syllabus and observe their progress.
Some things that might fall out of a system like this once the infrastructure is in place:
A community whose members can offer each other high-context support and advice
A lot of useful and interesting data on effective learning routes through various subjects, perhaps powering a recommendation service
I imagine there are enough autodidacts and students on LessWrong to establish a preliminary user base. I for one would happily pay for a service like this.
Ouch, that made my mind come up with a different startup idea, Relationship Management software. Basically it would be a website where you can post updates about your relationship every day, like “Last night we argued for 30 minutes” or “I feel that he’s unusually emotionally distant” or something like that. You would also input your partner’s astrological sign, and so on. And the website would give you an overall prognosis and some sort of bullshit psychological advice, like “Try to be more conscious of your needs in the relationship” or “At this point it’s likely that he’s cheating on you”. And it would show tons of ads for related products and services. I think some people would love it!
For a different sort of person, any sort of quantified self about relationships would be interesting. (I heard that an app exists where you record a happy face or a sad face after every time talking to a long distance partner, and it doesn’t give you any advice. Unfortunately, I can’t remember the name or where I heard of it.)
For a minimal product, perhaps just start with the dependencies and priorities side of things? That seems to be the core of such a product, and the rest is dressing it up for usability.
Does anyone have good resources on hypnosis, especially self-hypnosis? I’m mostly looking for how-tos but effectiveness research and theoretical grounding are also welcome.
“Monsters & Magical Sticks: There’s No Such Thing As Hypnosis?” is a fine book for explaining what hypnosis is.
The recurring punchline is that there’s no Hypnosis but there are hypnotic phenomena. Being a good hypnotist is basically about using a bunch of hypnotic phenomena to go where you want to go.
Framing an interaction is very important. A hypnotherapist I know says that his hypnosis sessions for quitting smoking begin with the phone call.
The patient calls to make an appointment. He answers and asks whether the person has made the decision to quit smoking. If the patient says “no”, he tells the patient to call again once they have made the decision.
Hypnotherapists do a lot of stuff like this.
I am currently teaching myself basic Spanish. At the moment, I’m using my library’s (highly limited) resources to refresh my memory of Spanish learned in high school and college. However, I know I won’t go far without practice. To this end, I’d like to find a conversation partner.
Does anyone have recommendations for resources for language learners? Particularly resources that enable conversation (written or spoken) so learners can improve and actually use what they are learning. The resource wouldn’t have to be dedicated solely to Spanish learning; eventually, I want to learn other languages as well (such as German and French).
The ROI of learning a foreign language is low, unless it is English. But if you must, I would say the next best thing to immersive instruction is to watch Spanish Hulu as a learning aid. You’d get real conversations at conversational speeds.
So, after gwern pointed out that there is a transcript and I read it, I made a back-of-the-envelope calculation.
Assumptions: According to Wikipedia, people with a bachelor’s degree or higher make $56078 per year, so about $27 per hour. Learning German increases yearly income by 4% and takes about 750 class hours, according to the Foreign Service Institute. Learning Spanish increases income by 1.4% and takes 600 class hours. If we assume that one class hour costs $11.25 (by glancing at various prices posted on different sites), we can make a calculation.
Assuming the language is learnt instead of working, that the foregone hours have no impact on later earnings, and that the foregone hours are valued at the average salary, the student incurs an opportunity cost in addition to the pure class cost. Ignoring all other effects, learning German costs $28657 with a return of $2243 p.a., and learning Spanish costs $22926 with a return of $841 p.a. This works out to 7.8% and 3.6% on initial investment respectively.
So after 13 years learning German pays off; after 28 years learning Spanish pays off. Assuming the language is learnt at a young age, at least learning German can be worthwhile. More benign assumptions, such as learning outside of class with some kind of program like Duolingo, will increase the return further, making learning even more worthwhile.
Of course, I did not consider learning something else in those hundreds of hours that could have an even greater effect on income, but for high-income earners language learning is a very plausible way to increase their income. I assume this goes especially for people who have a more obvious use for an additional language, like translators or investors.
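For anyone who wants to poke at the assumptions, here is the same back-of-the-envelope arithmetic as a small Python sketch (salary, premiums, class hours, and class price are the figures above; note that a 1.4% premium on $56078 comes to about $785/yr rather than the $841 quoted, so treat the exact Spanish numbers as approximate):

```python
# Back-of-the-envelope ROI of language learning, using the assumptions above.
SALARY = 56078               # $/year, bachelor's degree or higher (Wikipedia figure)
HOURLY_WAGE = SALARY / 2080  # ~$27/hour
CLASS_PRICE = 11.25          # $ per class hour (rough figure)

languages = {
    # name: (income premium, class hours needed to learn)
    "German":  (0.040, 750),
    "Spanish": (0.014, 600),
}

for name, (premium, hours) in languages.items():
    cost = hours * (HOURLY_WAGE + CLASS_PRICE)  # forgone wages + class fees
    annual_return = premium * SALARY            # extra income per year
    print(f"{name}: cost ${cost:,.0f}, return ${annual_return:,.0f}/yr, "
          f"ROI {annual_return / cost:.1%}, payback {cost / annual_return:.0f} years")
```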
You’re assuming that the correlation is purely causal and none of the increased income correlating with language learning is due to confounds; this is never true and so your ROI is going to be overstated.
This works out to 7.8% and 3.6% on initial investment respectively.
Most people have discount rates >4%, which excludes the latter. Throw in some sort of penalty (50% would not be amiss, given how many correlations crash and burn when treated as causal), and that gets rid of the former.
Language learning for Americans just doesn’t work out unless one has a special reason.
Language learning for Americans just doesn’t work out unless one has a special reason.
It would be nice to know what those special reasons are. The paper states that people in managerial positions get substantially higher relative returns from learning a foreign language. That would be a special reason.
Maybe the returns are much higher for low-income earners. That question is uninteresting for the average LW user, but still. I further wonder what the return on learning a language will be in the future.
As an aside, I am surprised how hostile US Americans can be when it is suggested to learn another language.
As an aside, I am surprised how hostile US Americans can be when it is suggested to learn another language.
Personally, I find most suggestions and discussion of Americans learning other languages to be highly irritating. They have not considered all of the relevant factors (continent sized country with 310m+ people speaking English, another 500m+ English-speakers worldwide, standard language of all aviation / commerce / diplomacy / science / technology, & many other skills to learn with extremely high returns like programming), don’t seem to care even when it is pointed out that the measured returns are razor-thin and near-zero and the true returns plausibly negative, and it serves as an excuse for classism, anti-Americanism, mood affiliation with cosmopolitanism/liberalism, and all-around snootiness.
It doesn’t take too many encounters with someone who is convinced that learning another language is a good use of time which will make one wealthier, morally superior, and more open-minded to start to lose one’s patience and become more than a little hostile.
It’s a bit like people who justify video-gaming with respect to terrible studies about irrelevant cognitive benefits (FPSes make me faster at reaction-time? how useful! not) - I want to grab them, shake them a little bit, and say ‘look at yourself from the outside! can’t you see that you’re transparently grabbing at flimsy justifications for something which you do for completely different reasons? You didn’t start playing Halo because you read and meta-analyzed a bunch of psych studies and decided that the lifetime reduction in risk from a 3% fall in simple reaction-time was worth 20 hours a week of work. And we both know you didn’t learn French because it’s going to pay off in future salary increases—you learned it because you had to learn something in high school and French has better cultural prejudices associated with it than Spanish!’
True. I come at this from the other side, having grown up in Germany and having met a lot of foreign knowledge workers unwilling to learn even a lick of German. I actually know several people who are unable to say “No, thank you” or “One beer, please” and unwilling to learn. Personally I see this as highly disrespectful of the host country. When I state this opinion, the unwillingness is then justified with the international status of English.
Anyhow, we are drifting off into politics, so I’d like to end this debate at this point, with your understanding. I hope the downvote is not from you, and even more so that it is not because of that line alone.
Yes, living in a foreign country is a substantially different proposition (and I’d guess that the correlated increases in income would be much higher). But comparing to Germany highlights part of why it’s such a bad idea for Americans: the population of America alone is 3.78x that of Germany, never mind the entire Anglophone world.
Disclaimer: I won’t listen to the podcast, because I am boycotting any medium that is not text.
Language learning may have extremely low ROI in general but extremely high ROI in special cases. E.g., I would not be surprised to find that learning the language of the foreign country one lives in increases subjective wellbeing. Or if people want to work as translators. Or if they are investors specialising in a region that doesn’t speak English as its main language.
This almost seems like a fallacy. I might call it “homogeneity bias” or “mistaking the average for the whole”, only to find out that it is already known under a different name and well documented.
Disclaimer: I won’t listen to the podcast, because I am boycotting any medium that is not text.
Good news! Freakonomics is, along with EconTalk and patio11, one of the rare podcasts which (if you had clicked through, you would see) provides transcripts for most or all of their episodes.
The earnings of college graduates who speak a foreign language are higher than the earnings of those who don’t. Our estimates of the impact of bilingualism on earnings are relatively small (2%-3%) and compare unfavorably with recent estimates of the returns to one extra year of general schooling (8%-14%), which may help explain current second-language investment decisions of monolingual English-speakers in the United States. The returns may be higher for individuals in management and the business services occupations, and they seem to be higher ex post for individuals who learn a second language later in life. Note that the results in this paper are estimates of the gross returns to learning a second language. Individual decisions on whether to study a second language will depend on (among other things) the opportunity cost of the time devoted to learning it and its nonmonetary rewards.
Thanks for the link. I hadn’t actually considered language learning in an ROI-fashion, but it’s obviously something I should think about before making heavy investments.
I still think it's worth the time, since my field often involves dealing with non-English-speaking parties. Though I have no distinct need for bilingualism at the moment, it will make me more hireable. However, I do need to weigh my time learning Spanish against, say, the gains of spending that same time learning programming languages.
If you are willing/able, the best way is to go to a Spanish school in Mexico or Central America and live with a host family for a month or two. I learned more in two months doing that than in my first four university classes combined. This probably doesn't fall under “teaching yourself,” but if you are serious, the other options can't even touch the ROI of an immersive experience in terms of time and money spent per amount of Spanish acquired.
Fluenz is a great computer-based program, but it's expensive. I used Rosetta Stone a bit; Fluenz is way better.
Pimsleur audio tapes for car rides or an MP3 player.
Duolingo is free, but isn’t for active conversation.
Look on Meetup.com for a Spanish conversation meetup.
The last two links are about principles/suggestions. I agree with most of them. This is my advice: say anything that you can, whenever you can. Embarrassment is often the biggest obstacle. When you are beginning with conversations, say anything you can, even if it is a single word, grammatically incorrect, or irrelevant.
With regard to what niceguyanon said about the low ROI of languages aside from English, I think there are social capital benefits, self-confidence benefits, and cognitive functioning benefits that are valuable. Not to mention travel benefits: Spanish makes travel in many countries easy.
Thank you very much for those links! They’re very helpful.
As I mentioned in reply to niceguyanon, my field is one where language acquisition has a higher value than it might for other fields. And, I’ll admit, I do feel that the confidence and cultural benefits are worth the investment, for me at least. Expression and communication are important to my work. Becoming a more efficient communicator means making myself more valuable.
I know that audio tapes and text books are not how I’m going to learn a language. Like many of my peers, I spent two years in classes repeating over and over “tener, tener… tengo, tengo… tengas, tengas....” and gained nothing out of it except how to say “Me gusto chocolate.” I know how language learning doesn’t work, if nothing else.
You could look into volunteering at a charity that serves Hispanics, or find an ESL conversation group and see whether they would be interested in spending time speaking Spanish with you.
Recommendations for good collections of common Deep Wisdom? General or situation specific would be helpful (e.g. all the different standard advice you get while picking your college major, or going through a tough break up).
I am curious about whether Borderline Personality Disorder is overrepresented on LessWrong compared to the general population.
Is Wikipedia's article on BPD a good description of your personality at any time in the past 5 years? For the sake of this poll, ignore the specific “F60.31 Borderline type” minimum criteria.
I could repeat this poll in a venue where the people are similarly prone to medical student syndrome, but not as prone to filling some kind of void with rationality or other epiphanies. That would provide a baseline for comparison. But I don’t yet know where exactly I would find such a venue.
You can’t detect whether a systematic bias in the sampling method exists by looking at the results.
If you have a prior, you can.
In a slightly more unrolled manner, if the results you are getting are inconsistent with your ideas of how the world works, one hypothesis that you should update is that your ideas about the world are wrong. But another hypothesis that you should also update is that your sampling method is wrong, e.g. by having a systematic bias.
Sure you can, in principle. When you have measured covariates, you can compare their sampled distribution to that of the population of interest. Find enough of a difference (modulo multiple comparisons, significance, researcher degrees of freedom, etc.) and you’ve detected bias. Ruling out systematic bias using your observations alone is much more difficult.
Even in this case, where we don’t have covariates, there are some patterns in the ordinal data (the concept of ancillary statistics might be helpful in coming up with some of these) that would be extremely unlikely under unbiased sampling.
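To make the covariate idea concrete, here is a minimal sketch (Python, with made-up numbers) of the kind of check described above: compare the respondents' distribution over some measured covariate, say age bracket, against a known baseline for the population of interest using a chi-square goodness-of-fit test. A small p-value is evidence of biased sampling; ruling bias out, as noted, is much harder.

```python
# Hypothetical sketch: test whether a poll's age distribution matches a baseline.
from scipy.stats import chisquare

sample_counts = [40, 35, 15, 10]              # respondents per age bracket (made up)
population_props = [0.25, 0.30, 0.25, 0.20]   # baseline proportions (made up)

total = sum(sample_counts)
expected = [p * total for p in population_props]

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")  # small p suggests the sample is biased
```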
When you have measured covariates, you can compare their sampled distribution to that of the population of interest.
That means that you need more data. Having a standard against which to train your model means that you need more than just the results of your measurement.
I was just contesting your statement as a universal one. For this poll, I agree you can't really pursue the covariate strategy. However, I think you're overstating the challenge of getting more data and figuring out what to do with it.
For example, measuring BPD status is difficult. You can do it by conducting a psychological examination of your subjects (costly but accurate), you can do it by asking subjects to self-report on a four-level Likert-ish scale (cheap but inaccurate), or you could do countless other things along this tradeoff surface. On the other hand, measuring things like sex, age, level of education, etc. is easy. And even better, we have baseline levels of these covariates for communities like LessWrong, the United States, etc. with respect to which we might want to see if our sample is biased.
I was just contesting your statement as a universal one.
You argued against a more general statement than the one I made. But I did choose my words in a way that focused on drawing conclusions from the results and not results + comparison data.
There's no reason to leave aside sample size. The value was zero simply because of the small sample size.
The observed reality that the first 5 people voted that BPD doesn't apply to them provides nearly zero Bayesian evidence against the idea of systematic bias from surveying in that manner.
Sample size aside, I'd put a high probability on my comment having something to do with the intense response in the other direction. (I am not even sure how you could read all of it and not think that it is at least ‘poorly descriptive’, no matter who you are.)
There are probably checklists to diagnose Borderline Personality Disorder that are much better than simply reading a Wikipedia article and thinking about whether it applies to you.
People with borderline personality disorder generally lack “insight,” i.e. they are typically unaware that they have BPD; will deny having it; and will get extremely defensive at the suggestion they have it.
One can contrast with, for example, obsessive/compulsive disorder sufferers who usually do have pretty good insight.
So a survey based on self-reporting is not going to be very helpful.
Anyway, I doubt that there are many people on this board with BPD. This is based on my interactions and observations.
Also, this discussion board doesn’t seem like it would be very attractive to someone with BPD since it doesn’t offer a steady stream of validation. For example, it’s common on this board for other posters, even those who agree with you on a lot of stuff, to challenge, question, or downvote your posts. For someone with BPD, that would be pretty difficult to handle.
The main mental issue I sense on this board (possibly disproportionate to the general population) is Asperger’s. There also seems to be a good deal of narcissism, though perhaps not to the point where it would qualify as a mental disorder.
So if a person with BPD would discover LW and decide they like the ideas, what would they most likely do?
My model says they would write a lot of comments on LW just to prove how much they love rationality, expecting a lot of love and admiration in return. At first they would express a lot of admiration towards people important in the rationalist community; they would try to make friends by open flattery (by giving what they want to get most). Later they would start suggesting how to do rationality even better (either writing a new sequence, or writing hundreds of comments repeating the same few key ideas), trying to make themselves another important person, possibly the most important one. But they would obviously keep missing the point. After the first negative reactions they would backpedal and claim to be misunderstood. Later they would accuse some people of persecuting them. After seeing that the community does not reward this strategy, they would accuse the whole LW of persecution, and try to split apart their own rationalist subcommunity centered around them.
Because it’s a group of people who are excited for years about a rule for calculating conditional probability?
Yeah, I'm not serious here, but I will use this to illustrate the problem with self-diagnosis based on a description. Without hard facts, or without knowing what the distribution in the population actually looks like, it's like reading a horoscope.
Do I feel emotions? Uhm, yes. Easily? Uhm, sometimes. More deeply than others? Uhm, depends. For longer than others? I don’t have good data, so, uhm, maybe. OMG, I’m a total psycho!!!
Because it’s a group of people who are excited for years about a rule for calculating conditional probability?
No, there are a lot of data points.
One example: at our local community we had a session where empathy was the topic. The person who was on stage to explain to the rest of us what empathy is talked about how it's having an accurate mental model of other people, not about feeling emotions.
I don't want to say that having an accurate mental model of other people isn't useful, but it's not what people mean by the word empathy in a lot of other communities. Empathy usually refers to a process that's about feeling emotions.
Is anyone going to be at the Eastercon this weekend in Glasgow? Or, in London later in the year, Nineworlds or the Worldcon?
ETA: In case it wasn’t implied by my asking that, I will be at all of these. Anyone is free to say hello, but I’m not going to try to arrange any sort of organised meetup, given the fullness of the programmes of these events.
In the last open thread, someone suggested rationality lolcats, and then I made a few memes, but only put them up at the last minute. In case anyone would like to see them, they are here.
What’s a good Bayesian alternative to statistical significance testing? For example, if I look over my company’s email data to figure out what the best time of the week to send someone an email is, and I’ve got all possible hours of the week ordered by highest open rate to lowest open rate, how can I get a sense of whether I’m looking at a real effect or just noise?
In that scenario, how much does it really matter? It’s free to send email at one time of week rather than another, so your only cost is the opportunity cost of picking a bad time to email people, which doesn’t seem likely to be too big.
Our email sending by hour would get far lumpier, so we would have to add more servers to handle a much higher peak of emails sent per minute. And it takes development effort to configure emails to send at an intelligent time based on the user's timezone.
OK, here’s a proposed solution I came up with. Start with the overall open rate for all emails regardless of time of the week. Use that number, and your intuition for how much variation you are likely to see between different days and times (perhaps informed by studies on this subject that people have already done) to construct some prior distribution over the open probabilities you think you’re likely to see. You’ll want to choose a distribution over the interval (0, 1) only… I’m not sure if this one or this one is better in this particular case. Then for each hour of the week, use maximum-a-posteriori estimation (this seems like a brief & good explanation) to determine the mode of the posterior distribution, after you’ve updated on all of the open data you’ve observed. (This provides an explanation of how to do this.) The mode of an hour’s distribution is your probability estimate that an email sent during that particular hour of the week will be opened.
Given those probability estimates, you can figure out how many opens you’d get if emails were allocated optimally throughout the week vs how many opens you’d get if they were allocated randomly and figure out if optimal allocation would be worthwhile to set up.
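Here is a minimal sketch of that procedure, assuming a Beta prior (one natural choice of distribution on the interval (0, 1)) and hypothetical send/open counts; the prior parameters and data below are illustrative, not real numbers. With a Beta(a, b) prior and binomial open data, the posterior for an hour is Beta(a + opens, b + sends − opens), and its mode is the MAP estimate described above.

```python
# Hypothetical sketch of per-hour MAP open-rate estimation with a Beta prior.

def map_open_rate(opens, sends, a=2.0, b=18.0):
    """MAP estimate of an hour's open probability under a Beta(a, b) prior.

    The default prior is centred near a 10% overall open rate; tune a and b
    to match your own base rate and how much hour-to-hour variation you expect.
    """
    post_a = a + opens
    post_b = b + sends - opens
    # Mode of a Beta distribution (valid when both parameters exceed 1)
    return (post_a - 1) / (post_a + post_b - 2)

# Hypothetical data: {hour_of_week: (opens, sends)}
hours = {9: (120, 1000), 14: (95, 1000), 22: (60, 1000)}
estimates = {h: map_open_rate(o, s) for h, (o, s) in hours.items()}
print(sorted(estimates.items(), key=lambda kv: -kv[1]))  # hours ranked by estimated open rate
```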
Not Bayesian, but can’t you just do ANOVA w/ the non-summarized time of day vs. open rate (using hourly buckets)? That seems like a good first-pass way of telling whether or not there’s an actual difference there. I confess that my stats knowledge is really just from natural sciences experiment-design parts of lab classes, so I have a bias towards frequentist look-up-in-a-table techniques just because they’re what I’ve used.
Rant for a different day, but I think physics/engineering students really get screwed in terms of learning just enough stats/programming to be dangerous. (I.e., you're just sort of expected to know and use them one day in class, and get told just enough to get by, especially numerical computing and C/Fortran/Matlab.)
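For what it's worth, a first pass at the ANOVA suggestion above might look like the following sketch (Python/scipy, with made-up 0/1 open outcomes per hourly bucket). Opens are binary rather than normally distributed, so treat this as a rough screen rather than a rigorous test.

```python
# Hypothetical one-way ANOVA across hourly buckets of email open outcomes.
from scipy.stats import f_oneway

# Per-email outcomes (1 = opened, 0 = not), one list per hourly bucket (made up)
hour_9  = [1, 0, 0, 1, 0, 1, 0, 0]
hour_14 = [0, 0, 1, 0, 0, 0, 1, 0]
hour_22 = [0, 0, 0, 0, 1, 0, 0, 0]

stat, p_value = f_oneway(hour_9, hour_14, hour_22)
print(f"F = {stat:.2f}, p = {p_value:.3f}")  # a small p hints the buckets really differ
```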
Suppose you have three hypotheses:
(1) It’s better to email in the morning
(2) It’s better to email in the evening
(3) They’re equally good
Why do you care about (3)? If you’re just deciding whether to email in the morning or evening, (3) is irrelevant to ranking those two options.
The full-fledged Bayesian approach would be to identify the hypotheses (I've simplified it down to just three), decide what your priors are, calculate the probability of seeing the data under each of the hypotheses, and then combine those according to Bayes' formula to find the posterior probabilities. However, you don't have to run through the math to see that if your priors for (1) and (2) are equal, and the sample is skewed towards evening, then the posterior for (2) will be larger than the posterior for (1).
The only time you’d actually have to run through the math is if your priors weren’t equal, and you’re trying to decide whether the additional data is enough to overcome the difference in the priors, or if you have some consideration other than just choosing between morning or evening (for instance, you might find it more convenient to just email when you first have something to email about, in which case you’re choosing between “email in morning”, “email in evening” and “email whenever it’s convenient to me”).
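Spelled out as a rough sketch (hypothetical numbers; the hypotheses are made concrete by assigning each one specific open probabilities): compute the likelihood of the observed morning and evening opens under each hypothesis, multiply by the priors, and normalise.

```python
# Hypothetical three-hypothesis Bayesian update on morning vs. evening open data.
from math import comb

def binom_lik(opens, sends, p):
    """Binomial likelihood of seeing `opens` out of `sends` with open probability p."""
    return comb(sends, opens) * p**opens * (1 - p)**(sends - opens)

# Made-up data: (opens, sends) for each time of day
data = {"morning": (80, 1000), "evening": (110, 1000)}

# Each hypothesis assigns an open probability to morning and evening (illustrative values)
hypotheses = {
    "morning better": {"morning": 0.11, "evening": 0.08},
    "evening better": {"morning": 0.08, "evening": 0.11},
    "equally good":   {"morning": 0.095, "evening": 0.095},
}
priors = {h: 1 / 3 for h in hypotheses}

unnorm = {
    h: priors[h] * binom_lik(*data["morning"], ps["morning"])
                 * binom_lik(*data["evening"], ps["evening"])
    for h, ps in hypotheses.items()
}
total = sum(unnorm.values())
posteriors = {h: v / total for h, v in unnorm.items()}
print(posteriors)  # data skewed toward evening favours the "evening better" hypothesis
```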
“Statistical significance” is just a shorthand to avoid actually doing a Bayesian calculation. For instance, suppose we're trying to decide whether a study showing that a drug is effective is statistically significant. If the only two choices were “take the drug” and “don't take the drug”, and we were truly indifferent between those two options, the issue of significance wouldn't even matter. We should just take the drug. The reason we care about whether the test is significant is because we aren't indifferent between the two choices (we have a bias towards the status quo of not taking the drug, taking the drug would cost money, there are probably going to be side effects, etc.) and there are other options (take another drug, have more drug trials, etc.). When a level of statistical significance is chosen, an implicit statement is being made about how much weight is being given to the status quo.
Does anyone know of a way to collaboratively manage a flashcard deck in Anki or Mnemosyne? Barring that, what are my options so far as making it so?
Even if only two people are working on the same deck, the network effects of sharing cards make the card-making process much cheaper. Each can edit the cards made by the other, they can divide the effort between the two of them, and they reap the benefit of insightful cards they might not have made themselves.
You could use some sort of cloud service: for example, Dropbox. One of the main ideas behind Dropbox was to have a way for multiple people to easily edit stuff collaboratively. It has a very easy user interface for such things (just keep the deck in a synced folder), and you can do it even without all the technical fiddling you'd need for git.
Exactly the right avenue. Unfortunately, Anki typically uses its own idiosyncratic data format that’s not very handy for this kind of thing, but it’s possible to export and import decks to text, as it turns out.
The issue with this is that if you're months into studying a deck and you'd like to merge edits from other contributors, I'm not certain that you can simultaneously import the edits and keep all of your progress.
Even so, the text deck route has the most promise as far as I can tell.
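One workable route, sketched below under the assumption that both contributors use Anki's plain-text export (tab-separated, one note per line): merge the two exported files, de-duplicating by the front field, and re-import the result. Anki's importer can usually match existing notes by their first field, which is what gives you a chance of keeping scheduling progress, though it is worth testing on a copy of your collection first. The file names and the de-duplication rule here are illustrative only.

```python
# Hypothetical merge of two tab-separated Anki text exports.
import csv

def load_deck(path):
    """Read a tab-separated Anki text export, skipping any '#...' header lines."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row for row in csv.reader(f, delimiter="\t")
                if row and not row[0].startswith("#")]

def merge_decks(paths, out_path):
    """Concatenate several exported decks, keeping the first version of each front field."""
    seen, merged = set(), []
    for path in paths:
        for row in load_deck(path):
            if row[0] not in seen:
                seen.add(row[0])
                merged.append(row)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f, delimiter="\t").writerows(merged)

merge_decks(["my_deck.txt", "their_deck.txt"], "merged_deck.txt")
```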
Exactly the right avenue. Unfortunately, Anki typically uses its own idiosyncratic data format that’s not very handy for this kind of thing, but it’s possible to export and import decks to text, as it turns out.
Anki itself stores its data in SQLite databases.
I think there's a good chance that Anki itself will get better over time at collaborative deck editing. I think it's one of the reasons why Damien made integration with the web interface one of the priorities in Anki 2.
I found this on Twitter, specifically related to applications for the blind (but the article is more general-purpose): Glasses to simulate polite eye contact
Having read only the article and the previously-mentioned tweet, and no comments and knowing nothing about what it actually looks like, I’m predicting that it falls into the uncanny valley, at best.
Given that it's fanfiction, copyright isn't straightforward. Harry Potter is sort of owned by J.K. Rowling. If you want to do something with HPMOR, send Eliezer an email to ask for permission and he will probably grant it to you.
Good question. I thought http://hpmor.com/info/ would cover the licensing, but nope. Some googling doesn’t turn up any explicit licensing either.
“All fanfiction involves borrowing the original author’s characters, situations, and world. It is ridiculous to turn around and complain if your own ideas get borrowed in turn. Anyone is welcome to steal anything from any fanfiction I write.”
I think that only speaks to writing fanfiction of Eliezer’s fanfiction, not rights over the text itself. By default, the copyright is solely Eliezer’s unless and until he says otherwise.
He only says you’re allowed to steal it. Not to use it with permission. If you take it without permission, that’s stealing, so you have permission, which means that you didn’t steal it, etc.
No, no, no: He didn’t say that you don’t have permission if you don’t steal it, only that you do have permission if you do.
What you said is true: If you take it without permission, that’s stealing, so you have permission, which means that you didn’t steal it.
However, your argument falls apart at the next step, the one you dismissed with a simple “etc.” The fact that you didn’t steal it in no way invalidates your permission, as stealing ⇒ permission, not stealing ⇔ permission, and thus it is not necessarily the case that ~stealing ⇒ ~permission.
I was wondering if there are any services out there that will tie charitable donations to my spending on a certain class of good, or with a certain credit card. E.g. Every time I buy a phone app or spend on in-app purchases, a matching amount of money goes to a particular charity.
There are a lot of credit cards that will give a fixed percentage of money to charity whenever you use them, but I don't think any will go up to the amounts I bet you want.
Hi, I’ve been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:
“This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.”
Quite simple, really, but I found it extremely interesting.
I don’t have enough karma to create my own post, so I’m cross posting this from a gist
Pascal’s Wager and Pascal’s Mugging as Fixed Points of the Anthropic Principle
Skepticism Meets Belief
Pascal’s Wager and Pascal’s Mugging are two thought experiments that explore what happens when rational skepticism meets belief. As skepticism and belief move towards each other, they approach a limit such that it’s impossible to cross from one to the other without some outside help.
Pascal's Wager takes the point of view of a rational being attempting to make a decision about whether to believe in a higher being. As humans we can empathize with this point of view; we often have to make important decisions with incomplete or even dubious information. Pascal's Wager says: it's impossible to have enough information to make a rational decision about God's existence, so the rational position is to believe, just in case God exists and belief is important.
Pascal's Mugging takes the point of view of the higher being attempting to cajole the rational being into paying a token fee to prevent an outrageously terrible but even more outrageously unlikely event from happening. Because the skeptical muggee demands proofs as strong as the prior is infinitesimal, there is no amount of magic that would be an effective proof to convince the muggee.
I’m God, Therefore I Am
Both of these accounts show a lack of empathy with the higher being’s point of view because they start from the assumption that the higher being doesn’t exist and the rational being is in need of a convincing reason to believe in something extraordinarily unlikely. The first truth that the higher being will know is the same that we know: “I think, therefore I am”. If we want to empathize with this point of view then we have to start with the assumption of existence and work backward.
The anthropic principle does exactly that. It starts from the assumption that the present is truth (an infinite prior) and works backward through the more and more unlikely sequence of events that got us here. Since we have an infinite prior to work with, we can justify as many ridiculously unlikely coincidences as we need to explain the present, so long as they are still a possible universe, i.e. compatible with scientific observation. This is how we can claim with a straight face that it's possible the Sun is the first star in 13+ billion years whose planets have achieved our level of civilization, which explains why we seem to be alone and why we're here and not somewhere else.
Belief As Reality Fluid
If we combine our new found empathy for higher beings with the anthropic principle we can flip the stories around and gain new insight. From the point of view of a future evolved higher being, their present is the infinite prior and the path between our present and theirs is what happened, despite its infinitesimal likeliness from our point of view. If the higher being knows it exists why does it care about belief? The only reason I can imagine that a higher being would care about belief is if belief was critical to its existence.
In order for a lower being to evolve into a higher being the lower being has to eventually believe the higher being is possible so that they can work toward becoming or creating it. From the point of view of the higher being, this belief is a requirement for it to exist and be sentient. If we imagine a higher being as something that is engineered then at some point a project manager in a technologically advanced civilization with the capability to build a higher being is going to have to be convinced that not only can a higher being exist but that they can build it.
Building a Higher Being
From this point of view, Pascal’s Wager looks like a filter to identify skeptical people who can truly change their mind (actually believe in a higher being despite being both rational and having flimsy or circumstantial evidence). If time is cyclical and the ultimate outcome of this universe feeds into the starting point of the next, then a higher being would have the ability to influence the general path of the next cycle. Pascal’s Mugging starts to look like a description of the least unlikely event required to bootstrap a higher being into existence. If this is the case, then in some sense the higher being is also its own creator (or at least a very similar previous version). From this vantage the creator can empathize with the process of creating the universe they find themselves in and gain insight into why things are like they are, from an implementation perspective, e.g. “if I was going to program the universe, X would do Y because of Z”.
In some sense, the higher being wills itself into existence as part of a stable, cyclical universe in which some amount of belief is a requirement. Their only responsibility is ensuring that a civilization somewhere in the next universe they build culminates in that one infinite/infinitesimal prior (depending on your point of view) so that the next iteration can be built.
Bit of a smorgasbord of a post (or a gish gallop, if I'm not mincing words). Sorry to say, but much of your reasoning is opaque to me. Possibly because I misunderstand. Infinite priors? Anthropic reasoning applied to ‘higher beings’, because we empathize with such a higher being's cogito? You lost me there.
In order for a lower being to evolve into a higher being the lower being has to eventually believe the higher being is possible so that they can work toward becoming or creating it. From the point of view of the higher being, this belief is a requirement for it to exist and be sentient.
I’d say that the possibility of a non-expected FOOM process would be a counterexample, but then again, I have no idea whether you’d qualify a superintelligence of the uFAI variety as a ‘higher being’.
If time is cyclical (...)
Didn’t see that coming.
It may be that you’ve put a large amount of effort into coming to the conclusions you have, but you really need to put some amount of effort into bridging those inferential gaps.
If you’re going to make up new meanings for words, you should at least organize the definitions to be consistent with dependencies: dependent definitions after words they are dependent on, and related definitions as close to each other as possible. In your list, there are numerous words that are defined in terms of words whose definitions appear afterwards. Among other problems, this allows for the possibility of circular definitions.
Also, many of the definitions don’t make sense. e.g.
“An algorithm that guides reproduction over a population of networks toward a given criteria. This is measured as an error rate.”
Syntactically, “this” would refer to “criteria”, which doesn’t make sense. If it doesn’t refer to criteria, then it’s not clear what it does refer to.
I think your post is a bit rambling and incoherent but I very much support your style of making long comments in the fashion of posts with BOLD section headings etc.
I decided to post it here because it’s just so incredibly stupid and naively evil, but also because it’s using LW-ish language in a piece on how to—in essence—thoroughly corrupt the libertarian cause. Thought y’all would enjoy it.
Standard rejoinders. Furthermore: even if Brennan is ignorant of the classical liberal value of republicanism, why can't he use his own libertarian philosophy to unfuck himself? How is lying like this ethical under it? Why does he discuss the benefits of such crude, object-level deception openly, on a moderately well read blog, with potential for blowback? By VALIS, this is a perfect example of how much some apparently intelligent people could, indeed, benefit from reading LW!
Well I am apparently too stupid to understand why the quoted article is stupid or evil, not to mention incredibly stupid or naively evil.
Under any consequentialist theory, combined with some knowledge of how the actual world we live in functions, I don't see how you can escape the conclusion that a politician running for office has a right to lie to voters. An essential conclusion from observing reality is that politicians lie to voters. Upon examination, it is hard NOT to conclude that politicians who don't lie enough don't get elected. If we are consequentialist, then either 1) elected politicians do create consequences, and so a politician who will create good consequences had best lie “the right amount” to get elected, or 2) elected politicians do not create consequences, in which case it is consequentially neutral whether a politician lies, and therefore morally neutral.
If you prefer a non-consequentialist or even anti-consequentialist moral system, then bully for you, it is wrong (within your system) for politicians to lie to voters, but that conclusion is inconsequential, except perhaps for a very small number of people, presumably the politician whose soul is saved or whose virtue is kept intact by his pyrrhic act of telling the truth.
A lot of the superficial evilness and stupidity is softened by the follow-up post, where in reply to the objection that politicians uniformly following this principle would result in a much worse situation, he says:
The fact that most people would botch applying a theory does not show that the theory is wrong. So, for instance, suppose—as is often argued—that most people would misapply utilitarian moral standards. Perhaps applying utilitarianism is too hard for the common person. Even if so, this does not invalidate utilitarianism. As David Brink notes, utilitarian moral theory means to provide a criterion of right, not a method for making decisions.
So maybe he just meant that in some situations the “objectively right” action is to lie to voters, without actually recommending that politicians go out and do it (just as most utilitarians would not recommend that people try to always act like strict naive utilitarians).
So maybe he just meant that in some situations the “objectively right” action is to lie to voters, without actually recommending that politicians go out and do it
I’m confused. So would he recommend that the politicians do the “objectively wrong” thing?
All of that looks a lot like incoherence, unwillingness to accept the implications of stated beliefs, and general handwaving.
The fact that most people would botch applying a theory does not show that the theory is wrong.
So the problem is that the politicians can’t lie well enough?? X-D
So the problem is that the politicians can’t lie well enough??
No, that’s not what he means. Quoting from the post (which I apologize for not linking to before):
Many of the commenters said that my position can’t be right because people will misapply it in dangerous ways. They are right that politicians will misapply it in dangerous ways. In fact, I bet some politicians who wrongfully lie do so because they think that they mistakenly fall under a murderer at the door-type case. But that doesn’t mean that the principle is wrong. It just means that people tend to mess up the application.
So, to recap. Brennan says “lying to voters is the right thing when good results from it”. His critics say, very reasonably, that since politicians and humans in general are biased in their own favor in manifold ways, every politician would surely think that good would result from their lies, so if everyone followed his advice everyone would lie all the time, with disastrous consequences. Brennan replies that this doesn’t mean that “lying is right when good results from it” is false; it just means that due to human fallibilities a better general outcome would be achieved if people didn’t try to do the right thing in this situation but followed the simpler rule of never lying.
My interpretation is that therefore in the post Multiheaded linked to Brennan was not, despite appearances, making a case that actually existing politicians should actually go ahead and lie, but rather making an ivory-tower philosophical point that sometimes them lying would be “the right thing to do” in the abstract sense.
So would he recommend that the politicians do the “objectively wrong” thing?
For a wrong outcome B, you can usually imagine even worse outcome C.
In a situation with perfect information, it is better to choose a right outcome A instead of a wrong outcome B. But in a situation with an imperfect information, choosing B may be preferable to having A with some small probability p, and C with probability 1-p.
The lesson about ethical injunctions, it seems to me, is that in some political contexts the value of p is extremely low, and yet because of obvious evolutionary pressures, we have a bias to believe that p is actually very large. Therefore we should learn to recognize situations that feel like they have a large p (because that's how the bias feels from the inside), notice the bias, and apply a sufficiently strong correction, which usually means stopping.
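To make the comparison above concrete, writing U(·) for the utility of each outcome: the certain wrong-but-tolerable outcome B is the better pick whenever U(B) > p·U(A) + (1 − p)·U(C), and the bias claim is that our gut systematically overestimates p in these contexts, so this inequality holds more often than it feels like it does.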
So the problem is that the politicians can’t lie well enough??
Actually… yes.
More precisely, I would expect politicians to be good at lying for the goal of getting more personal power, because that's what evolution has optimized humans for; and politicians are the experts among humans here.
But I expect all humans, including politicians, to fail at maximizing utility when defined otherwise.
Many internet libertarians aren’t very consequentialist, though. And really, just the basic application of rule-utilitarianism would expose many, many problems with that post. But really, though: while the “Non-Aggression Principle” appears just laughably unworkable to me… given that many libertarians do subscribe to it, is lying to voters not an act of aggression?
Depends on your point of view, of course, but I don’t think the bleeding-heart libertarians (aka liberaltarians) are actually libertarians. In any case, it’s likely that the guy didn’t spend too much time thinking it through. But so what? You know the appropriate xkcd cartoon, I assume...
Given that the guy is a professional philosopher, I doubt ignorance is a good explanation. It's probably a case of someone wanting to be too contrarian for his own good. Or at least the good of his cause. Given that he wrote a book arguing that most people shouldn't vote, he might simply be trolling for academic controversy to get recognition and citations.
How much did you donate last year? Don’t answer that. Just compare it to the amount of taxes you paid, and realize that 19% of those taxes went to defense spending. (Veteran benefits, interest on debt incurred by defense spending and other indirect costs are not included in that number.) When you congratulate yourself on your altruism, don’t forget you’re also funding the NSA, the drone attacks in various middle east countries, and thousands of tanks sitting idly on a base somewhere.
In this case “outweigh” is relevant. If your altruistic activities don’t outweigh the impact of your taxes, your EA move is to live off-the-grid (assuming we’ve simplified down to those two factors, and neglecting tax avoidance methods).
You can easily control your earnings on the downside, is the point.
Fair enough. So what are better or worse options for spending of one’s tax dollars? Can you do anything, except try to pay less taxes (and spend the gain altruistically) or pay them in a country that will use them more effectively to improve the world?
Taxes paid to the country you live in count as a tax deduction, so in the common case that the host country has a higher tax rate than the US, a US citizen living abroad pays no tax to the US. And if you already have permanent residency somewhere else, changing your citizenship is not super difficult.
Why the heck do Effective Altruists need to be singled out for this? You seem to be punishing people for wanting to be effective altruists, which is super weird.
Not all, but many effective altruists, and certainly the dominant discourse in recent times, care about earning to give, i.e. making a ton of money so that you can give more to charity. Making a ton of money in America has the side effect of giving a ton of money to the US government. If this is evil on net, it might be more effectively altruistic for someone living in the US not to make money to give to charity OR the government.
You get effective altruists wrong. They care about the results of their actions. It's a philosophy about choosing effective actions over actions that aren't effective. It's not about feeling guilty that some of your actions have no big positive effects.
That means you focus your attention on areas where you can achieve a lot instead of focusing it where you can't do much.
I find the argument that the US would spend less on the military if US citizens paid less in taxes questionable. You can't simply defund a highly powerful organisation like the NSA. Less government money is more likely to be a problem for welfare payments.
In discussions about where an effective altruist is supposed to live, the effect of tax money might be a worthwhile point. Paying taxes in Switzerland instead of the US might be beneficial if you are deciding whether to live in San Francisco or Zurich.
I expect some people perceive effective altruists that way no matter what their attitudes; they feel the harping on about how much more ethical they are is implied.
It’s easy to be cynical about the military, but consider the simple fact that we live in one of the most peaceful ages ever. The Middle East conflicts of the last decade-plus involving the US have resulted in far fewer deaths than, say, the Vietnam War. You might say there should have been none of these conflicts to begin with, but things certainly could have been worse as well!
I was medically discharged from the military. The Veteran benefits that are paid for by taxes paid for my schooling (since I couldn’t stay in the military I had to get a different education to make a living), and also provide me with a disability check every month. So those taxes probably count as some sort of altruism.
Last week, after a lot of thought and help from LessWrong, I finally stopped believing in god and dropped my last remnants of Catholicism. It is turned out to be a huge relief, though coping with some of the consequences and realizations that come with atheism has been a little difficult.
Do any of you have any tips you noticed about yourself or others after just leaving religion? I’ve noticed a few small habits I need to get rid of, but I am worried I’m missing larger, more important ones.
Are there any particular posts I should skip ahead and read? I am currently at the beginning of reductionism. Are their any beliefs you’ve noticed ex-catholics holding that they don’t realize are obviously part of their religion? I do not have any one immediately around me I can ask, so I am very grateful for any input. Thank you!
Well, here at LessWrong, we follow a thirty-something bearded Jewish guy who, along with a small group of disciples, has performed seemingly impossible deeds, preaches in parables, plans to rise from the dead and bring with him as many of us as he can, defeat evil, and create a paradise where we can all live happily forever.
So yeah, getting away from Catholic habits of thought may be tough. With work, you’ll get there though...
Unlike religion, here no one claims to be all-knowing or infallible. Which, from my point of view at least, is why LessWrong is so effective. Reading the arguments in the comments of the sequences was almost as important as reading the sequences themselves.
I wouldn’t mind the paradise part or the living forever part though.
We? That’s generalizing a bit wouldn’t you say? It’s “LessWrong,” not yudkowsky.net after all.
Yes, of course. I was mostly just trying to be funny. One could keep the joke going and compare the monthly meetups, Winter Solstice meetup, the Effective Altruist movement, the Singularity, and so on to their complements in Christianity.
Speaking from experience: don’t kneejerk too hard. It’s all too easy to react against everything at all implicitly associated with a religion or philosophy that you now reject the truth-claims of and distort parts of your personality or day to day life or emotions or symbolic thought that have nothing to do with what you have rejected.
Thank you. Last week was full of “Is this religious? Yes? No? I can’t tell!.” My brain has thankfully returned to normal function, and I will avoid intently analyzing every thought for religious connotations. The lack of guilt is nice, and I don’t want to bring it back by stressing about the opposite.
Don’t forget that reversed stupidity is not intelligence; a belief doesn’t become wrong simply because it’s widely held by Catholics.
Similarly, there’s no need to be scared of responding positively to art or other ideas because they originated from a religious perspective; if atheism required us to do that, it would be almost as bleak a worldview as it’s accused of being. Adeste Fideles doesn’t stop being a beautiful song when you realize its symbols don’t have referents. I think of the Christian mythology as one of my primary fantasy influences—like The Lord of the Rings, Discworld, The Chronicles of Thomas Covenant or Doctor Who—so, if I find myself reacting emotionally to a Christian meme, I don’t have to worry that I’m having a conversion experience (or that God exists and is sneakily trying to win me over!): it’s perfectly normal, and lawful, for works of fiction to have emotional impact.
The religious allusions seem even blatant now, but there is no way I’m getting rid of my copy of Chronicles of Narnia. I still feel the urge to look in the back of wardrobes.
Thank you. I had a religious song stuck in my head yesterday, but remembered reading you comment so was able to bypass the feeling of guilt.
What others already said: Don’t try to reverse stupidity by avoiding everything conected to Catholicism. You are allowed to pick the good pieces and ignore the bad pieces, instead of buying or rejecting the whole package. Catholics also took some good parts from other traditions; which by the way means you don’t even have to credit them for inventing the good pieces you decide to take.
If you talk with other religious people, they will probably try the following trick on you: Give you a huge book saying that it actually answers all your questions, and that you should at least read this one book and consider it seriously before you abandon religion completely. Of course if you read the whole book and it doesn’t convince you, they will give you another huge book. And another. And another. The whole strategy is to surround you by religion memes (even more strongly than most religious people are), hoping that sooner or later something will “trigger” your religious feelings. And no matter how many books you read, if at some moment you refuse to read yet another book, you will be accused of leaving the religion only because of your ignorance and stubbornness, because this one specific book certainly did contain all answers to your questions and perfectly convincing counterarguments to your arguments, you just refused to even look at it. This game you cannot win: there is no “I have honestly considered all your arguments and found them unconvincing” exit node; the only options given to you are either to give up, or to do something that will allow your opponents to blame you of being willfully ignorant. (So you might as well do the “ignorant” thing now, and save yourself a lot of time.)
Don’t try to convince other people, at least not during the first months after deconversion. First, you need to sort out things for yourself (you don’t have a convincing success story yet). Second, by the law of reciprocation, if the other people were willing to listen to your explanations, this in turn gives them the moral right to give you a huge book of religious arguments and ask you to read it, which leads to the game described above.
Basically, realize that you have a right to spend most of your time without thinking about Catholicism, either positively or negatively. That is what most atheists really do. If you were born on another planet, where religion wasn't invented, you wouldn't spend your time arguing against religion. Instead, you would just do what you want to do. So do it now.
This is known as cafeteria Catholicism. (I had only heard that used as an insult, but apparently there are people who self-identify as such.)
It reminds me of Transactional Analysis saying the best way to keep people in mental traps is to provide them two scripts: “this is what you should do if you are a good person”, but also “this is what you will do if you become a bad person (i.e. if you refuse the former script)”. So even if you decide to rebel, you usually rebel in the prescribed way, because you were taught to only consider these two options as opposites… while in reality there are many other options available.
The real challenge is to avoid both the “good script” and the “bad script”.
Thank you for the advice. I’ve started by rereading the scientific explanations of the big bang, evolution, and basically most general scientific principles. Looking at it without constant justification going on in my mind is quite refreshing.
So far I’ve been able to avoid most of the arguments, though I was surprised by how genuinely sad some people were. I’m going to keep quiet about religion for a while, and figure out what other pieces of my worldview I need to take a rational, honest look at.
I recommend this list:
http://rationalwiki.org/wiki/RationalWiki_Atheism_FAQ_for_the_Newly_Deconverted
I find myself to have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.
What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven’t seen much discussion on the most effective ways to fund their prevention. Have I missed something?
Biotech x-risk is a tricky subject, since research into how to prevent it is also likely to provide more information on how to engineer biothreats. It's far from trivial to know which lines of research will decrease the risk, and which will increase it. One doesn't want a 28 Days Later type situation, where a lab doing research into viruses ends up being the source of a pandemic.
Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren’t AI risk. If you anticipate an intelligence explosion but aren’t worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).
You’re ignoring time. If you expect a sufficiently powerful FAI to arise, say, not earlier than a hundred years from now, and you think that the coming century has significant x-risks, focusing all the resources on the FAI might not be a good idea.
Not to mention that if your P(AI) isn’t close to one, you probably want to be prepared for the situation in which an AI never materializes.
As far as I remember from LW census data the median date for predicted AGI intelligence explosion didn’t fall in this century and more people considered bioengineered pandemics the most probably X-risk in this century than UFAI.
Close. Bioengineered pandemics were the GCR (global catastrophic risk — not necessarily as bad as a full-blown X-risk) most often (23% of responses) considered most likely. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.
From wikipedia article on rejection therapy:
“At the time of rejection, the player, not the respondent, should be in a position of vulnerability. The player should be sensitive to the feelings of the person being asked.”
How does one implement this? One of my barriers to social interactions is the ethical aspect to it; I feel uncomfortable imposing on others or making them uncomfortable. Using other people for one’s own therapy seems a bit questionable. Does anyone have anything to share about how to deal with guilt-type feelings and avoid imposing on others with rejection therapy?
I used to have the same, to the extent that I wouldn't even ask teachers, people paid to help me, for help. I hated the feeling that I was somehow a burden. But I got over it in the space of a couple of months by getting into a position where people were asking me for help all the time; that made me realize it wasn't an unpleasant or annoying experience, I actually liked it, and others were probably the same. In most cases you're doing people a favor by giving them a chance to get warm-fuzzies for what's (usually, in the case of rejection therapy) a relatively simple request to fulfill.
Of course, there are still certain requests that might be uncomfortable to reject, and my thoughts on those are that they’re usually the ones where you feel like you’ve left someone out who really needed your help. So to get over this, don’t choose things to ask that are going to go bad if you don’t get it—for instance asking for a ride when it’s pouring out, or telling someone you need some money to call your kids at home so they don’t worry (instead of just ‘I need to make a call’). As long as what you ask is casual and you don’t seem desperate, people should have no problem rejecting it without feeling bad, and to lessen any impact even more you can smile and say ‘no problem, thanks anyway’ or something similar to show you’re alright without it.
Also use your sense, if you ask and they look uncomfortable going ‘oh, umm, well...’ you should be the one to jump in and say ‘hey, it’s no problem, you look busy so I’ll check with someone else’ or something like that, rather than waiting for them to have to say outright ‘no’. Some people don’t mind just saying no outright, some people do, so be attuned to that and no-one should be uncomfortable. Good luck!
In general, people in a public space are to an extent consenting to interact with other humans. If they aren’t, we have a system of recognized signals for it: Walking fast, looking downward, listening to music, reading, etc. I don’t think you should feel too guilty about imposing a brief few seconds of interaction on people out and about in public.
It’s argued there’s a risk that in the event of a global catastrophe, humanity would be unable to recover to our current level of capacity because all the easily accessible fossil fuels that we used to get here last time are already burned. Is there a standard, easily Googlable name for this risk/issue/debate?
Can’t help you out with an easy moniker, but I remember that problem being brought up as early as in Olaf Stapledon’s novel Last and First Men, published 1930.
I remember a short story posted on LW a few years ago about this. It was told from the perspective of people in a society of pre-industrial tech, wondering how (or even if) their mythical ancestors did these magical feats like riding around in steel carriages faster than any horse and things like that. The moral being that society hadn’t reached the required “escape velocity” to develop large-scale space travel and instead had declined once the fossil fuels ran out, never to return.
I can’t for the life of me find it though.
It's also argued that, fossil fuels being literally the most energy-dense (per unit of infrastructure applied) energy source in the solar system, our societal complexity is likely to decrease in the future as the hard-to-get deposits are themselves drawn down and there is no longer any way to keep drawing on the sheer levels of energy per capita we have become accustomed to over the last 200 years in the wealthier nations.
I recommend Tom Murphy’s “do the math” blog for a frank discussion of energy densities and quantities and the inability of growth or likely even stasis in energy use to continue.
Huh? At which level of technology? And WTF is a “unit of infrastructure”?
At any level of technology. Where else in the solar system do you have that much highly reduced matter next to so much highly oxidized gas with a thin layer of rock between them, and something as simple as a drill and a furnace needed to extract the coal energy and a little fractional distillation to get at the oil? Everything else is more difficult.
“Unit of infrastructure” ~= amount of energy and effort and capital needed to get at it.
I am not going to believe that. Both because at the caveman level the fossil fuels are pretty much useless and because your imagination with respect to future technology seems severely limited.
This entirely depends on the technology level. And how are you applying concepts like “energy-dense” to, say, sunlight or geothermal?
Energy density refers only to fuels and energy storage media and doesn’t have much to do with grid-scale investment, although it’s important for things like transport where you have to move your power source along with you. (Short version: hydrocarbons beat everything else, although batteries are getting better.)
The usual framework for comparing things like solar or geothermal energy to fossil fuels, from a development or policy standpoint, is energy return on investment. (Short version: coal beats everything but hydroelectric, but nuclear and renewables are competitive with oil and gas. Also, ethanol and biodiesel suck.)
Coal was used as fuel before the Roman empire. It didn’t lead to an industrial revolution until someone figured out a way to turn it into mechanical energy substituting for human labor instead of just a heat source in a society where that could be made profitable due to a scarcity of labor. That was the easiest, surface-exposed deposits, yes, but you hardly need any infrastructure at all to extract the energy, and even mechanical energy extraction just needs a boiler and some pistons and valves. This was also true of peat in what is now the Netherlands during the early second millennium.
What does ‘technology level’ even mean? There’s just things people have figured out how to do and things people haven’t. And technology is not energy and you cannot just substitute technology for easy energy, it is not a question of technology level but instead the energy gradients that can be fed into technology.
Mostly in terms of true costs and capital (not just dollars) needed to access it, combined with how much you can concentrate the energy at the point of extraction infrastructure. For coal or oil you can get fantastic wattages through small devices. For solar you can get high wattages per square meter in direct sunlight, which you don’t get on much of the earth’s surface for long and you never get for more than a few hours at a time. Incredibly useful, letting you run information technology and some lights at night and modest food refrigeration off a personal footprint, but not providing the constant torrent of cheap energy we have grown accustomed to. Geothermal energy flux is often high in particular areas where it makes great sense (imagine Iceland as a future industrial powerhouse due to all that cheap thermal energy gradient), over most of the earth not so much.
Sunlight is probably our best bet for large chunks of the future of technological civilization over most of the earth's surface. It is still not dense. It's still damn useful.
You don’t need ANY infrastructure to gather dry sticks in the forest and burn them. Guess that makes the energy density per unit of infrastructure infinite, then…
There are lots of energy gradients around. Imagine technology that allows you to sink a borehole into the mantle—that’s a nice energy gradient there, isn’t it? Tides provide the energy gradient of megatons of ocean water moving. Or, let’s say, technology provides a cheap and effective fusion reactor—what’s the energy gradient there?
You’ve been reading too much environmentalist propaganda which loves to extrapolate trends far into the future while making the hidden assumption that the level of technology will stay the same forever and ever.
Pretty much, until you need to bear the societal costs of replanting and regrowing the woods after you have cleared them, or you want more concentrated energy (at which point you use a different source), or you start valuing your time.
Yes. Some are easier to capture than others and some are denser than others. Fusion would be a great energy gradient if you can run it at rates massively exceeding those in stars, but everything I’ve seen suggests that the technology required for such a thing is either not forthcoming or, if it is, so complicated that it’s probably not worth the effort.
It won’t, but there are some things that technology doesn’t change. To use the nuclear example, you always need to perform the same chemical and other processing steps on nuclear fuels, which requires an extremely complicated underlying infrastructure and supply chain and concentrated capital. Technology isn’t a generic term for things-that-make-everything-easier: some things can be done and some things can’t, other things can be done but aren’t worth the effort, and we will see where some of those boundaries are over time. I hope to at least make it to 2060, so I bet I will get to see the outcome of some of the experiments being performed!
Solar energy used to halve in price every 7 years; in the last 7 it more than halved. Battery performance also has a nice exponential improvement curve.
Various forms of solar are probably one of our better bets, though I’m not convinced that large chunks of the recent gains don’t come from a massive effective subsidy from China, and eventually the cost of the materials themselves could become insignificant compared to complexity, maintenance, and end-of-life recycling costs, which are not likely to decrease much. As for battery performance… I haven’t seen anything about it that even looks vaguely exponential.
See http://qr.ae/rbMLh for the batteries.
To spell out a few things: the price of lithium batteries is decreasing. Since they are the most energy-dense batteries, this is great for the cost of electric cars, and maybe for the introduction of new portable devices, but it isn’t relevant to much else. In particular, performance is not improving. Moreover, there is no reason to expect them to ever be cheaper than existing less dense batteries. In particular, there is no reason to expect that the cost of storing electricity in batteries will ever be cheaper than the cost of the electricity, so they are worthless for smoothing out erratic sources of power, like wind.
I get the impression that most of the “recent gains” consist of forcing the utilities to take it and either subsidizing the price difference or passing the cost on to the customer. At least, the parties involved act like they believe this while attempting to deny it.
But even if some of the cost is subsidies and the real speed is only halving in price every 7 years that’s still good enough.
I don’t see why there shouldn’t be any way to optimise end of life costs and maintenance.
Does the argument take nuclear energy into account?
Yes. No nuclear power has ever been built without massive subsidies and insurance-guarantees, it only works right now because we externalize the costs of dealing with its waste to the future rather than actually paying the costs, and nuclear power is fantastically more complicated and prone to drastically expensive failures than simply burning things. Concentrating the fuel to the point that it is useful is an incredible chore as well.
Are you claiming nuclear energy has higher cost in $ per joule than burning fossil fuels? If so, can you back it up? If true, how do you know it’s going to remain true in the future? What happens when we reach a level of technology in which energy production is completely automatic? What about nuclear fusion?
The only reason the costs per joule in dollars are near each other (a true factor of about 1.5-3x in dollars between nuclear and the coal everyone knows and loves, according to the EIA) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have for the most part been completely un-dealt-with in the hope that something will come along; nuclear power plants are almost literally uninsurable to sufficient levels in the market, such that governments have to guarantee them substandard insurance by legal fiat (this is also true of very large hydroelectric dams, which are probably also a very bad idea); and power plants that were supposed to be retired long ago have had their lifetimes extended threefold by regulators who don’t want to incur the cost of their planned replacements and refurbishments. And the whole thing was rushed forward in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.
How do you know it won’t? More to the point, it’s not a question of technology. It’s a question of how much you have to concentrate rare radionuclides in expensive gas centrifuge equipment and how heavily you have to contain the reaction and how long you have to isolate the resultant stuff. Technology does not trump thermodynamics and complexity and fragility.
What does this mean and why is it relevant?
Near as I can tell, all the research on it so far has shown that it is indeed possible without star-style gravitational confinement, very difficult, and completely uneconomic. We have all the materials you need to fuse readily available; if it were easy to do economically, we would have done it after fifty years of work. It should be noted that the average energy output of the sun itself is about 1⁄3 of a watt per cubic meter: fusion is trying to produce conditions and reactions of a sort you don’t even see in the largest stars in the universe. (And don’t start talking about helium-3 on the moon; I point to a throwaway line in http://physics.ucsd.edu/do-the-math/2011/10/stranded-resources/ regarding that pipe dream.)
Is it possible I’m wrong? Yes. But literally any future other than a future of rather less (But not zero!) concentrated energy available to humanity requires some deus ex machina to swoop down upon us. Should we really bank on that?
That is quite an unfair comparison. The way we deal with coal waste kills tens of thousands or even hundreds of thousands of people per year. The way we deal with nuclear waste might cost more money, but it doesn’t kill as many people. Simply dumping all nuclear waste in the ocean would probably be a safer way of disposing of waste than the way we deal with coal.
Even the tunnels created in coal mining can collapse and do damage.
Coal isn’t a picnic either and I have my own rants about it too. But dealing with coal waste (safely or unsafely) is a question of trucking it, not running complicated chemical and isotopic purification or locking it up so thoroughly.
The obvious explanation of the timing is Three Mile Island and Chernobyl.
Do you believe that Japan and Germany built nuclear plants for the purpose of eventually building weapons?
Japan and Germany are interesting cases, both for the same reason: rich nations with little or declining fossil fuels. Germany’s buildout of nuclear power corresponds to the timing of the beginning of the decline in the production of high-quality coal in that country, and Japan has no fossil fuels of its own so nuclear was far more competitive. With plentiful fossil fuels around nobody does nuclear since it’s harder, though even the nations which use nuclear invariably have quite a lot of fossil fuel use which I would wager ‘subsidizes’ it.
What do you mean by “competitive”? Shipping coal adds very little to its cost, so the economic calculation is hardly different for countries that have it and countries that don’t. Perhaps national governments view domestic industries very differently than economists, but you haven’t said how to take this into account. I think Japan explicitly invoked “self-sufficiency” in its decision, perhaps meaning concerns about wartime.
What do you mean by “un-dealt-with”? What cost do you think it will incur in the future?
Interesting point. However the correct cost of insurance has to take into account probability of various failures and I see no such probability assessment in the article. Also, what about Thorium power?
Are you sure the problem is with lack of desire for nuclear weapons rather than with anti-nuclear paranoia?
But the ratio between the physical requisites and dollars (i.e. labor) depends on technology very strongly.
At some point we are likely to have sufficient automation so that little human labor is required for most things, including energy production. In these condition, energy (and most other things) will cost much less than today, with fossil fuels or without them.
Obviously it’s not easy, but it doesn’t mean it’s impossible. We have ITER.
So what? We already can create temperatures lower than anywhere in the universe and nuclear species that don’t exist anywhere in the universe, why not better fusion conditions?
I don’t think scientific and technological progress is “deus ex machina”. Given historical record and known physical limits, it is expected there is a lot of progress still waiting to happen. Imagine the energy per capita available to a civilization that builds Dyson spheres.
Mostly sitting around full of transuranic elements with half-lives in the tens of thousands of years in facilities that were meant to be quite temporary, without much in the way of functional or economically competitive breeder reactors even where they have been tried. They will eventually incur one of three costs: reprocessing, geological storage, or release.
Near as I can tell it’s a way to boost the amount of fertile fuel for breeder reactors by about a factor of five. The technology is similar, with advantages and disadvantages. No matter what you have to run refined material through very complicated and capital-intensive and energy-intensive things, keep things contained, and dispose of waste.
These fuel cycles do work and they do produce energy, and if done right some technologies of the suite promoted for the purpose might reduce the waste quite a bit. My gripe is that they work well (not to mention safely) only in stable civilizations with lots of capital and concentrated wealth to put towards them that isn’t being applied to more basic infrastructure. Given the vagaries of history moving wealth and power around, and the massive cheap energy and wealth subsidy that comes from fossil fuels and will go away, I’m not convinced that they can be run for long periods of time at a level that can compensate for the torrents of cheap wealth you get from burning the black rocks. I wouldn’t be terribly surprised at some nuclear power plants being around in a few thousand years, but I would be surprised at them providing anything like as much per capita as fossil fuels do now, due to the complexity and wealth-concentration issues.
I don’t understand how automation changes the energy, material, or complexity costs (think supply chains or fuel flows) associated with a technology.
Yes, and fusion research is fascinating. But consider that while our understanding of nuclear physics has been pretty well constant for decades, more and more money goes into more and more expensive fusion facilities, whereas fission power (which does work; I’m not disputing that, just the cornucopian claims about it) was taken advantage of pretty much as soon as it was understood. That suggests to me that the sheer difficulty of fusion is such that whatever technology makes it possible is likely to be completely uneconomic. Technology is not an all-powerful force; it is just an accumulation of knowledge about how to make possible things happen. Some things will turn out not to be possible, or to require too much effort to be worthwhile.
Except that when we look out into the universe we don’t see Dyson spheres, or evidence of replicators from elsewhere having passed our way, and we would be able to see Dyson spheres from quite a distance. It doesn’t happen. I’ve never understood why so few people look at the Fermi paradox and consider the possibility that it doesn’t mean we are a special snowflake or that we are doomed, but instead that intelligent life just doesn’t have a grand destiny among the stars and never has.
How much does it cost to maintain the current facilities? By what factor does it make nuclear energy more expensive?
The most important component of economic cost is human labor. We have plenty of energy and materials left in the universe. “Complexity” is not a limited resource, so I don’t understand what a “complexity cost” is.
Yes, but I think that current technology is very far from the limits of the possible.
Sure, because we are the only intelligent life in the universe. What’s so surprising about that?
To anyone out there embedded in a corporate environment, any tips and tricks to getting ahead? I’m a developer embedded within the business part of a tech organization. I’ve only been there a little while though. I’m wondering how I can foster medium-term career growth (and shorter-term, optimize performance reviews).
Of course “Do your job and do it well” tops the list, but I wouldn’t be asking here if I wanted the advice I could read in WSJ.
From personal observations, “Do your job and do it well” most emphatically does not top the list. Certainly you have to do an adequate job, but your success in a corporate environment depends on your interpersonal skills more than on anything else. You depend on other people to get noticed and promoted, so you need to be good at playing the game. If you haven’t taken a Dale Carnegie course or similar, do so. Toastmasters is useful, too. In general, if you learn to project a bit more status and competence than you think you merit, people will likely go along with it.
Just to give an example, I have seen a few competent but unexceptional engineers become CEOs and CTOs over a few short years in a growing company, while other, better engineers never advanced beyond a team lead, if that.
If you are an above average engineer/programmer etc. but not a natural at playing politics, consider exploring your own projects. If you haven’t read Patrick McKenzie’s blog about it, do so. On the other hand, if striking out on your own is not your dream, and you already have enough drive, social skills and charisma to get noticed, you are not likely to benefit from whatever people on this site can tell you.
Perhaps we could be more specific about the social / political skills. I am probably not good at these skills, but here are a few things I have noticed:
Some of your colleagues have a connection between them unrelated to the work, usually preceding it. (Former classmates. Relatives; not necessarily having the same surname. Dating each other. Dating the other person’s family member. Members of the same religious group. Etc.) This can be a strong emotional bond which may override their judgement of the other person’s competence. So for example, if one of them is your superior, and the other is your incompetent colleague you have to cooperate with, that’s a dangerous situation, and you may not even be aware of it. -- I wish I knew the recommended solution. My approach is to pay attention to company gossip, and to be careful around people who are clearly incompetent and yet not fired. And then I try to take roles where I don’t need their outputs as inputs for my work (which can be difficult, because incompetent people are very likely to be in positions where they don’t deliver the final product, as if either they or the company were aware of the situation on some level).
If someone complains about everything, that is a red flag; this person probably causes the problems, or at least contributes to them. On the other hand, if someone says everything is great and seems like they mean it, that’s possibly also a red flag; it could be a person whose mistakes have to be fixed by someone else (e.g. because of the reasons mentioned in the previous paragraph), and that someone else could become you.
An extra red flag is a person who makes a lot of decisions and yet refuses to provide any of them in written form. (Here, “written form” includes a company e-mail, or generally anything that you could later show to a third party. For example, in the case when the person insists on something really stupid, things go horribly wrong, and then suddenly the person says it was actually your idea.) -- One nice trick is to send them an e-mail containing the decisions they gave you, and say something like “here is the summary of our meeting; please confirm that it’s correct, or correct me if I’m wrong”.
Sometimes a person becomes an informational bottleneck between two parts of the company. That could happen naturally, or could be a strategy on their part. In such case, try to find some informal parallel channels to the other part of the graph. Do it especially if you are discouraged by the given person from doing so. For example, if they say the other part is stupid and blames them for all troubles of your part. (Guess what: He is probably telling them the same thing about your part. So now he is the only person the whole company trusts to fight for their best interests against the other stupid part.)
Okay, this was all the dark side. From the light side, being nice to people and having small talk with them is generally useful. Remember facts about them, make notes if necessary (not in front of them). Make sure you connect with everyone at least once in a while, instead of staying within your small circle of comfort.
I’d beware conflating “interpersonal skills” with “playing politics.” For CEO at least (and probably CTO as well), there are other important factors in job performance than raw engineering talent. The subtext of your comment is that the companies you mention were somehow duped into promoting these bad engineers to executive roles, but they might have just decided that their CEO/CTO needed to be good at managing or recruiting or negotiating, and the star engineer team lead didn’t have those skills.
Second, I think that the “playing politics” part is true at some organizations but not at others. Perhaps this is an instance of All Debates are Bravery Debates.
My model is something like: having passable interpersonal/communication skills is pretty much a no-brainer, but beyond that there are firms where it just doesn’t make that much of a difference, because they’re sufficiently good at figuring out who actually deserves credit for what that they can select harder for engineering ability than for politics. However, there are other organizations where this is definitely not the case.
Certainly there is a spectrum there.
I did not mean it that way in general, but in one particular case both ran the company into the ground, one by picking a wrong (dying) market, the other by picking a poor acquisition target (the code base hiding behind a flashy facade sucked). I am not claiming that if the company promoted someone else they would have done a better job.
If we define “playing politics” as “using interpersonal relationships to one’s own advantage and others’ detriment”, then I have yet to see a company with more than a dozen employees where this wasn’t commonplace.
If we define “interpersonal skills” as “the art of presenting oneself in the best possible light”, then some people are naturally more skilled at it than others and techies rarely top the list.
As for trusting the management to accurately figure out who actually deserves credit, I am not as optimistic. Dilbert workplaces are contagious and so very common. I’m glad that you managed to avoid getting stuck in one.
Yes, definitely agree that politicians can dupe people into hiring them. Just wanted to raise the point that it’s very workplace-dependent. The takeaway is probably “investigate your own corporate environment and figure out whether doing your job well is actually rewarded, because it may not be”.
I have a working hypothesis that it is, to a large degree, a function of size. Pretty much all huge companies are Dilbertian, very few tiny ones are. It’s more complicated than just that, because in large companies people often manage to create small semi-isolated islands or enclaves with a culture different from the surroundings, but I think the general rule that the concentration of PHBs is correlated with company size holds.
I worked mostly for small companies, and Dilbert resonates with me strongly.
It probably depends on power differences and communication taboos, which in turn correlate with company size. In a large company, having a power structure is almost unavoidable; but you can also have a dictator making stupid decisions in a small company.
Being a manager is a radically different job from being an engineer. In fact, I think that (generalization warning!) good engineers make bad managers. Different attitudes, different personalities, different skill sets.
One particularly simple and easy-to-follow tip, to add to the Toastmasters and leadership-course advice, is that you should also signal your interest in these things to those around you. Some of the other advice here can take time and be hard to achieve; you can’t just flip a switch and become charismatic or a great public speaker. So in the meantime, while you work on all those awesome skills, don’t forget to simply let others know about your drive, ambitions, and competency.
This is easier to pull off than the fake-it-till-you-make-it trick. It’s more about show-your-ambition-till-you-make-it. It’s easy to do because you don’t have to fake anything. It reminds me of this seduction advice I read from Mystery’s first book that went something along the lines of, you don’t have to be already rich to seduce somebody, you just have to let them know you have ambition and desire to one day be rich/successful.
I recently read this piece on meritocracy; it rang quite true to me from personal experience. I work with a guy of similar ability to me, but I think I would beat him on most technical and simple people skills. However, he still gets ahead by being more ambitious and upfront than I am, and while he’s a bit more qualified on paper, he uses it to far better effect. (No bitterness, he’s still a good guy to work with and I know it’s up to me to be better. Also, I’m in kind of mid-level finance rather than coding.)
I think that article is a bit bitter. It probably applies to some organizations, but I think most places at least manage to consider competence as a substantial part of the mix in promotion decisions.
Which is not to say signaling ambition isn’t valuable (I absolutely believe it is). Just that the article is bitter.
http://lesswrong.com/lw/jsp/political_skills_which_increase_income/ is an article by a LessWrong member that lists the relevant factors. Political abilities are important. That means signaling modesty, apologizing when necessary, and flattering people above you in the chain of command.
Here’s an idea for enterprising web-devs with a lot more free time than me: an online service that manages a person’s ongoing education with contemporary project management tools.
Once signed up to this service, I would like to be able to define educational projects with tasks, milestones, deliverables, etc. against which I can record and monitor my progress. If I specify dependencies and priorities, it can carry out wazzy critical path analysis and tell me what I should be working on and in what order. It can send me encouraging/harassing emails if I don’t update it regularly.
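To make the “tell me what I should be working on and in what order” part concrete, here is a minimal sketch in Python of the dependency-ordering core; the task names and dependencies are invented for illustration, and a real service would obviously need persistence, deadlines, and the harassing emails on top of this:

    # Minimal sketch: order study tasks so that prerequisites come first.
    # Task names and dependencies are invented for illustration.
    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    tasks = {
        "read chapter 1": set(),
        "exercises for chapter 1": {"read chapter 1"},
        "read chapter 2": {"read chapter 1"},
        "mock exam": {"exercises for chapter 1", "read chapter 2"},
    }

    order = list(TopologicalSorter(tasks).static_order())
    print(order)  # one valid study order that respects every dependency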
Some use cases:
I have enrolled in a formal course of study such as an undergrad degree. I can specify my subjects, texts, deadlines, tests and the like. It will tell me what I should be studying in what order, what areas I’m neglecting, and what I really need to get done before the coming weekend.
I have recently started a new job, and have a package of technologies and skills to learn. Some are more important than others, or have much longer time horizons. If I have x hours a week to develop these skills, it will tell me what I should be doing with those x hours.
Conversely, I am an employer or educator (or world-saving organisation) who wishes oversight of another person’s study. I can assign them a prefabricated syllabus and observe their progress.
Some things that might fall out of a system like this once the infrastructure is in place:
A community whose members can offer each other high-context support and advice
A lot of useful and interesting data on effective learning routes through various subjects, perhaps powering a recommendation service
I imagine there are enough autodidacts and students on LessWrong to establish a preliminary user base. I for one would happily pay for a service like this.
Will add this to my list of ed-tech start-up ideas to validate.
I’m interested in your other ed-tech startup ideas, if you don’t mind sharing.
A list of them is here: http://www.quantifiedstartup.net/startup/
Student Relationship Management software? Sounds like a neat idea.
Ouch, that made my mind come up with a different startup idea, Relationship Management software. Basically it would be a website where you can post updates about your relationship every day, like “Last night we argued for 30 minutes” or “I feel that he’s unusually emotionally distant” or something like that. You would also input your partner’s astrological sign, and so on. And the website would give you an overall prognosis and some sort of bullshit psychological advice, like “Try to be more conscious of your needs in the relationship” or “At this point it’s likely that he’s cheating on you”. And it would show tons of ads for related products and services. I think some people would love it!
For a different sort of person, any sort of quantified self about relationships would be interesting. (I heard that an app exists where you record a happy face or a sad face after every time talking to a long distance partner, and it doesn’t give you any advice. Unfortunately, I can’t remember the name or where I heard of it.)
For a minimal product, perhaps just start with the dependencies and priorities side of things? That seems to be the core of such a product, and the rest is dressing it up for usability.
Does anyone have good resources on hypnosis, especially self-hypnosis? I’m mostly looking for how-tos but effectiveness research and theoretical grounding are also welcome.
http://cognitiveengineer.blogspot.com/
by jimmy, our resident evil hypnotist
“Monsters & Magical Sticks: There’s No Such Thing As Hypnosis?” is a fine book for explaining what hypnosis is.
The recurring punchline is that there’s no Hypnosis but there are hypnotic phenomena. Being a good hypnotist is basically about using a bunch of hypnotic phenomena to go where you want to go.
Framing an interaction is very important. A hypnotherapist I know says that his hypnosis sessions for quitting smoking begin with the phone call.
The patient calls to make an appointment. He answers and asks whether the person has made the decision to quit smoking. If the patient says “no”, he tells the patient to call again once they have made the decision. Hypnotherapists do a lot of stuff like this.
In the spirit of Matthew McConaughey’s Oscar acceptance speech, who is the you-in-ten-years that you are chasing?
The most important writer of Latin American science fiction.
See http://lesswrong.com/lw/g94/link_your_elusive_future_self/ for a reason you can’t know.
I have no idea. (Is that a bad thing?)
I am currently teaching myself basic Spanish. At the moment, I’m using my library’s (highly limited) resources to refresh my memory of Spanish learned in high school and college. However, I know I won’t go far without practice. To this end, I’d like to find a conversation partner.
Does anyone have any recommendation of resources for language learners? Particularly resources that enable conversation (written or spoke) so learners can improve and actually use what they are learning? The resource wouldn’t have to be dedicated solely to Spanish learning. Eventually, I want to learn other languages as well (such as German and French).
The ROI of learning a foreign language is low, unless it is English. But if you must, I would say the next best thing to immersive instruction is to watch Spanish Hulu as a learning aid. You’d get real conversations at conversational speed.
So after gwern pointed out that there is a transcript, and after reading it, I made a back-of-the-envelope calculation.
Assumptions: According to Wikipedia, people with a bachelor’s degree or higher make $56,078 per year, so about $27 per hour. Learning German increases yearly income by 4% and takes about 750 class hours, according to the Foreign Service Institute. Learning Spanish increases income by 1.5% and takes 600 class hours. If we assume that one class hour costs $11.25 (by glancing at various prices posted on different sites), we can make a calculation.
Assuming the language is learnt instead of working, that the foregone hours have no impact on later earnings, and that the foregone hours are valued at the average salary, the student incurs an opportunity cost in addition to the pure class cost. Ignoring all other effects, learning German costs $28,657 with a return of $2,243 p.a., and learning Spanish costs $22,926 with a return of $841 p.a. This works out to 7.8% and 3.7% on the initial investment, respectively.
So learning German pays off after 13 years, and learning Spanish after 28 years. Assuming the language is learnt at a young age, at least learning German can be worthwhile. More benign assumptions, such as learning outside of class with some kind of program like Duolingo, will increase the return further, making learning even more worthwhile.
Of course I did not consider learning something else in those hundreds of hours that could have an even greater effect on income, but for high income earners language learning is a very plausible way to increase their income. I assume this goes especially for people who have a more obvious use for an additional language, like translators or investors.
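For anyone who wants to check or tweak the arithmetic, here is the same back-of-the-envelope calculation as a few lines of Python; every figure is one of the assumptions stated above, not an established fact:

    # Back-of-the-envelope check of the numbers above; all inputs are
    # the commenter's assumptions, not established facts.
    salary = 56078                 # assumed yearly income with a bachelor's degree
    hourly = salary / 2080         # roughly $27 per working hour
    class_cost = 11.25             # assumed cost per class hour

    for lang, hours, uplift in [("German", 750, 0.04), ("Spanish", 600, 0.015)]:
        cost = hours * (hourly + class_cost)      # tuition plus foregone wages
        annual_return = salary * uplift
        print(lang, round(cost), round(annual_return),
              f"{annual_return / cost:.1%}", round(cost / annual_return, 1), "years")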
You’re assuming that the correlation is purely causal and none of the increased income correlating with language learning is due to confounds; this is never true and so your ROI is going to be overstated.
Most people have discount rates >4%, which excludes the latter. Throw in some sort of penalty (50% would not be amiss, given how many correlations crash and burn when treated as causal), and that gets rid of the former.
Language learning for Americans just doesn’t work out unless one has a special reason.
(Unless, of course, it’s a computer language.)
Would be nice to know those. The paper states that people in managerial positions get substantially higher relative returns from learning a foreign language. That would be a special reason.
Maybe the returns are much higher for low-income earners. That question is uninteresting for the average LW user but still. I further wonder what the return on learning a language is in the future.
As an aside, I am surprised how hostile US Americans can be when it is suggested to learn another language.
Personally, I find most suggestions and discussion of Americans learning other languages to be highly irritating. They have not considered all of the relevant factors (continent sized country with 310m+ people speaking English, another 500m+ English-speakers worldwide, standard language of all aviation / commerce / diplomacy / science / technology, & many other skills to learn with extremely high returns like programming), don’t seem to care even when it is pointed out that the measured returns are razor-thin and near-zero and the true returns plausibly negative, and it serves as an excuse for classism, anti-Americanism, mood affiliation with cosmopolitanism/liberalism, and all-around snootiness.
It doesn’t take too many encounters with someone who is convinced that learning another language is a good use of time which will make one wealthier, morally superior, and more open-minded to start to lose one’s patience and become more than a little hostile.
It’s a bit like people who justify video-gaming with respect to terrible studies about irrelevant cognitive benefits (FPSes make me faster at reaction-time? how useful! not) - I want to grab them, shake them a little bit, and say ‘look at yourself from the outside! can’t you see that you’re transparently grabbing at flimsy justifications for something which you do for completely different reasons? You didn’t start playing Halo because you read and meta-analyzed a bunch of psych studies and decided that the lifetime reduction in risk from a 3% fall in simple reaction-time was worth 20 hours a week of work. And we both know you didn’t learn French because it’s going to pay off in future salary increases—you learned it because you had to learn something in high school and French has better cultural prejudices associated with it than Spanish!’
True. I come from the other side, having grown up in Germany and having met a lot of foreign knowledge workers unwilling to learn even a lick of German. I actually know of several people who are unable to say “No, thank you” or “One beer, please” and are unwilling to learn. Personally I see this as highly disrespectful of the host country. After stating this opinion, the unwillingness is then justified with the international status of English.
Anyhow, we are drifting into politics, so I’d like to end this debate at this point, with your understanding. I hope the downvote is not from you, and even more so that it is not because of that line alone.
Yes, living in a foreign country is a substantially different proposition (and I’d guess that the correlated increases in income would be much higher). But comparing to Germany highlights part of why it’s such a bad idea for Americans: the population of America alone is 3.78x that of Germany, never mind the entire Anglophone world.
Disclaimer: I won’t listen to the podcast, because I am boycotting any medium that is not text.
Language learning may have extremely low ROI in general but extremely high in special cases. E.g. I would not be surprised by finding that people learning the language of the foreign country they live in increases their subjective wellbeing. Or if people want to work as translators. Or they are investors and are specialising in a region not speaking English as their main language.
This almost seems like a fallacy. I might call it “homogeneity bias” or “mistaking the average for the whole”, only to find out that it is already known under a different name and well documented.
Good news! Freakonomics is, along with EconTalk and patio11, one of the rare podcasts which (as you would see if you had clicked through) provides transcripts for most or all of their episodes.
Well fuck me. I saw the streaming bar and closed the tab, so entirely my fault.
Thank you for pointing it out.
Great link. From the cited paper:
Thanks for the link. I hadn’t actually considered language learning in an ROI-fashion, but it’s obviously something I should think about before making heavy investments.
I still think it worth the time since my field involves dealing with non-English parties often. Though I have no distinct need for bilingualism at the moment, it will make me more hireable. However, I do need to evaluate my time learning Spanish against, say, the gains of spending that same time learning programming languages.
I’d like to be your conversation partner. My Spanish is Colombian. PM me for contact details.
Also, there’s Papora.
If you are willing/able, the best way is to go to a Spanish school in Mexico or Central America and live with a host family for a month or two. I learned more in two months doing that than in my first four university classes combined. This probably doesn’t fall under “teaching yourself,” but if you are serious, the other options can’t even touch the ROI of an immersive experience in terms of time and money per unit of Spanish acquired.
Fluenz is a great computer-based program, but it’s expensive. I used Rosetta Stone a bit, this is way better.
Pimsleur audio tapes for car rides or an MP3 player
Duolingo is free, but isn’t for active conversation.
Look on Meetup.com for a Spanish conversation meetup.
italki is a good option for a conversation focus.
http://markmanson.net/foreign-language
http://www.andrewskotzko.com/how-to-unlock-foreign-languages/
The last two links are about principles/suggestions. I agree with most of them. This is my advice: say anything that you can, whenever you can. Embarrassment is often the biggest obstacle. When you are beginning with conversations, say anything you can, even if it is a single word, grammatically incorrect, or irrelevant.
With regard to what niceguyanon said about low ROI on languages aside from English, I think there are social capital benefits, self-confidence benefits, cognitive functioning benefits that are valuable. Not to mention travel benefits—Spanish makes travel in many countries easy.
Thank you very much for those links! They’re very helpful.
As I mentioned in reply to niceguyanon, my field is one where language acquisition has a higher value than it might for other fields. And, I’ll admit, I do feel that the confidence and cultural benefits are worth the investment, for me at least. Expression and communication are important to my work. Becoming a more efficient communicator means making myself more valuable.
I know that audio tapes and text books are not how I’m going to learn a language. Like many of my peers, I spent two years in classes repeating over and over “tener, tener… tengo, tengo… tengas, tengas....” and gained nothing out of it except how to say “Me gusto chocolate.” I know how language learning doesn’t work, if nothing else.
You could look into volunteering at a charity that serves Hispanics, or find an ESL conversation group and see whether they would be interested in spending time speaking Spanish with you.
Recommendations for good collections of common Deep Wisdom? General or situation specific would be helpful (e.g. all the different standard advice you get while picking your college major, or going through a tough break up).
Check these out.
http://lesswrong.com/lw/gx5/boring_advice_repository/
http://lesswrong.com/lw/i64/repository_repository/
Yet another possible failure mode for naive anthropic reasoning.
I am curious about whether Borderline Personality Disorder is overrepresented on LessWrong compared to the general population.
Is Wikipedia’s article on BPD a good description of your personality any time in the past 5 years? For the sake of this poll, ignore the specific “F60.31 Borderline type” minimum criteria.
[pollid:678]
You are bound to ‘find’ that BPD is overrepresented here by surveying in this manner. (hint: medical student syndrome)
I could repeat this poll in a venue where the people are similarly prone to medical student syndrome, but not as prone to filling some kind of void with rationality or other epiphanies. That would provide a baseline for comparison. But I don’t yet know where exactly I would find such a venue.
The five results so far go against that.
You can’t detect whether a systematic bias in the sampling method exists by looking at the results.
If you have a prior, you can.
In a slightly more unrolled manner, if the results you are getting are inconsistent with your ideas of how the world works, one hypothesis that you should update is that your ideas about the world are wrong. But another hypothesis that you should also update is that your sampling method is wrong, e.g. by having a systematic bias.
Sure you can, in principle. When you have measured covariates, you can compare their sampled distribution to that of the population of interest. Find enough of a difference (modulo multiple comparisons, significance, researcher degrees of freedom, etc.) and you’ve detected bias. Ruling out systematic bias using your observations alone is much more difficult.
Even in this case, where we don’t have covariates, there are some patterns in the ordinal data (the concept of ancillary statistics might be helpful in coming up with some of these) that would be extremely unlikely under unbiased sampling.
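As a toy illustration of the covariate idea (not something this particular poll can actually do), suppose we knew the respondents’ sex and had a baseline sex ratio for the community; a goodness-of-fit test against that baseline might look like this, with every count invented:

    # Compare the sample's composition to an assumed community baseline.
    # Every number here is made up purely for illustration.
    from scipy.stats import chisquare

    observed = [38, 2]                 # hypothetical male / female respondents
    baseline = [0.90, 0.10]            # assumed community-wide proportions
    expected = [p * sum(observed) for p in baseline]

    stat, p_value = chisquare(observed, f_exp=expected)
    print(stat, p_value)   # a small p-value suggests the sample is not representative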
That means that you need more data. Having a standard against which to train your model means that you need more than just the results of your measurement.
I was just contesting your statement as a universal one. For this poll, I agree you can’t really pursue the covariate strategy. However, I think you’re overstating the challenge of getting more data and figuring out what to do with it.
For example, measuring BPD status is difficult. You can do it by conducting a psychological examination of your subjects (costly but accurate), you can do it by asking subjects to self-report on a four-level Likert-ish scale (cheap but inaccurate), or you could do countless other things along this tradeoff surface. On the other hand, measuring things like sex, age, level of education, etc. is easy. And even better, we have baseline levels of these covariates for communities like LessWrong, the United States, etc. with respect to which we might want to see if our sample is biased.
You argued against a more general statement than the one I made. But I did choose my words in a way that focused on drawing conclusions from the results and not results + comparison data.
Leaving aside the sample size, a sample value of zero cannot be an overestimate.
There is no reason to leave aside the sample size. The value was zero because of the small sample size.
The observed reality that the first 5 people voted that BPD doesn’t apply to them provides nearly zero Bayesian evidence against the idea of systematic bias from surveying in that manner.
While ignoring the sample size, I’d put a high probability on my comment having something to do with the intense response in the other direction. (I am not even sure how you can read all of it and not think that it is at least ‘poorly descriptive’, no matter who you are)
There are probably checklists for diagnosing Borderline Personality Disorder that are much better than simply reading a Wikipedia article and thinking about whether it applies to you.
I found one, which doesn’t look enormously reputable but is probably better than wikipedia.
People with borderline personality disorder generally lack “insight,” i.e. they are typically unaware that they have BPD; will deny having it; and will get extremely defensive at the suggestion they have it.
One can contrast with, for example, obsessive/compulsive disorder sufferers who usually do have pretty good insight.
So a survey based on self-reporting is not going to be very helpful.
Anyway, I doubt that there are many people on this board with BPD. This is based on my interactions and observations.
Also, this discussion board doesn’t seem like it would be very attractive to someone with BPD since it doesn’t offer a steady stream of validation. For example, it’s common on this board for other posters, even those who agree with you on a lot of stuff, to challenge, question, or downvote your posts. For someone with BPD, that would be pretty difficult to handle.
The main mental issue I sense on this board (possibly disproportionate to the general population) is Asperger’s. There also seems to be a good deal of narcissism, though perhaps not to the point where it would qualify as a mental disorder.
So if a person with BPD would discover LW and decide they like the ideas, what would they most likely do?
My model says they would write a lot of comments on LW just to prove how much they love rationality, expecting a lot of love and admiration in return. At first they would express a lot of admiration towards people important in the rationalist community; they would try to make friends by open flattery (by giving what they want to get most). Later they would start suggesting how to do rationality even better (either writing a new sequence, or writing hundreds of comments repeating the same few key ideas), trying to make themselves another important person, possibly the most important one. But they would obviously keep missing the point. After the first negative reactions they would backpedal and claim to be misunderstood. Later they would accuse some people of persecuting them. After seeing that the community does not reward this strategy, they would accuse the whole LW of persecution, and try to split apart their own rationalist subcommunity centered around them.
I hate to point this out, but it is already easy enough to ridicule the proper spelling; it’s spelled Asperger.
Edit: Sorry, I tried to delete this comment, but that doesn’t seem to be possible for some reason.
Fixed. FWIW thanks.
According to the Wikipedia article: “People with BPD feel emotions more easily, more deeply and for longer than others do.”
To me that doesn’t seem like the LW crowd, what would make you think that there’s an overrepresentation?
Because it’s a group of people who are excited for years about a rule for calculating conditional probability?
Yeah, I’m not serious here, but I will use this to illustrate the problem with self-diagnosis based on a description. Without hard facts, or without being aware of what exactly the distribution in the population looks like, it’s like reading a horoscope.
Do I feel emotions? Uhm, yes. Easily? Uhm, sometimes. More deeply than others? Uhm, depends. For longer than others? I don’t have good data, so, uhm, maybe. OMG, I’m a total psycho!!!
No, there are a lot of data points.
One example: At the community we had one session where empathy was the topic. The person who was on stage to explain to the rest of us what empathy is talked about how it’s having an accurate mental model of other people, not about empathy being a matter of feeling emotions.
I don’t want to say that having an accurate mental model of other people isn’t useful, but it’s not what people mean by the word empathy in a lot of other communities. Empathy usually refers to a process that’s about feeling emotions.
I actually attributed this to a higher than normal base rate of Asperger Syndrome.
I had an impulse to answer “very descriptive,” but I controlled it.
Better: Alice had an impulse to answer “not at all descriptive” but she controlled it and said ‘very descriptive’.
Clearly not :-D
I seem to feel emotions less intensely than the average person. It would have been interesting to add that as an option.
Is anyone going to be at the Eastercon this weekend in Glasgow? Or, in London later in the year, Nineworlds or the Worldcon?
ETA: In case it wasn’t implied by my asking that, I will be at all of these. Anyone is free to say hello, but I’m not going to try to arrange any sort of organised meetup, given the fullness of the programmes of these events.
In the last open thread, someone suggested rationality lolcats, and then I made a few memes, but only put them up at the last minute. In case anyone would like to see them, they are here.
What’s a good Bayesian alternative to statistical significance testing? For example, if I look over my company’s email data to figure out what the best time of the week to send someone an email is, and I’ve got all possible hours of the week ordered by highest open rate to lowest open rate, how can I get a sense of whether I’m looking at a real effect or just noise?
In that scenario, how much does it really matter? It’s free to send email at one time of week rather than another, so your only cost is the opportunity cost of picking a bad time to email people, which doesn’t seem likely to be too big.
Our email send by the hour would get far lumpier, so we would have to add more servers in order to handle a much higher peak emails sent per minute. And it takes development effort to configure emails to send at an intelligent time based on the user’s timezone.
OK, here’s a proposed solution I came up with. Start with the overall open rate for all emails regardless of time of the week. Use that number, and your intuition for how much variation you are likely to see between different days and times (perhaps informed by studies on this subject that people have already done) to construct some prior distribution over the open probabilities you think you’re likely to see. You’ll want to choose a distribution over the interval (0, 1) only… I’m not sure if this one or this one is better in this particular case. Then for each hour of the week, use maximum-a-posteriori estimation (this seems like a brief & good explanation) to determine the mode of the posterior distribution, after you’ve updated on all of the open data you’ve observed. (This provides an explanation of how to do this.) The mode of an hour’s distribution is your probability estimate that an email sent during that particular hour of the week will be opened.
Given those probability estimates, you can figure out how many opens you’d get if emails were allocated optimally throughout the week vs how many opens you’d get if they were allocated randomly and figure out if optimal allocation would be worthwhile to set up.
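A minimal sketch of that recipe in Python, using a Beta prior (the conjugate prior for a binomial opened/not-opened outcome) centred on the overall open rate; the send and open counts and the prior strength are invented placeholders for the real email logs:

    # Beta-prior MAP estimate of the open rate for each hour-of-week bucket.
    # All counts are invented; a real run would read them from the email logs.
    sends = {"Mon 09:00": 1200, "Mon 20:00": 300, "Sat 11:00": 80}
    opens = {"Mon 09:00": 310,  "Mon 20:00": 95,  "Sat 11:00": 12}

    overall_rate = sum(opens.values()) / sum(sends.values())
    strength = 50                        # prior pseudo-observations, an arbitrary choice
    a, b = overall_rate * strength, (1 - overall_rate) * strength

    for hour in sends:
        post_a = a + opens[hour]                      # Beta posterior parameters
        post_b = b + sends[hour] - opens[hour]
        map_estimate = (post_a - 1) / (post_a + post_b - 2)   # mode of the Beta posterior
        print(hour, round(map_estimate, 3))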
Not Bayesian, but can’t you just do ANOVA w/ the non-summarized time of day vs. open rate (using hourly buckets)? That seems like a good first-pass way of telling whether or not there’s an actual difference there. I confess that my stats knowledge is really just from natural sciences experiment-design parts of lab classes, so I have a bias towards frequentist look-up-in-a-table techniques just because they’re what I’ve used.
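For what it’s worth, that first pass is only a few lines with scipy; the per-bucket 0/1 opened-or-not outcomes below are fabricated, and ANOVA on binary outcomes is only a rough approximation anyway:

    # One-way ANOVA across hourly buckets of opened (1) / not opened (0) outcomes.
    # The data is fabricated; real buckets would each hold thousands of emails.
    from scipy.stats import f_oneway

    open_flags_by_hour = [
        [1, 0, 0, 1, 0, 1, 0, 0],   # e.g. the 09:00 bucket
        [0, 0, 1, 0, 0, 0, 0, 0],   # e.g. the 14:00 bucket
        [1, 1, 0, 1, 0, 1, 1, 0],   # e.g. the 20:00 bucket
    ]

    stat, p_value = f_oneway(*open_flags_by_hour)
    print(stat, p_value)   # small p-value: at least one bucket's mean open rate differs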
Rant for a different day, but I think physics/engineering students really get screwed in terms of learning just enough stats/programming to be dangerous. (I.e., you’re just sort of expected to know and use them one day in class, and get told just enough to get by- especially numerical computing and C/Fortran/Matlab).
Suppose you have three hypotheses: (1) It’s better to email in the morning (2) It’s better to email in the evening (3) They’re equally good
Why do you care about (3)? If you’re just deciding whether to email in the morning or evening, (3) is irrelevant to ranking those two options.
The full-fledged Bayesian approach would be to identify the hypotheses (I’ve simplified it by reducing it down to just three), decide what your priors are, calculate the probability of seeing the data under each of the hypotheses, and then combine those according to Bayes’ formula to find the posterior probability. However, you don’t have to run through the math to see that if your priors for (1) and (2) are equal, and the sample is skewed towards evening, then the posterior for (2) will be larger than the posterior for (1).
The only time you’d actually have to run through the math is if your priors weren’t equal, and you’re trying to decide whether the additional data is enough to overcome the difference in the priors, or if you have some consideration other than just choosing between morning or evening (for instance, you might find it more convenient to just email when you first have something to email about, in which case you’re choosing between “email in morning”, “email in evening” and “email whenever it’s convenient to me”).
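To make that concrete, here is a toy version of the calculation in Python; the hypothesised open rates, the priors, and the data are all invented for illustration:

    # Posterior over two hypotheses about when emails are more likely to be opened.
    # Priors, hypothesised rates, and data are all invented.
    from math import comb

    def binom_lik(k, n, p):
        """Probability of k opens out of n sends if the true open rate is p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    data = {"morning": (22, 100), "evening": (31, 100)}   # (opens, sends)

    hypotheses = {
        "morning better": {"prior": 0.6, "morning": 0.30, "evening": 0.20},
        "evening better": {"prior": 0.4, "morning": 0.20, "evening": 0.30},
    }

    unnormalized = {}
    for name, h in hypotheses.items():
        likelihood = 1.0
        for slot, (k, n) in data.items():
            likelihood *= binom_lik(k, n, h[slot])
        unnormalized[name] = h["prior"] * likelihood

    total = sum(unnormalized.values())
    for name, value in unnormalized.items():
        print(name, round(value / total, 3))   # posterior probability of each hypothesis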
“Statistical significance” is just a shorthand to avoid having to actually do a Bayesian calculation. For instance, suppose we’re trying to decide whether a study showing that a drug is effective is statistically significant. If the only two choices were “take the drug” and “don’t take the drug”, and we were truly indifferent between those two options, the issue of significance wouldn’t even matter: we should just take the drug. The reason we care about whether the test is significant is that we aren’t indifferent between the two choices (we have a bias towards the status quo of not taking the drug, the drug would cost money, there are probably going to be side effects, etc.) and there are other options (take another drug, run more drug trials, etc.). When a level of statistical significance is chosen, an implicit statement is being made about how much weight is being given to the status quo.
Does anyone know of a way to collaboratively manage a flashcard deck in Anki or Mnemosyne? Barring that, what are my options so far as making it so?
Even if only two people are working on the same deck, the network effects of sharing cards makes the card-making process much cheaper. Each can edit the cards made by the other, they can divide the effort between the two of them, and they reap the benefit of insightful cards they might not have made themselves.
You could use some sort of cloud service: for example, Dropbox. One of the main ideas behind Dropbox was to have a way for multiple people to easily edit stuff collaboratively. It has a very easy user interface for such things (just keep the deck in a synced folder), and you can do it even without all the technical fiddling you’d need for git.
If the deck format is some kind of text like XML you could look into using git for distribution and a simple text editor for editing.
Exactly the right avenue. Unfortunately, Anki typically uses its own idiosyncratic data format that’s not very handy for this kind of thing, but it’s possible to export and import decks to text, as it turns out.
The issue with this is that if you’re months into studying a deck and you’d like to merge edits from other contributors, I’m not certain that you can simultaneously import the edits and keep all of your progress.
Even so, the text deck route has the most promise as far as I can tell.
Anki itself stores its data in SQLite databases.
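Here is a hedged sketch of the export-to-text route, assuming the standard Anki 2 collection layout (a notes table whose flds column separates fields with the 0x1f character); check your own collection before relying on this, and note that Anki’s built-in text export may be the simpler option:

    # Dump note fields from an Anki 2 collection into a tab-separated text file
    # that can be tracked in git. Assumes the usual Anki 2 schema; verify first.
    import sqlite3

    FIELD_SEP = "\x1f"
    conn = sqlite3.connect("collection.anki2")   # path to your Anki collection

    with open("deck_export.tsv", "w", encoding="utf-8") as out:
        for guid, flds in conn.execute("SELECT guid, flds FROM notes"):
            out.write(guid + "\t" + flds.replace(FIELD_SEP, "\t") + "\n")

    conn.close()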
I think there is a good chance that Anki itself will get better over time at collaborative deck editing. I think that’s one of the reasons why Damien made integration with the web interface one of the priorities in Anki 2.
I found this on Twitter, specifically related to applications for the blind (but the article is more general-purpose): Glasses to simulate polite eye contact
Having read only the article and the previously-mentioned tweet, and no comments and knowing nothing about what it actually looks like, I’m predicting that it falls into the uncanny valley, at best.
I’ve seen them and believe the correct descriptor is “ridiculous”.
I was thinking “creepy”, but I guess it’s that too.
What’s the copyright/licensing status of HPMOR?
Given that it’s fanfiction, copyright isn’t straightforward. Harry Potter is, in a sense, owned by J.K. Rowling. If you want to do something with HPMOR, send Eliezer an email to ask for permission and he will probably grant it to you.
Good question. I thought http://hpmor.com/info/ would cover the licensing, but nope. Some googling doesn’t turn up any explicit licensing either.
That page makes it clear:
“All fanfiction involves borrowing the original author’s characters, situations, and world. It is ridiculous to turn around and complain if your own ideas get borrowed in turn. Anyone is welcome to steal anything from any fanfiction I write.”
I think that only speaks to writing fanfiction of Eliezer’s fanfiction, not rights over the text itself. By default, the copyright is solely Eliezer’s unless and until he says otherwise.
He only says you’re allowed to steal it. Not to use it with permission. If you take it without permission, that’s stealing, so you have permission, which means that you didn’t steal it, etc.
No, no, no: He didn’t say that you don’t have permission if you don’t steal it, only that you do have permission if you do.
What you said is true: If you take it without permission, that’s stealing, so you have permission, which means that you didn’t steal it.
However, your argument falls apart at the next step, the one you dismissed with a simple “etc.” The fact that you didn’t steal it in no way invalidates your permission, as stealing ⇒ permission, not stealing ⇔ permission, and thus it is not necessarily the case that ~stealing ⇒ ~permission.
The exception proves the rule. Since he gave permission to steal it, that implies that you don’t have permission to take it in general.
I was wondering if there are any services out there that will tie charitable donations to my spending on a certain class of good, or with a certain credit card. E.g. Every time I buy a phone app or spend on in-app purchases, a matching amount of money goes to a particular charity.
There are a lot of credit cards that will give a fixed percentage of money to charity whenever you use them, but I don’t think any will go up to the amounts I bet you want.
Hi, I’ve been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:
“This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.”
Quite simple, really, but I found it extremely interesting.
http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf
I don’t have enough karma to create my own post, so I’m cross-posting this from a gist.
Pascal’s Wager and Pascal’s Mugging as Fixed Points of the Anthropic Principle
Skepticism Meets Belief
Pascal’s Wager and Pascal’s Mugging are two thought experiments that explore what happens when rational skepticism meets belief. As skepticism and belief move towards each other, they approach a limit such that it’s impossible to cross from one to the other without some outside help.
Pascal’s Wager takes the point of view of a rational being attempting to decide whether to believe in a higher being. As humans we can empathize with this point of view; we often have to make important decisions with incomplete or even dubious information. Pascal’s Wager says: it’s impossible to have enough information to make a rational decision about God’s existence, so the rational position is to believe, just in case God exists and belief matters.
Pascal’s Mugging takes the point of view of the higher being attempting to cajole the rational being into paying a token fee to prevent an outrageously terrible but even more outrageously unlikely event. Because the muggee is a skeptic who demands evidence proportional to the infinitesimal prior, there is no amount of magic the mugger could demonstrate that would count as effective proof.
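To make the mugging arithmetic concrete, here is a toy expected-value calculation with entirely made-up numbers; the point is only that if the muggee’s prior shrinks at least as fast as the promised payoff grows, paying never looks worthwhile:

    payoff = 10**100   # utility the mugger promises (invented)
    prior = 10**-102   # muggee's credence that the promise is real (invented)
    cost = 5           # utility lost by handing over the wallet

    expected_gain = prior * payoff - cost
    print(expected_gain)  # negative: the skeptical muggee declines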
I’m God, Therefore I Am
Both of these accounts show a lack of empathy with the higher being’s point of view, because they start from the assumption that the higher being doesn’t exist and that the rational being needs a convincing reason to believe in something extraordinarily unlikely. The first truth the higher being will know is the same one we know: “I think, therefore I am”. If we want to empathize with this point of view, then we have to start with the assumption of existence and work backward.
The anthropic principle does exactly that. It starts from the assumption that the present is true (an infinite prior) and works backward through the more and more unlikely sequence of events that got us here. Since we have an infinite prior to work with, we can justify as many ridiculously unlikely coincidences as we need to explain the present, so long as the result is still a possible universe, i.e. compatible with scientific observation. This is how we can claim with a straight face that it’s possible the Sun is the first star in 13+ billion years to host a civilization at our level, which explains why we seem to be alone and why we’re here and not somewhere else.
Belief As Reality Fluid
If we combine our newfound empathy for higher beings with the anthropic principle, we can flip the stories around and gain new insight. From the point of view of a future evolved higher being, their present is the infinite prior, and the path between our present and theirs is what happened, despite its infinitesimal likelihood from our point of view. If the higher being knows it exists, why does it care about belief? The only reason I can imagine that a higher being would care about belief is if belief were critical to its existence.
In order for a lower being to evolve into a higher being, the lower being has to eventually believe the higher being is possible, so that it can work toward becoming or creating it. From the point of view of the higher being, this belief is a requirement for it to exist and be sentient. If we imagine a higher being as something that is engineered, then at some point a project manager in a technologically advanced civilization with the capability to build a higher being is going to have to be convinced not only that a higher being can exist, but that they can build it.
Building a Higher Being
From this point of view, Pascal’s Wager looks like a filter to identify skeptical people who can truly change their mind (actually believe in a higher being despite being both rational and having flimsy or circumstantial evidence). If time is cyclical and the ultimate outcome of this universe feeds into the starting point of the next, then a higher being would have the ability to influence the general path of the next cycle. Pascal’s Mugging starts to look like a description of the least unlikely event required to bootstrap a higher being into existence. If this is the case, then in some sense the higher being is also its own creator (or at least a very similar previous version). From this vantage the creator can empathize with the process of creating the universe they find themselves in and gain insight into why things are like they are, from an implementation perspective, e.g. “if I was going to program the universe, X would do Y because of Z”.
In some sense, the higher being wills itself into existence as part of a stable, cyclical universe in which some amount of belief is a requirement. Their only responsibility is ensuring that a civilization somewhere in the next universe they build culminates in that one infinite/infinitesimal prior (depending on your point of view) so that the next iteration can be built.
I was brought up a pretty devout Catholic, but I stopped going to church and declared myself an atheist to my family before I got out of high school. I have always been pretty proud of myself for having the intelligence and courage to do this. But today I realized that I follow a thirty-something bearded Jewish guy who, along with a small group of disciples, has performed seemingly impossible deeds, preaches in parables, plans to rise from the dead and bring as many of us as he can with him, defeat evil, and create a paradise where we can all live happily forever. So maybe I haven’t shaken off as many of those old beliefs as I thought...
Happy Easter, everybody.
Gaia
Bit of a smorgasbord of a post (or a gish-gallop, if I’m not mincing words). Sorry to say, but much of your reasoning is opaque to me. Possibly because I misunderstand. Infinite priors? Anthropic reasoning applied to ‘higher beings’, because we empathize with such a higher being’s cogito? You lost me there.
I’d say that the possibility of a non-expected FOOM process would be a counterexample, but then again, I have no idea whether you’d qualify a superintelligence of the uFAI variety as a ‘higher being’.
Didn’t see that coming.
It may be that you’ve put a large amount of effort into coming to the conclusions you have, but you really need to put some amount of effort into bridging those inferential gaps.
Gaia+VR
If you’re going to make up new meanings for words, you should at least organize the definitions to be consistent with dependencies: dependent definitions after words they are dependent on, and related definitions as close to each other as possible. In your list, there are numerous words that are defined in terms of words whose definitions appear afterwards. Among other problems, this allows for the possibility of circular definitions.
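Just to illustrate the ordering rule: a topological sort puts each definition after the terms it depends on, and flags circular definitions. The toy glossary and its dependency edges below are made up for the example.

    from graphlib import TopologicalSorter, CycleError

    # term -> terms its definition depends on (invented example)
    glossary = {
        "network": [],
        "population": ["network"],
        "fitness": ["network"],
        "evolution": ["population", "fitness"],
    }

    try:
        print(list(TopologicalSorter(glossary).static_order()))
    except CycleError as e:
        print("circular definitions:", e)

With an acyclic glossary this prints an order in which every term is defined before it is used; a circular pair would raise CycleError instead.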
Also, many of the definitions don’t make sense, e.g.:
“An algorithm that guides reproduction over a population of networks toward a given criteria. This is measured as an error rate.”
Syntactically, “this” would refer to “criteria”, which doesn’t make sense. If it doesn’t refer to criteria, then it’s not clear what it does refer to.
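For what it’s worth, here is a guess at what the definition seems to be reaching for: an evolutionary loop over a population of “networks” whose fitness is measured as an error rate. Everything here (population size, mutation, the error function) is invented purely for illustration.

    import random

    def error(weights):  # lower is better
        return sum((w - 0.5) ** 2 for w in weights)

    def mutate(weights):
        return [w + random.gauss(0, 0.1) for w in weights]

    population = [[random.random() for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=error)  # selection: keep the lowest-error half
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print(min(error(p) for p in population))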
I think your post is a bit rambling and incoherent but I very much support your style of making long comments in the fashion of posts with BOLD section headings etc.
Evil Stupid Thing Alert!
“The Duty to Lie to Stupid Voters”—yes, really
I decided to post it here because it’s just so incredibly stupid and naively evil, but also because it’s using LW-ish language in a piece on how to—in essence—thoroughly corrupt the libertarian cause. Thought y’all would enjoy it.
Standard rejoinders. Furthermore: even if Brennan is ignorant of the classical liberal value of republicanism, why can’t he use his own libertarian philosophy to unfuck himself? How is lying like this ethical under it? Why does he discuss the benefits of such crude, object-level deception openly, on a moderately well-read blog, with potential for blowback? By VALIS, this is a perfect example of how much some apparently intelligent people could, indeed, benefit from reading LW!
I am downvoting this because:
a) I don’t want to see people pushing politics on LW in any form.
b) It is entirely nonobvious to me that this is either evil or stupid.
Consider two concepts: “credibility” and “multiple rounds”. That’s what makes it stupid.
Consider another idea: “I don’t care about multiple rounds because after a single win I can do enough”. That’s what makes it evil.
Well I am apparently too stupid to understand why the quoted article is stupid or evil, not to mention incredibly stupid or naively evil.
In any consequentialist theory, combined with some knowledge of how the actual world we live in functions, I don’t see how you can escape the conclusion that a politician running for office has a right to lie to voters. An essential conclusion from observing reality is that politicians lie to voters. Upon examination, it is hard NOT to conclude that politicians who don’t lie enough don’t get elected. If we are consequentialists, then either 1) elected politicians do create consequences, and so a politician who will create good consequences had best lie “the right amount” to get elected, or 2) elected politicians do not create consequences, in which case it is consequentially neutral whether a politician lies, and therefore morally neutral.
If you prefer a non-consequentialist or even anti-consequentialist moral system, then bully for you: it is wrong (within your system) for politicians to lie to voters, but that conclusion is inconsequential, except perhaps for a very small number of people, presumably the politician whose soul is saved or whose virtue is kept intact by his pyrrhic act of telling the truth.
A lot of the superficial evilness and stupidity is softened by the follow-up post, where in reply to the objection that politicians uniformly following this principle would result in a much worse situation, he says:
So maybe he just meant that in some situations the “objectively right” action is to lie to voters, without actually recommending that politicians go out and do it (just as most utilitarians would not recommend that people try to always act like strict naive utilitarians).
I’m confused. So would he recommend that the politicians do the “objectively wrong” thing?
All of that looks a lot like incoherence, unwillingness to accept the implications of stated beliefs, and general handwaving.
So the problem is that the politicians can’t lie well enough?? X-D
No, that’s not what he means. Quoting from the post (which I apologize for not linking to before):
So, to recap. Brennan says “lying to voters is the right thing when good results from it”. His critics say, very reasonably, that since politicians and humans in general are biased in their own favor in manifold ways, every politician would surely think that good would result from their lies, so if everyone followed his advice everyone would lie all the time, with disastrous consequences. Brennan replies that this doesn’t mean that “lying is right when good results from it” is false; it just means that due to human fallibilities a better general outcome would be achieved if people didn’t try to do the right thing in this situation but followed the simpler rule of never lying.
My interpretation is therefore that, in the post Multiheaded linked to, Brennan was not, despite appearances, making a case that actually existing politicians should actually go ahead and lie, but rather making an ivory-tower philosophical point that sometimes lying would be “the right thing to do” for them in the abstract sense.
So, is there any insight here other than restating the standard consequentialist position that “doing X is right when it leads to good outcomes”?
Especially given how Brennan backpedals into deontological ethics once we start talking about the real world?
For a wrong outcome B, you can usually imagine an even worse outcome C.
In a situation with perfect information, it is better to choose the right outcome A instead of the wrong outcome B. But in a situation with imperfect information, choosing B for certain may be preferable to a gamble that yields A with some small probability p and C with probability 1-p.
The lesson about ethical injunctions, as I see it, is that in some political contexts the value of p is extremely low, and yet, because of obvious evolutionary pressures, we have a bias to believe that p is very large. Therefore we should recognize the situations that feel like they have a large p (because that’s how it feels from the inside), remember the bias, and apply a sufficiently strong correction, which usually means to stop.
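Made-up numbers, just to show the shape of the argument: the gamble looks better than the safe “wrong” option under the felt probability, and much worse under the real one.

    u_A, u_B, u_C = 100, -10, -1000   # utilities of the three outcomes (invented)
    p_felt = 0.95                      # how likely success feels from inside
    p_real = 0.02                      # how likely it actually is

    ev_gamble_felt = p_felt * u_A + (1 - p_felt) * u_C   #  45: gamble looks great
    ev_gamble_real = p_real * u_A + (1 - p_real) * u_C   # -978: gamble is terrible
    print(ev_gamble_felt, ev_gamble_real, u_B)           # B (-10) beats the real gamble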
Actually… yes.
More precisely, I would expect politicians to be good at lying in pursuit of more personal power, because that’s what evolution has optimized humans for, and politicians are the experts among humans here.
But I expect all humans, including politicians, to fail at maximizing utility when defined otherwise.
Consequentialism has no problems with lying at all.
Many internet libertarians aren’t very consequentialist, though, and even a basic application of rule-utilitarianism would expose many, many problems with that post. But really: while the “Non-Aggression Principle” appears just laughably unworkable to me… given that many libertarians do subscribe to it, is lying to voters not an act of aggression?
Depends on your point of view, of course, but I don’t think the bleeding-heart libertarians (aka liberaltarians) are actually libertarians. In any case, it’s likely that the guy didn’t spend too much time thinking it through. But so what? You know the appropriate xkcd cartoon, I assume...
Given that the guy is a professional philosopher, I doubt ignorance is a good explanation. It’s probably a case of someone wanting to be too contrarian for his own good, or at least for the good of his cause. Given that he wrote a book arguing that most people shouldn’t vote, he might simply be trolling for academic controversy to get recognition and citations.
A question for effective altruists in the US.
How much did you donate last year? Don’t answer that. Just compare it to the amount of taxes you paid, and realize that 19% of those taxes went to defense spending. (Veterans’ benefits, interest on debt incurred by defense spending, and other indirect costs are not included in that number.) When you congratulate yourself on your altruism, don’t forget you’re also funding the NSA, the drone attacks in various Middle East countries, and thousands of tanks sitting idle on a base somewhere.
Are your donations outweighing this?
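The comparison is simple arithmetic, using the 19% figure from the comment above; the income-tax and donation numbers here are invented for illustration:

    federal_tax_paid = 20_000
    defense_share = 0.19 * federal_tax_paid   # ~$3,800 routed to defense
    donations = 3_000
    print(donations - defense_share)          # negative: donations don't "outweigh" it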
For utility maximizers there is no “outweigh”. There is only “better” and “worse”.
In this case “outweigh” is relevant. If your altruistic activities don’t outweigh the impact of your taxes, your EA move is to live off-the-grid (assuming we’ve simplified down to those two factors, and neglecting tax avoidance methods).
You can easily control your earnings on the downside, is the point.
Fair enough. So what are better or worse options for spending of one’s tax dollars? Can you do anything, except try to pay less taxes (and spend the gain altruistically) or pay them in a country that will use them more effectively to improve the world?
You don’t get any options for how your tax dollars are spent, so there are no better or worse ones.
Depends on your citizenship and the specifics of the situation. The US, for example, taxes its citizens on their worldwide income.
Taxes paid to the country you live in can be claimed as a credit against your US tax (the foreign tax credit), so in the common case that the host country has a higher tax rate than the US, a US citizen living abroad pays no tax to the US. And if you already have permanent residency somewhere else, changing your citizenship is not super difficult.
Why the heck do Effective Altruists need to be singled out for this? You seem to be punishing people for wanting to be effective altruists, which is super weird.
Not all, but many effective altruists, and certainly the dominant discourse in recent times, care about earning to give, i.e. making a ton of money so that you can give more to charity. Making a ton of money in America has the side effect of giving a ton of money to the US government. If that is evil on net, it might be more effectively altruistic for someone living in the US not to make money to give to charity OR the government.
Effective altruists are the ones who care particularly much about what their money does.
You get effective altruists wrong. They care about the results of their actions. It’s a philosophy about choosing effective actions over ineffective ones, not about feeling guilty that some of your actions have no big positive effects.
That means you focus your attention on areas where you can achieve a lot instead of on areas where you can’t do much. I find the argument that the US would spend less on the military if US citizens paid less in taxes questionable. You can’t simply defund a highly powerful organisation like the NSA; less government money is more likely to become a problem for welfare payments.
In discussions about where an effective altruist should live, the effect of their tax money might be a worthwhile point. Paying taxes in Switzerland instead of the US might be a factor when deciding whether to live in San Francisco or Zurich.
Maybe he’s been antagonized by some smug effective altruist harping on about how much more ethical he is. I suspect things like that happen.
I expect some people perceive effective altruists that way no matter what their attitudes; they feel the harping on about how much more ethical they are is implied.
It’s easy to be cynical about the military, but consider the simple fact that we live in one of the most peaceful ages ever. The Middle East conflicts of the last decade-plus involving the US have resulted in far fewer deaths than, say, the Vietnam War. You might say there should have been none of these conflicts to begin with, but things certainly could have been worse as well!
I was medically discharged from the military. The veterans’ benefits that are paid for by taxes covered my schooling (since I couldn’t stay in the military, I had to get a different education to make a living), and they also provide me with a disability check every month. So those taxes probably count as some sort of altruism.