I ate something I shouldn’t have the other day and ended up having this surreal dream where Mencius Moldbug had gotten tired of the state of the software industry and the Internet and had made his personal solution to it all into an actual piece of working software that was some sort of bizarre synthesis of a peer-to-peer identity and distributed computing platform, an operating system and a programming language. Unfortunately, you needed to figure out an insane system of phoneticized punctuation that got rewritten into a combinator grammar VM code if you wanted to program anything in it. I think there even was a public Github with reams of code in it, but when I tried to read it I realized that my computer was actually a cardboard box with an endless swarm of spiders crawling out of it while all my teeth were falling out, and then I woke up without ever finding out exactly how the thing was supposed to work.
One of Urbit’s problems is that we don’t exactly have a word for what Urbit is. If there is such a word, it somehow means both “operating system” and “network protocol,” while somehow also implying “functional” and “deterministic.”
Not only is there no such word, it’s not even clear there should be one. And if there were, could we even hear it? As Wittgenstein said: if a lion could talk, we would not understand him. But heck, let’s try anyway.
For an example of fully rampant Typical Mind Fallacy in Urbit, see the security document. About two-thirds of the way down, you can actually see Yarvin transform into Moldbug and start pontificating on how humans communicating on a network should work, never mind the observable evidence of how they have actually behaved whenever each of the conditions he describes has obtained.
The very first thing people will do with the Urbit system is try to mess with its assumptions, in ways that its creators literally could not foresee (due to Typical Mind Fallacy), though they might have been reasonably expected to (given the real world as data).
So, I debated a lot of religious people in my youth, and a common sort of “inferential drift,” if you can call it that, is that they believe that if you don’t think something is true or don’t believe it exists, this must mean that you don’t want it to be true or to exist. It’s like a sort of meta-motivated reasoning: they falsely attribute your conclusions to motivated reasoning. The most obvious examples are in Creationist writing that critiques evolution, where the authors pretty explicitly attribute acceptance of the theory of evolution to a desire for god not to exist.
I’ve started to notice it in many other highly charged, mind-killing topics as well. Is this all in my head? Has anyone else experienced this?
I used to get a lot of people telling me I was an atheist because I either didn’t want there to be a god or because I wanted the universe to be logical (granted, I do want that, but they meant it in the pejorative Vulcan-y sense). I eventually shut them up with “who doesn’t want to believe they’re going to heaven?” but it took me a while to come up with that one.
I don’t understand it either, but this is a thing people say a lot.
That does seem close to Bulverism. But what I described seems to happen at the level of subconscious bias, where people are somewhat talking past each other due to a sort of hidden assumption of Bulverism.
No, that is a mere assertion (which may or may not be true). If they claimed that he is wrong because he is engaging in motivated reasoning, then that would be ad hominem.
Wait, what? This might be a little off topic, but if you assert that they lack evidence and are drawing conclusions based on motivated reasoning, that seems highly relevant and not ad hominem. I guess it could be unnecessary, as you might try to focus strictly on their evidence, but it seems reasonable to look at the evidence they present and say “this is consistent with motivated reasoning; for example, you describe many things that would happen by chance but nothing comparable that cuts the other way, so there seems to be some confirmation bias,” etc.
Robin Hanson defines “viewquakes” as “insights which dramatically change my world view.”
Are there any particular books that have caused you personally to experience a viewquake?
Or to put the question differently, if you wanted someone to experience a viewquake, can you name any books that you believe have a high probability of provoking a viewquake?
I’m not sure that giving someone a book in the hope of provoking a viewquake is possible, or at least likely to succeed. Most people would detect being influenced.
Compare: giving people the Bible to convert them doesn’t work either, even though it too could provoke a viewquake; after all, the Bible is also quite different from other common literature.
To actually provoke a viewquake, a book must supply a missing piece, either connecting existing pieces or building on them, and thus cause an aha moment. And the trouble is that this depends critically on your prior knowledge, so not every book will work on everyone.
I know of a few former-theists whose atheist tipping point was reading Susan Blackmore’s The Meme Machine. I recall being fairly heavily influenced by this myself when I first read it (about twelve years ago, when it was one of only a small handful of popular books on memetics), but suspect I might find it a bit tiresome and erroneous if I were to re-read it.
Primarily how much biology and ecosystems can have large-scale impacts on society and culture in ways that stick around even after the underlying issue is gone. One of the examples there is how the prevalence of diseases (yellow fever and malaria especially) had long-term impacts on the cultural differences between the North and the South in North America.
Reading Wittgenstein’s Philosophical Investigations prompted the biggest viewquake I’ve ever experienced, substantially changing my conception of what a properly naturalistic worldview looks like, especially the role of normativity therein. I’m not sure I’d assign it a high probability of provoking a viewquake in others, though, given his aphoristic and often frustratingly opaque style. I think it worked for me because I already had vague misgivings about my prior worldview that I was having trouble nailing down, and the book helped bring these apprehensions into focus.
A more concrete scientific viewquake: reading Jaynes, especially his work on statistical mechanics, completely altered my approach to my Ph.D. dissertation (and also, incidentally, led me to LW).
The biggest world-shattering book for me was the classic Engines of Creation by K. Eric Drexler. I was just 21, and the book had a large impact on me. Nowadays, though, the ideas in the book are pretty mainstream, so I don’t think it would have the same effect on a millennial.
While it’s overoptimistic and generally a bit all over the place, Kurzweil’s The Singularity is Near might still be the best bang-for-the-buck single introduction to the “humans are made of atoms” mindset that you can throw at someone who is reasonably popular-science literate but hasn’t had any exposure to serious transhumanism.
It’s kinda like how The God Delusion might not be the deepest book on the social psychology of religion, but it’s still a really good book to give to the smart teenager who was raised by fundamentalists and wants to be deprogrammed.
After reading Engines of Creation, The Singularity is Near didn’t have nearly as much effect on me. I just thought, “Well, duh” while reading it. I can imagine how it would affect someone with little exposure to transhumanist ideas though. I agree with you that it’s a good choice.
The sympathetic nervous system activation that helps you tense up to take a punch or put on a burst of speed to outrun an unfriendly dog isn’t quite so helpful when you’re bracing to defend yourself against an intangible threat, like, say, admitting you need to change your mind.
One of CFAR’s instructors will walk participants through the biology of the fight/flight/freeze response and then run interactive practice on how to deliberately notice and adjust your response under pressure. The class is capped at 12 due to its interactive nature.
An iteration of this class was one of the high points of the May 2013 CFAR retreat for me. It helped me enormously in getting over various aversions, becoming less reactive and more agenty about my actions, and generally enjoying life more. For instance, I gained the ability to enjoy, or substantially increased my enjoyment of, several activities I didn’t particularly like, including:
improv games
additional types of social dance
conversations with strangers
public speaking
It also helped substantially with CFAR’s comfort zone expansion exercises. Highly recommended.
A bit. Most of the techniques were developed by one of the CFAR instructors, and I can’t reproduce his instruction, nor do I want to steal his thunder. The closest thing you can find out more about is mindfulness-based stress reduction. (But the real value of the class is being able to practice with Val and ask him questions, which unfortunately I can’t do justice to in a LW comment.)
Anyone here familiar enough with General Semantics and willing to write an article about it? Preferably not just a few slogans, but also some examples of how to use it in real life.
I have heard it mentioned a few times, and it sounds to me a bit LessWrongish, but I admit I am too lazy now to read a whole book about it (and I heard that Korzybski is difficult to read, which also does not encourage me).
I just started rereading Science and Sanity and maybe the project will develop into a lesswrong post.
When it comes to Korzybski being difficult to read, I think it’s because the ideas he advocates are complex.
As he writes himself:
For those other readers who insist on translating the new terms with new structural implications into their old habitual language, and choose to retain the old terms with old structural implications and old semantic relations, this work will not appear simple.
It’s a bit like learning a foreign language in a foreign language. In some sense that seems necessary.
A lot of dumbed-down elements of General Semantics made it into popular culture, but the core seems to be intrinsically hard.
Non-violent communication is the intellectual heir of E-prime which was the heir of semantic concerns in General Semantics. Recent books on the subject are well reviewed. It is a useful tool in communicating across large value rifts.
Non-violent communication is the intellectual heir of E-prime which was the heir of semantic concerns in General Semantics.
I don’t think it makes sense to speak of a single framework as the heir of General Semantics. General Semantics influenced quite a lot.
General Semantics itself is quite complex. Nonviolent communication is pretty useless when you want to speak about scientific knowledge.
General Semantics’ notions of thinking about relations and structure, on the other hand, are quite useful.
Does Rosenberg cite Bourland (or Korzybski) anywhere? I thought these were independent inventions that happened upon some tangential ideas about non-judgmental thinking.
I had thought that there was a link in someone Rosenberg worked with developing it but now I can’t find anything. The elimination of the “to-be” verb forms does not seem explicit in NVC methodology. I think you are correct and they are independent.
I noticed in the survey results from last year that a large number of people assigned a non-trivial probability to the simulation hypothesis, yet identified as atheist.
I know this is just about definitions and labels, so it isn’t an incredibly important issue, but I was wondering why people choose to identify that way. It seems to me that if you assign a >20% chance to us living in a computer simulation, you should also identify as agnostic.
If not, it seems like you are using a definition of god which includes all the major religions, yet excludes our possible simulators. What is the distinction that you think makes the simulation not count as theism?
Probably these people use a definition of theism that says that a god has to be an ontologically basic entity in an absolute sense, not just relative to our universe. If our simulators are complex entities that have evolved naturally in their physical universe (or are simulated in turn by a higher level) then they don’t count as gods by this definition.
Also, the general definition of God includes omniscience and omnipotence, but a simulator-god may not be either, e.g. due to limited computing resources they couldn’t simulate an arbitrarily large number of unique humans.
Hmm, that is a distinction that is pretty clear cut. However, most people who believe in god believe that all people have ontologically basic souls. Therefore, since they think being ontologically basic is nothing particularly special, I do not think that they would consider it a particularly important part of the definition of a god.
They might think that being ontologically basic is a necessary condition for being a god, but not a sufficient condition. Then simulators are not gods, but souls are not gods either, because they do not satisfy other possible necessary conditions: e.g., having created the universe, or being omnipotent, omniscient, and omnibenevolent (or at least being much more powerful, knowledgeable, and good than a human), etc.
Or perhaps, they believe being ontologically basic is necessary and sufficient for being a god, but interpret this not just as not being composed of material parts, but in the stronger sense of not being dependent on anything else for existing (which souls do not satisfy because they are created by God, and simulators don’t because they have evolved or have been simulated in turn). (ETA: this last possibility probably applies to some theists but not the atheists you are talking about.)
What is your response to the argument I gave below?
I feel like there are two independent questions:
1) Does there exist a creator with a mind?
2) Are minds ontologically basic?
I think that accurately factors beliefs into 2 different questions, since there are (I think) very few people who believe that god has an ontologically basic mind yet we do not.
I do not think it is justified to combine these questions together, since there are people who say yes to 1 but not 2, and many many people who say yes to 2 but not 1.
They are indeed logically distinct questions. However, up to a few years ago all or almost all people who said yes to 1 also said yes to 2. The word “theism” was coined with these people in mind and is strongly associated with yes to 2 and with the rest of the religious memeset.
Thus, it is not surprising that many people who only accept (or find likely) 1 but not 2 would reject this label for fear of false associations. Since people accepting both 1 and 2 (religionists) tend to differ philosophically very much in other things from those accepting 1 but not 2 (simulationists), it seems better to use a new technical term (e.g. “creatorism”) for plain yes to 1, instead of using a historical term like “theism” that obscures this difference.
Disagree with theists that people have ontologically basic souls; further disagree with the claim that the ‘ontologically basic’ / ‘supernatural’ aspect of a god is unimportant to its definition.
(What theists think is not relevant to a question about the beliefs of people who do not self-identify as theists.)
I think that accurately factors beliefs into 2 different questions, since there are (I think) very few people who believe that god has an ontologically basic mind yet we do not.
I do not think it is justified to combine these questions together, since there are people who say yes to 1 but not 2, and many many people who say yes to 2 but not 1.
Calling myself an agnostic would put me in an empirical cluster with people who think gods worthy of worship might exist, and possibly have some vague hope for an afterlife (though I know not all agnostics believe these things). I do not think of potential matrix overlords the way people think of the things they connect to the words “God” and “gods”. I think of them as “those bastards that (might) have us all trapped in a zoo.” And if they existed, I wouldn’t expect them to have (real) magic powers, nor to be the creators of a real universe, just a zoo that looks like one. I do not think that animals trapped in a zoo with enclosure walls painted with trees and such to look like a real forest should think of zookeepers as gods, even if they have effectively created the animals’ world, and may have created the animals themselves (through artificial breeding, or even cloning), and I think that is basically analogous to what our position would be if the simulation hypothesis was correct.
Hmm. I was more thinking about a physics simulation by something that is nothing like a human than an ancestor simulation like in Bostrom’s original argument. I think that most people who assign a non-trivial chance to ancestor simulation would assign a non-trivial chance to physics simulation.
I don’t think either variety is very similar to a zoo, but if we were in a physics simulation, I do not think our relationship with our simulators would be anything like an animal-zookeeper relationship.
I also think that you should taboo the word “universe,” since it implies that there is nothing containing it. Whatever it is that we are in, our simulators created all of it, and probably could interfere if they wanted to. They are unlikely to want to now, given how long they have gone without interfering.
I also think that you should taboo the word “universe,” since it implies that there is nothing containing it.
It may have once meant that, like the word “atom” once meant “indivisible.” But that’s not how people seem to use it anymore. Once a critical mass of people start misusing a word, I would rather become part of the problem than fight the inevitable.
Theism usually involves God as the explanation of why the world exists, and why we are conscious. In usual simulation scenarios, a world happens through physics and natural selection etc. And then a copy of part of that world is made. Yes, the copying process “made” the copy, but most explanations of how the copied world is the way it is (from the point of view of those in it) still has to do with physics, natural selection, etc. and not the copying process.
In other words, “who designed our world?” is more relevant than “who created our world?”.
There’s an annoying assumption that no parent would want their child to have a greatly extended lifespan, but I think it’s a reasonable overview otherwise, or at least I agree that there’s not going to be a major increase in longevity without a breakthrough. Lifestyle changes won’t do it.
I’ve been working on a series of videos about prison reform. During my reading, I came across an interesting passage from wikipedia:
In colonial America, punishments were severe. The Massachusetts assembly in 1736 ordered that a thief, on first conviction, be fined or whipped. The second time he was to pay treble damages, sit for an hour upon the gallows platform with a noose around his neck and then be carted to the whipping post for thirty stripes. For the third offense he was to be hanged.[4] But the implementation was haphazard as there was no effective police system and judges wouldn’t convict if they believed the punishment was excessive. The local jails mainly held men awaiting trial or punishment and those in debt.
What struck me was how preferable these punishments (except the hanging, but that was very rare) seem compared to the current system of massive scale long-term imprisonment. I would much rather pay damages and be whipped than serve months or years in jail. Oddly, most people seem to agree with Wikipedia that whipping is more “severe” than imprisonment of several months or years (and of course, many prisoners will be beaten or raped in prison). Yet I think if you gave people being convicted for theft a choice, most of them would choose the physical punishment instead of jail time.
I’m reminded of the perennial objections to Torture vs Dust Specks to the effect that torture is a sacred anti-value which simply cannot be evaluated on the same axis as non-torture punishments (such as jail time, presumably), regardless of the severities involved.
The key quote, “Incarceration destroys families and jobs, exactly what people need to have in order to stay away from crime.” If we had wanted to create a permanent underclass, replacing corporal punishment with prison would have been an obvious step in the process.
Obviously that’s not why people find imprisonment so preferable to torture, though; TheOtherDave’s “sacred anti-value” explanation is correct there. It would be interesting to know exactly how a once-common punishment became seen as unambiguously evil, though, in the face of “tough on crime” posturing, lengthening prison sentences, etc.
Maybe it’s a part of human hypocrisy: we want to punish people, but in a way that doesn’t make our mirror neurons feel their pain. We want people to be punished, without thinking about ourselves as the kind of people who want to harm others. We want to make it as impersonal as possible.
So we invent punishments that don’t feel like we are doing something horrible, and yet are bad enough that we would want to avoid them. Being locked behind bars for 20 years is horrible, but there is no specific moment that would make an external observer scream.
It is, incidentally, not obvious to everyone that the desire to create a stable underclass didn’t play a significant role in our changing attitudes towards prisons… in fact, it’s not even obvious to me, though I agree that it didn’t play a significant role in our changing attitudes towards torturing criminals.
Because corporal punishment is an ancient display of power; the master holding the whip and the servant being punished for misbehavior. It’s obviously effective, and undoubtedly more humane than incarceration, but it’s also anathema to the morality of the “free society” where everyone is supposed to be equal and thus no-one can hold the whip.
(Heck, even disciplining a child is considered grounds to put the kid in foster care; if you want corporal punishment v incarceration, that’s a hell of a dichotomy. And for every genuinely abused kid CPS saves, how many healthy families get broken up again?)
The idea is childish and unrealistic, but nonetheless popular because it plays on the fear and resentment people feel towards those above them. And in a democracy, popular sentiment is difficult to defeat.
Don’t look at it from the perp point of view, look at it from an average-middle-class-dude or a suburban-soccer-mom point of view.
If there’s a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don’t know and in the meantime he’s still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
That is the major advantage of prisons over fines and/or physical punishments.
On the other hand, making people spend long periods of time in a low-trust environment surrounded by criminals seems to be a rather effective way of elevating recidivism when they do get out, so the advantage as implemented in our system is on rather tenuous footing.
And of course, the prison system comes with the major disadvantage that imprisoning people is a highly expensive punishment to implement.
I am not arguing that prisons are the proper way to deal with crime. All I’m saying is that arguments in favor of imprisonment as the preferred method of punishing criminals exist.
That’s only an advantage if the expected cost to society of keeping him in prison is less than the expected cost (broadly construed) to society of him keeping on robbing.
If there’s a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don’t know and in the meantime he’s still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
This is totally obvious, I’m not sure why you felt you needed to point that out.
The point of my comment is that it is interesting that prison isn’t viewed as cruel, even though it’s obviously more harsh than alternatives. Obviously there are other reasons people prefer prison as a punishment for others.
Isn’t freedom important for human dignity? It seems that any kind of punishment infringes on human dignity to some extent. Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards—something which is widely known.
According to the standard moral doctrine it’s not as central as bodily integrity. The state is allowed to take away freedom of movement but not bodily integrity or force people to work as slaves.
Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards—something which is widely known.
That’s a feature of the particular way a prison is run.
Video playback speed was mentioned on the useful-habits repository thread a few weeks ago, and I asked how I could do the same. YouTube’s playback-speed option is not available on all videos. Macs apparently have a plug-in you can download, but I don’t own a Mac, so that’s not helpful. You could download the video and then play it back, but that wastes time. I just learned a solution that works across all OSes without the need to download the video first.
Less Wrong and its comments are a treasure trove of ethical problems, both theoretical and practical, and possible solutions to them (the largest one to my knowledge; do let me know if you are aware of a larger forum for this topic). However, this knowledge is not easy to navigate, especially to an outsider who might have a practical interest in it. I think this is a problem worth solving and one possible solution I came up with is to create a StackExchange-style service for (utilitarian, rationalist) ethics. Would you consider such a platform for ethical questions to be useful? Would you participate?
Possible benefits:
Making existing problems and their answers easier to navigate through the use of tagging and a stricter question-answer format.
“Deconcentration of attention is opposite to concentration and can be interpreted as a process of dismantling of the figures in the field of perception and transformation of the perceptual field into a uniform (in the sense that no individual elements could be construed as a perceptual figure) background.”
Seems slightly pseudosciencey, but perhaps valuable.
This is a game I like to play with myself, actually. I sit and observe my surroundings, consciously removing labels from the objects in my visual field until it’s clear that everything is one big continuity of atoms. It’s fun, and it brings back that childlike feeling of seeing things for the first time. I have to be in the right frame of mind to do it, and it’s much harder in a man-made environment (where everything is an object) than in nature.
But I’ve never had a word for it before, so thanks.
Actually, I’d be interested to hear what other mental games LWers play to amuse themselves.
Some more games I play:
‘Fly arounds,’ where I visualize my perspective moving around the room, zooming out of the walls of the building I’m in, and exploring/getting new views on places I know. It’s fun to ‘tag’ an imaginary person and see what their perspective moving through an average day would be.
‘People watching,’ where I pick a person walking by and try to read their actions and relationships with the people they’re with. They then get a full backstory and life.
‘Contingency.’ What would happen if a car drove through the door right now/that guy pulled a gun/I suddenly realized that I am actually Jason Bourne? This xkcd puts it best.
I have a half-written post about the cultural divisions in the environmentalist movement that I intend to put on a personal blog in the nearish future. (Tl;dr: there are “Green” groups who advocate different things in a very emotional/moral way vs. “scientific” environmentalists.)
I’ve been thinking about comparisons between the structure of that movement and how future movements might tackle other potential existential risks, specifically UFAI. Would people be interested in a post here specifically discussing that?
how future movements might tackle other potential existential risks, specifically UFAI
Is there anything you’ve learnt that’s specific to groups trying to tackle x-risk? If not, you could just make a post describing what you’ve learnt about groups that challenge big problems. Generality at no extra cost.
Political and social movements as a whole are so massive and varied that I don’t think I could really give much non-trivial analysis. I’m not sure there’s really a separate category of ‘big problem’ that can be separated out, all movements think their problem is big, and all big problems are composed of smaller problems.
I make the comparison between UFAI and environmentalism because it’s probably the only major risk that is presently really in public consciousness,* so it provides a model of how people will act in response. E.g., the solutions that technical experts favour may not be the ones that the public supports, even if they agree on the problem.
*A few decades ago nuclear weapons might have also been analogous, but, whether correctly or not, the public perception of their risk has diminished.
From what I can tell, it’s actually a teeny-tiny number of people, but they get disproportional media coverage for reasons that should be obvious considering the interests of those doing the covering.
FWIW, while I’ve not met many misanthropic greens in real life, about half of the greens I’ve met on the Internet range from mildly to extremely misanthropic.
I wouldn’t say misanthropic, maybe more a matter of scope insensitivity and an overromanticised view of the ‘natural’ state of the world. But I think they genuinely believe it would make humans better off, whereas truly misanthropic greens wouldn’t care.
Just thinking… could it be worth doing a website providing interesting parts of settled science for laypeople?
If we take the solid, replicated findings, and remove the ones that laypeople don’t care about (because they have no use for them in everyday life)… how much would be left? Which parts of human knowledge would be covered most?
I imagine a website that would first provide a simple explanation, and then a detailed scientific explanation with references.
Why? Simply to give people the idea that this is science that is useful and trustworthy: not things too abstract to understand or use, and not new hypotheses that will be disproved tomorrow. Science as a friendly and trustworthy authority. To get some respect for science.
Wikipedia seems close enough to what you’re describing… and improving Wikipedia (plenty of science pages are flagged as “this is hard to understand for non-specialists”) seems like the easiest way to move it closer.
Wikipedia contains millions of topics, so the subset of “settled science” is lost among them. Creating a “Settled Science” portal could be an approximation.
As an example of where my idea differs from the Wikipedia approach: the Wikipedia Science portal displays a link to an article about Albert Einstein. Yes, Albert Einstein was an important scientist, but his personal biography is not science. So one difference would be that the “settled science encyclopedia” would not include Einstein or any other scientist (except among the references): only the knowledge itself, which could also be used on a different planet with a different history and different names and biographies of scientists.
Also, on Wikipedia you have a whole page about a topic. Some parts of the page may be settled science and other parts are not, but both kinds are on the same page, in the same encyclopedia. It would be cognitively easier for a reader to know: “if it is on SettledScienceEncyclopedia.com, it is settled science.”
EDIT: I agree that improving scientific articles on Wikipedia, making them not just more correct but also more accessible to the general public, is a worthy goal.
Take a subject like evolution. The fact that evolution happens has been settled science for a long time.
On the other hand, if you take a schoolbook on evolution written 30 years ago, there’s a good chance it has examples of how one species is related to another that got overturned when we got genome data.
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they’re advisers rather than leaders.
That attitude is mostly gone now, but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully participate in the process, so keeping them out of the way and deferential is going to be more valuable than trying to involve them. There seems to be a J curve between 100% scientific literacy and old-school Science-ism, and it seems to me at least that climbing back up to an elitist position is the option most likely to actually work in our lifetimes.
If anything, the more easily laypeople can lay their hands on scientific materials, the worse the situation is; the Dunning-Kruger effect and a lack of actual scientific training / mental ability mean that laypeople are almost certain to misinterpret what they read in ways which disagree with the actual scientific consensus. Just look at the huge backlash against biology and psychometrics these days; most of the people I’ve argued with in person or online have no actual qualifications, but feel entitled to opinions on the issues because they stumbled through an article on PubMed and know the word “methodology.”
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they’re advisers rather than leaders.
That attitude is mostly gone now,
Is this true? It pattern matches to a generic things-were-better-in-the-old-days complaint and I’m not sure how one would get a systematic idea of how much people trusted science & scientists 100-200 years ago.
(Looking at the US, for instance, I only find results from surveys going back to the late 1950s. Americans’ confidence in science seems to have fallen quite a lot between 1958 and 1971-2, probably mostly in the late 1960s, then rebounded somewhat before remaining stable for the last 35-40 years. I note that the loss of trust in science that happened in the 1960s wasn’t science-specific, but part of a general loss of confidence experienced by almost all institutions people were polled about.)
but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully participate in the process, so keeping them out of the way and deferential is going to be more valuable than trying to involve them.
The average science PhD is two standard deviations out from the population mean in terms of intelligence, has spent ~8-10 years learning the fundamental background required to understand their field, and is deeply immersed in the culture of science. And these are the ‘newbs’ of the scientific community; the scrappy up-and-comers who still need to prove themselves as having valuable insights or actual skills.
So yes, for all practical purposes the barrier to genuine understanding of scientific theories and techniques is high enough that a layman cannot hope to have more than a cursory understanding of the field.
And if we want laymen to trust in a process they cannot understand, the priest is the archetypal example of mysterious authority.
So yes, for all practical purposes the barrier to genuine understanding of scientific theories and techniques is high enough that a layman cannot hope to have more than a cursory understanding of the field.
First, there is no logical connection between your first paragraph and the second one and I don’t see any reason for that “so, yes”.
Second, that claim is, ahem, bullshit. I’ll agree that someone with low IQ “cannot hope to have more than a cursory understanding”, but for such people this statement is true for much more than science. High-IQ laymen are quite capable of understanding the field and, often enough, pointing out new approaches which have not occurred to any established scientists because, after all, that’s not how these things are done.
And if we want laymen to trust in a process they cannot understand
No, I don’t want laymen to trust in a process they cannot understand.
How high is “high-IQ” and how low is “low IQ” in your book?
Someone with an above-average IQ of 115-120, like your average undergrad, visibly struggles with 101 / 201 level work and is deeply resistant to higher-level concepts. Actually getting through grad school takes about a 130 as previously mentioned, and notable scientists tend to be in the 150+ range. So somewhere from 84-98% of the population is disqualified right off the bat, with only the top 2-0.04% capable of doing really valuable work.
And that’s assuming that IQ is the only thing that counts; in actuality, at least in the hard sciences, there is an enormous amount of technical knowledge and skill that a person has to learn to provide real insight. I cannot think of a single example in the last 50 years which fits your narrative of the smart outsider coming in and overturning a well-established scientific principle, although I would love to hear of one if you know any.
No, I don’t want laymen to trust in a process they cannot understand.
So no more trusting chemotherapy to treat your cancer? The internet to download your music, or your iPod to play it? A fixed wing aircraft to transport you safely across the Atlantic? Must be tough even just driving to work, now that your car is mostly computer-controlled and made of materials with names that sound like alphabet soup.
Almost every aspect of modern life, even for a polymathic genius, is going to be at least partially mysterious; the world of our tools and knowledge is far too complex for the human mind to fully grasp.
Someone with an above-average IQ of 115-120, like your average undergrad, visibly struggles with 101 / 201 level work and is deeply resistant to higher-level concepts. Actually getting through grad school takes about a 130 as previously mentioned, and notable scientists tend to be in the 150+ range.
Not reality. 41% of people in the US were enrolled in college in 2010 (source). If we assume that the US college-age population is representative of the whole in terms of IQ, and use an IQ scale with SD 15, then even if colleges admitted exactly the top 41% of IQs, everyone enrolled would have an IQ of at least 103.4. I calculated the average IQ of the top 41% of the population on Wolfram Alpha (which is easy, because by definition IQ follows a normal distribution) and got 114.2.
If US citizens between 18 and 24 are representative of the entire population in terms of IQ, it is literally impossible for the average IQ of an undergrad student to be 115 or higher.
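For anyone who wants to check the arithmetic, here is a minimal sketch of the same truncated-normal calculation (my own restatement, assuming an IQ scale with mean 100 and SD 15, and that scipy is available):

```python
from scipy.stats import norm

mean, sd, top = 100.0, 15.0, 0.41   # IQ scale; fraction of 18-24s enrolled
z = norm.ppf(1 - top)               # standard-normal cutoff for the top 41%
cutoff = mean + sd * z              # lowest IQ in the top 41%: ~103.4
# Mean of the upper tail, E[IQ | IQ > cutoff], via the inverse Mills ratio:
avg_top = mean + sd * norm.pdf(z) / top
print(round(cutoff, 1), round(avg_top, 1))  # -> 103.4 114.2
```

Even under the most selective assumption, where colleges admit exactly the top 41% of IQs, the average undergrad comes out around 114, which is the point of the comment above.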
Hmm. I’m not 95% confident of the number I gave, but I haven’t been able to turn up anything disconfirming.
I did a bunch of research on the heritability of IQ last year for a term paper, and I repeatedly saw the claim that university students tend to be 1 SD above the local population mean, although that may not apply in a place with more liberal admissions practices like the modern US. More research below; I’ll edit in some extra stuff tomorrow when my brain isn’t fried.
Surprisingly, at least looking at science / engineering / math majors, it looks like people are smarter than I would have guessed: Physics majors had the highest average at 133, Psychology majors pulled up the rear with 114, and most of them clustered around 120-130. For someone who deals with undergrads, that is frankly shockingly high.
Outside of the sciences, even the “dumbest” major, social work, managed a 103, and a lot of the popular majors are in the 105-115 range. Another big surprise here: Philosophy majors are really damn bright, with a 129 average, right up under Math majors. Never would have guessed that one.
Still, it’s obvious that the 115-120 figure I gave was overly optimistic. Once I look at some more data I will amend my initial post so that it better reflects reality.
Naive hypothesis: Given the Flynn effect, and that college students are younger than the general population, could that explain the difference? That Coscott’s conditional “If US citizens between 18 and 24 are representative of the entire population in terms of IQ” is false?
IQ tests are at least supposed to be normed for the age group in question, in order to eliminate such effects, but I don’t know how it’s done for the estimates in question.
How high is “high-IQ” and how low is “low IQ” in your book?
I don’t have specific ranges in mind, but I think I’d call grad-student level sufficiently high-IQ.
smart outsider coming in and overturning a well-established scientific principle
Not necessarily overturning a principle, but rather opening up new directions to expand into. How about Woz, Jobs, Gates, all that crowd? They were outsiders—all the insiders were at IBM or, at best, at places like Xerox PARC.
Almost every aspect of modern life, even for a polymathic genius, is going to be at least partially mysterious
Of course, but you don’t trust a process you don’t understand. You trust either people or the system built around that process. If your doctor gives you a pill to take, you trust your doctor, not the biochemistry which you don’t understand. If you take a plane across the Atlantic, you trust the system that’s been running commercial aviation for decades with the very low accident rate.
How about Woz, Jobs, Gates, all that crowd? They were outsiders
They were outsiders to business companies, not to science. It’s not like Gates never learned math at school and then miraculously proved Fermat’s Last Theorem in his dreams. It’s more like he mostly took someone else’s work, made a few smart business decisions, and became extra rich.
It’s impractical for every single person to understand every single scientific theory. Even the domain of ‘settled science’ is far larger than anyone could hope to cover in their lifetime.
It’s true that scientific authority is no substitute for evidence and experiment, but as Eliezer pointed out in one of the Sequences (I can’t find the link right now), it’s not like scientific authority is useless for updating beliefs. If you have to make a decision and are stuck choosing between the scientific consensus opinion and a random coin toss, the scientific consensus is a far, far better choice, obviously.
‘Trust’, in this context, doesn’t mean 100% infallible trust in scientific authority. If you take the alternative route and demand that everyone be knowledgeable about everything they make choices in, you wind up in situations like the current one we’re having with climate change, where scientists are pretty much screaming at the top of their lungs that something has to be done, but it’s falling on deaf political ears, partly because of the FUD spread by anti-science groups casting doubt on the scientific consensus.
you wind up in situations like the current one we’re having with climate change
Funny that you mention that.
I consider myself a reasonably well educated layman with a few functioning brain cells. I’ve taken an interest in the global warming claims and did a fair amount of digging (which involved reading original papers and other relevant stuff like Climategate materials). I’ll skip through all the bits not relevant to this thread but I’ll point out that the end result is that my respect for “climate science” dropped considerably and I became what you’d probably describe as a “climate sceptic”.
Given the rather sorry state of medical science (see Ioannidis, etc.), another area I have some interest in, I must say that nowadays when people tell me I must blindly trust “science” because I cannot possibly understand the gnostic knowledge of these high priests, well, let’s just say I’m not very receptive to this idea.
Regardless of whether you personally agree with the consensus on climate change, the fact is that most politicians in office are not scientists and do not have the requisite background to even begin reading climate change papers and materials. Yet they must often make decisions on climate change issues. I’d much prefer that they took the consensus scientific opinion rather than making up their own ill-formed beliefs. If the scientific opinion turns out to be wrong, I will pin the full blame on the scientists, not the decision makers.
And, as I’m saying, this generalizes to all sorts of other issues. I feel like I’m repeating myself here, but ultimately a lot of people find themselves in situations where they must make a decision based on limited information and intelligence. In such a scenario, often the best choice is to ‘trust’ scientists. The option to ‘figure it out for yourself’ is not available.
I’d much prefer that they took the consensus scientific opinion
In general I would agree with you. However, as usual, real life is complicated.
The debate about climate has been greatly politicized and commercialized. Many people participating in this debate had and have huge incentives (political, monetary, professional, etc.) to bend the perceptions in their favor. Many scientists behaved… less than admirably. The cause has been picked up (I might even say “hijacked”) by the environmental movement, which desperately needed a new bogeyman, a new fear to keep the money flowing. There has been much confusion, some natural and some deliberately created, over which questions exactly are being asked and answered. Some climate scientists decided they’re experts on economics and public policy, and that their policy recommendations are “science”.
All in all it was and is a huge and ugly mess. Given this reality, “just follow the scientific consensus” might have been a good prior, but after updating on all the evidence it doesn’t look like a good posterior recommendation in this particular case.
Imagine you had something like this back in 1900.
Do you remember how settled it was, just 20 years ago, that the expansion of the Universe is slowing down? The only thing that wasn’t settled was the rate of the slowing: whether it was big enough for the expansion to stop one day and reverse.
Even now, people are debating the Big Bang, which was settled long ago.
I am not saying your idea isn’t good. It is, but controversy is inevitable.
I am sitting on an unpublished and (depending on how much more I want to do) potentially almost complete puzzle game, thus far entirely my own work, and I need to decide what to do with it. I wrote most of it starting almost 4 years ago, mostly stopping a year after that, as a way to teach myself to program. I’ve revisited it a few times since then, performing lots of refactoring and optimization as my coding skills improved, and implementing a couple of new ideas as I thought them up.

Currently the game mechanics are pretty polished; with a few weeks of bug fixes I would say publishable. I’ve made and tested 40 levels. Because they are short, I would like to make 2 or 3 times as many before publishing, which I estimate would take several months at the rate I am currently able to devote free time to it.

Lastly, the artwork, sound effects, and music are sorely lacking. I would need to commission an artist skilled at 3D modeling, rigging, skinning, and animation to make at least 2 human models (1 male, 1 female) and one giant spider model, with about 20 animations each (the human models can share skeletons and animations). I could use something like this for music, and something like this for sound effects. The code is already in place to play sound and music.

I have written a complicated storyline, but I am not confident it is good writing; I have not gotten my million words of bad fiction out of the way. Integrating it into the game would take a lot of coding time (though I have laid some of the groundwork already), and I think it might be better to make it Yet Another Puzzle Game With No Storyline. If I were to include it, I estimate it would take 9 months at my current rate of time spent on the project. I would also want to make a tutorial out of several intro levels (with temporary overlays like “Press these keys to run”).

It’s built on the Unity game engine (currently the free version), meaning I can publish to quite a lot of platforms without much work.
I would like to get the opinion of someone with relevant knowledge on whether it is worth trying to sell this, and how much further work I should put into it first (funging against finishing grad school in computer engineering faster, and ultimately either hardware engineering work for some big corporation plus high-risk, high-expected-dollar investing on the side (if I can learn to do it well), or working in startups directly). I’m mostly optimizing for expected dollars, because after I ensure a comfortable enough existence for myself (I don’t intend to have kids) I want to use the rest for effective altruism.
I can provide an alpha version of the game or partial storyline notes on request.
My friend made an extremely simple Unity game (with nice graphics and music), added AdMob advertising, put an Android version on Google Play as a free game, and gets about 20 dollars a month (the number has been stable over the most recent half-year). That’s the only data point I have.
I suppose your game would be better (but I don’t really know what the players value), so… let’s make a wild guess that it could make 50 dollars a month during the following 5 years. That’s like 5×12×50 = 3000 dollars total. Maybe! If you need 9 months to finish it (beware the planning fallacy!), it is 300 dollars per month of work. I don’t know how much time during the month you would spend coding. Discounting for the planning fallacy and the uncertainty of outcome, let’s make it, say, 100 dollars per month of work.
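Spelled out in code, the back-of-the-envelope estimate looks something like this (every input is a guess, as noted above; the discount factor is my own arbitrary stand-in for planning fallacy plus uncertainty):

```python
# Fermi estimate of the game's payoff per month of remaining work.
monthly_revenue = 50                  # guessed dollars per month
months = 5 * 12                       # guessed 5-year earning window
total = monthly_revenue * months      # 3000 dollars total
work_months = 9                       # estimated time left to finish
per_work_month = total / work_months  # ~333 dollars ("300" above is rounded)
discounted = per_work_month / 3       # haircut for planning fallacy etc.
print(total, round(per_work_month), round(discounted))  # 3000 333 111
```

The exact numbers matter much less than the order of magnitude: a low three-digit dollar figure per month of work.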
Of course, besides money you get some additional benefits such as feeling good and having a nice item in your portfolio (probably irrelevant for most jobs you consider).
If the payoff is that low, it’s not worth working on the storyline (which is what would take the 9 months). I’m already making a decent wage as a TA. It could still be worth publishing roughly as-is. But I’m hoping I can get away with publishing to PC/Mac/Linux and charging a few dollars per player.
You can publish it on Google Play now, as it is… and if you later so decide, edit the storyline, add a level or two, and sell it on PC.
The advantage is that a) you get some money now, and b) when the final version is ready, you will already have a few fans, which will be more likely to buy it. (Another advantage is that if your game has some bugs or other problems, you can use the feedback to polish the game before you start charging players. I suspect a paying customer will be more angry about bugs.)
From what you say, it sounds like it would be quite a while before ad revenue from a free game paid back what I spent on commissioning 3D artists.
An ad banner like AdMob’s would interfere with gameplay quite a lot. The control scheme is designed for a full keyboard (but would work well with a game controller with joysticks); it would take significant work to translate it to a tablet screen (a cell phone screen is definitely too small). Maybe this kind of annoyance would be a feature if I were trying to sell an ad-free full version alongside it, but my game is complicated and I expect it will take some getting into, and I think the ads would just drive most people away and earn it 1-star ratings.
I’m not that worried about bugs that would significantly damage the gameplay experience. I’ve been playing it for a while myself (until Minecraft, it was my favorite game to play while listening to debates), and the remaining few bugs are basically just results of things I’ve added recently, like the smooth camera transitions when you’re playing as a spider and crawl on a wall (which has made the camera wiggle a little under some conditions; I think it’s due to numerical instability in the way I made it rotate to follow the character).

The bugs I would expect to take time to fix are the ones that only show up on platforms other than the one I’ve played on (PC), and I can find those by looking through the ways my game interacts with the operating system (saving user-created files, loading them, browsing for them, changing screen resolution, accessing preferences files); it’s not necessary to play through the game to find them. The outside view says “there will be more bugs than you expect, and it doesn’t take much to ruin user experience.” To which I respond that I have published software before (not software I own, but software I developed during internships), I have some feel for how bad the bugs that pop up can be, and I will take that feel into account when testing thoroughly on different platforms before release. I don’t expect that to take more than a couple of weeks.
Gaining fans is a good and important point. I might be able to do that with a Humble Indie Bundle, which has the advantage of a pretty well-accepted precedent: the giveaway ends when the bundle ends, and you don’t have to create a “deluxe” version of the game to justify it no longer being free.
As for feedback about things besides bugs (level difficulty is a concern), I bet I can find people willing to test a beta version and give feedback for the privilege of playing it early, or (at worst) in return for me playtesting their own games. I’ll ask around at my school’s gamebuilders club, whose meeting I’m planning to attend next week to demo my game and get their opinion on the same question I asked here (“how viable do you think this is commercially?”). I have looked at the games they are making online; they appear to be a lot less complicated and polished than mine, and playing them would not take much more work than they might expect in return for playing some levels of mine. I have played many games and never sent an email to a developer giving feedback, so I wouldn’t expect much feedback if I just published the game, even with a message saying “please send feedback.”
It’s not going to be worth spending nine months making a complicated storyline that players will press A to skip. Save it for an RPG.
What would be worth doing, if you can do it well, is to take elements of a storyline that set a tone and integrate them into the game to provide a unique setting (e.g. Braid, The Binding of Isaac). But don’t do a convoluted plot that pops up between levels.
I think I will take this advice. I have code to let the player read “memories” of other characters scattered throughout the levels, which I can provide a little text for. And I like my backstory and setting more than I like the story that I came up with for the player to play through.
Much to my surprise, Richard Dawkins and Jon Stewart had a fairly reasonable conversation about existential risk on the Sept. 24, 2013 edition of The Daily Show. Here’s how it went down:
STEWART: Here’s my proposal… for the discussion tonight. Do you believe that the end of our civilization will be through religious strife or scientific advancement? What do you think in the long run will be more damaging to our prospects as a human race?
In reply, Dawkins says Martin Rees (of CSER) thinks humanity has a 50% chance of surviving the 21st century, and one cause for such worry is that powerful technologies could get into the hands of religious fanatics. Stewart replies:
STEWART: …[But] isn’t there a strong probability that we are not necessarily in control of the unintended consequences of our scientific advancement?… Don’t you think it’s even more likely that we will create something [for which] the unintended consequence… is worldwide catastrophe?
DAWKINS: That is possible. It’s something we have to worry about… Science is the most powerful way to do whatever you want to do. If you want to do good, it’s the most powerful way to do good. If you want to do evil, it’s the most powerful way to do evil.
STEWART: …You have nuclear energy and you go this way and you can light the world, but you go this [other] way, and you can blow up the world. It seems like we always try [the blow up the world path] first.
DAWKINS: There is a suggestion that one of the reasons that we don’t detect extraterrestrial civilizations is that when a civilization reaches the point where it could broadcast radio waves that we could pick up, there’s only a brief window before it blows itself up… It takes many billions of years for evolution to reach the point where technology takes off, but once technology takes off, it’s then an eye-blink — by the standards of geological time — before...
STEWART: …It’s very easy to look at the dark side of fundamentalism… [but] sometimes I think we have to look at the dark side of achievement… because I believe the final words that man utters on this Earth will be: “It worked!” It’ll be an experiment that isn’t misused, but will be a rolling catastrophe.
DAWKINS: It’s a possibility, and I can’t deny it. I’m more optimistic than that.
STEWART: … [I think] curiosity killed the cat, and the cat never saw it coming… So how do we put the brakes on our ability to achieve, or our curiosity?
DAWKINS: I don’t think you can ever really stop the march of science in the sense of saying “You’re forbidden to exercise your natural curiosity in science.” You can certainly put the brakes on certain applications. You could stop manufacturing certain weapons. You could have… international agreements not to manufacture certain types of weapons...
And then the conversation shifted back to religion. I wish Dawkins had mentioned CSER’s existence.
And then later in the (extended, online-only) interview, Stewart seemed unsure as to whether consciousness persisted after one’s brain rotted, and also unaware that 10^22 is a lot bigger than a billion. :(
Jon’s what I call normal-smart. He spends most of his time watching TV, mainly US news programs, and they’re quite destructive to rational thinking, even if the purpose is comedic fodder and discovering hypocrisy. He’s very tech-averse, letting the guests he has on the show come in with information he might use, trusting (quite good) intuition to fit things into reality. As such, I like to use him as an example of how more normal people feel about tech/geek issues.
Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it’s a very time restricted format, and I’ve only seen him fact check on the fly once (Google, Wikipedia).
The number thing was at least partly a joke, along the lines of “bigger than 10 doesn’t make much sense to me”—scope insensitivity humor. I’ve done similar before.
I’m beginning to think that we shouldn’t be surprised by reasonably intelligent atheists having reasonable thoughts about x-risk. Both of the two reasonably intelligent, non-LWer atheists I talked to in the past few weeks about LW issues agreed with everything I said on them and said that it all seemed sensible and non-surprising. Most LW users started out as reasonably intelligent atheists. Where did the “zomg everyone is so dumb and only LW can think” meme originate from, exactly? Is there any hard data on this topic?
The Relationship Escalator—an overview of assumptions about relationships, and exceptions to the assumptions. The part that surprised me was the bit about the possibility of dialing back a relationship without ending it.
Poll Question: What communities are you active in other than Less Wrong?
Communities that you think are closely related to Less Wrong are welcome, but I am also wondering what other completely unrelated groups you associate with. How do you think such communities help you? Are there any that you would recommend to an arbitrary Less Wronger?
Contra dance. Closely correlated with LessWrong; also correlated with nerdy people in general. I would recommend it to most LessWrongers; it’s good even for people who are not generally good at dancing, or who have problems interacting socially. (Perhaps even especially for those people; I think of it as a ‘gateway dance.’)
Other types of dance, like swing dance. Also some correlation with LessWrong, somewhat recommended but this depends more on your tastes. Generally has a higher barrier to entry than contra dancing.
I’m going to second Contra Dance. It’s really fun and easy to start while having a decent learning curve such that you don’t hit a skill ceiling fast. Plus you meet lots of people and interact with them in a controlled, friendly, cooperative fun fashion.
My local hackerspace, and broadly the US and European hacker communities. This is mainly because information security is my primary focus, but I find myself happier interacting with hackers because in general they tend not only to be highly outcome-oriented (i.e., inherently consequentialist), but also pragmatic about it: as the saying goes, there’s no arguing with a root shell. (Modulo bikeshedding, but this seems to be more of a failure mode of subgroups that don’t strive to avoid that problem.) The hacker community is also where I learned to think of communities in terms of design patterns; it’s one of the few groups I’ve encountered so far that puts effort into that sort of community self-evaluation. Mostly it helps me because it’s a place where I feel welcome, where other people see value in the goals I want to achieve and are working toward compatible goals. I’d encourage any instrumental rationalist with an interest in software engineering, and especially security, to visit a hackerspace or attend a hacker conference.
Until recently I was also involved in the “liberation technology” activism community, but ultimately found it toxic and left. I’m still too close to that situation to evaluate it fairly, but a lot of the toxicity had to do with identity politics and status games getting in the way of accomplishing anything of lasting value. (I’m also dissatisfied with the degree to which activism in general fixates on removing existing structures rather than replacing them with better ones, but again, too close to evaluate fairly.)
The only two communities I am currently active in (other than career/family communities) are Less Wrong and Unitarian Universalism.
In the past I had a D&D group that I participated in very actively. I think the people I played D&D with in high school had a very big and positive effect on my development.
I think I would like to, and am likely to, develop a local community of people to play strategy board games with in the future.
I’m active in Toastmasters and martial arts (mostly the community of my specific school). Overall, Toastmasters seems pretty effective at its stated goals of improving public speaking and leadership skills. It’s also fun (at least for me). Additionally, both force me to actually interact with other people, which is nice and not something that the rest of my life provides.
I’m active in (though not really a member of) the “left-libertarian” community, associated with places like Center for a Stateless Society (though I myself am not an anarchist) and Bleeding Heart Libertarians. I’m also a frequent reader and occasional commenter on EconLog.
Less related, I’m an active poster on GameFAQs and on a message board centered around the Heroes of Might and Magic game series.
I also used to be active on GameFAQs. For about a year in 2004 it was most of my internet activity, specifically the Pikmin boards. That was a long time ago though when I was a high school freshman.
Orthogonal to LW, I’m very active in my university’s Greek community, serving as VP of a fraternity. It’s been excellent social training and I’ve had a very positive experience.
I was wondering if anyone had any opinions/observations they would be willing to share about Unitarian Universalism. My fiancee is an atheist and a Unitarian Universalist, and I have been going to congregation with her for the last 10 months. I enjoy the experience. It is relaxing for me, and a source of interesting discussions. However, I am trying to decide if my morality has a problem with allying myself with this community. I am leaning towards no. I feel like they are doing a lot of good by providing a stepping stone out of traditional religion for many people. I am, however, slightly concerned about what effect this community might have on my future children. I would love to debate this issue with anyone who is willing, and I think that would be very helpful for me.
The UU “Seven Principles and Purposes” seem like a piece of virtue ethics. If you don’t mind this particular brand of it, then why not.
From Wikipedia:
“We come from One origin, we are headed to One destiny, but we cannot know completely what these are, so we are to focus on making this life better for all of us, and we use reason when we can, to find our way. ”
If you discard the ornamental fluff in this “philosophy” and “focus on making this life better for all of us”, then it’s as good a guideline as any.
As I said in responding to another comment, this is the part of UU that I relate to. However, the problem is that while UUs might be slightly above average in rationality, “we use reason when we can” means that beliefs come from thinking for yourself as opposed to reading e.g. the Bible, and the stuff they come up with by thinking for themselves is usually not all that great by my standards. I am worried that I am giving UU too much credit because they happen to use the word “reason,” when in reality they mean something very different from what I mean.
the stuff they come up with by thinking for themselves is usually not all that great by my standards
They are just humans, aren’t they? I am afraid that at this moment it is impossible to assemble a large group of people who would all think on LW-level. Not including obvious bullshit, or at least not making it a core of group beliefs, is already a pretty decent result for a large group of humans.
Perhaps one day CFAR will make a curriculum that can replicate rationality quickly (at least in suitable individuals) and then we can try to expand rationality to the mass level. Until then, having a group without obviously insane people in power is probably the best you can get.
I am worried that I am giving UU too much credit because they happen to use the word “reason,”
You already reflected on this, so just: don’t emotionally expect what is not realistic. They are never going to use reason as you define it. But the good news is that they will not punish you for using reason. Which is the best you can expect from a religious group.
You inspired me to google whether there are UUs in Slovakia. None found, although there are some in the neighboring countries: the Czech Republic and Hungary.
I wonder whether it would be possible to create a local branch here, to draw people who just want to feel something religious but don’t want to belong to a strict organization away from Catholicism (which in my opinion has huge negative impacts on the country). There seem to be enough such people here, but they are not organized, so they usually stay within the churches of their parents.
The problem is, I am not the right person to start something like this, because I don’t feel any religious need; for me the UU would be completely boring and useless. I am not sure if I could pretend interest at least for long enough to collect a group of people, make them interested in the idea, put them into contact with neighbor UUs, and then silently sneak away. ;-)
Also, I suspect that religion is not about ideas, but about organized community. (For example, the only reason you are interested in UU is because your fiancee is. And your fiancee probably has similar reasons, etc.) Starting a new religious community where no support exists would need a few people willing to sacrifice a lot of time and work—in other words, true believers. Later, when the community exists, further recruitment should be easier.
Well, at least this is the first social engineering project I feel I could have higher than 1% chance of doing successfully, if I decided to. (Level 3 of Yudkowsky Ambition Scale in a local scope?)
Unitarian Universalism is different from Unitarianism. UU is basically a spin-off of Unitarianism from when they combined with Universalism in 1961 in North America. As a result, there are very few UU churches outside of NA.
Unitarianism is on average more Christian than UU, and there exist some UU congregations that also have a Christian slant. (The one I was talking about is not one of them.) I have also heard that some UU churches are considerably more tolerant of everything other than Christianity than they are of Christianity (probably because their members were escaping Christianity). The views change from congregation to congregation because they are decided bottom-up by the local congregants.
The UUA has free resources, such as transcribed sermons you could read, for people who wanted to start a congregation.
I think I gain some stuff from it that is not directly from my fiancee. I don’t know if it is enough to continue going on my own. It is a community that roughly follows strategy 1 of the belief signalling trilemma, which I think is nice to be in some of the time. The sermons are usually way too vague, but have produced interesting thoughts when I added details to them on my own and then analyzed my version. There is also (respectful) debating, which I think I find fun regardless of who I am debating with. I like how it enables people to share significant highs or lows in their life, so the community can help them. There are pot-lucks and game nights, and courses on philosophy and religions. There is also singing, which I am not so crazy about, but my fiancee loves.
They are reaching many of the wrong conclusions. I think this might be because their definition of “use reason” is just to think about their beliefs, which is not enough. When I say “use reason,” I mean thinking about my beliefs in a specific way. That specific way is something that I think a lot of us on Less Wrong have roughly in common, and it would take too long to describe all the parts of it now. To point out a specific example, one UU said to me “There are some mysteries we can never get answers to, like what happens when we die,” and then later “I am a firm believer in reincarnation, because I have had experiences where I felt my past lives.” I never questioned that she had those experiences, but I argued a bit and was able to get her to change her first statement, since her reincarnation experiences were evidence against it, which I thought was an improvement. However, not noticing how contradictory these beliefs were is not something I would call “reason.”
Perhaps what is bothering me is a difference in cognitive ability, and the UU version of “reason” is as much as I can expect from the average person. Or perhaps these are people who are genuinely interested in being rational, and would be very supportive of learning how, but have not yet learned. It could also be that they just want to say that they are using “reason.”
Not much. That is a good idea. I was considering hosting a workshop on rationality through the church. If I ever go through with it, that will probably be part of it. My parents’ UU church had a class on what QM teaches us about theology and philosophy.
I’m not really invested enough in the question to debate it, but I know plenty of atheists (both with and without children) who are active members of UU churches because they get more of the things they value from a social community there than they do anywhere else, and this seems entirely sensible to me. What effects on your future children are you concerned about?
I am concerned that they will treat supernatural claims as reasonable. I consider myself rational enough to be able to put up with some of the crazy stuff many UU individuals believe (beliefs not shared by the community). I am worried that my children might believe them, and even more worried that they might not look at beliefs critically enough.
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don’t.
If you live in the United States, the odds are high that your child’s school will do the same thing.
My suggestion would be that you teach your children how to operate sensibly in such an environment, rather than try to keep them out of such environments, but of course parenting advice from strangers on the Internet is pretty much worthless.
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don’t.
I actually do not think that is true. They will treat supernatural claims as reasonable, but would not condemn me for not treating them as reasonable. They might condemn me for being avoidably rude, but I don’t even know about that.
We actually plan on homeschooling, but that is not for the purpose of keeping kids out of an insane environment as much as trying to teach them actually important stuff.
If your elementary-schooler goes around insistently informing the other little kids that Santa isn’t real, you will likely be getting an unhappy phone call from the school, never mind the religious bits that the adults actually believe.
However, I am trying to decide if my morality has a problem with allying myself this community.
What’s your moral system? If you get value from the community it’s probably more moral to focus your efforts on donating more for bed nets than on the effect that you have on the world through being a member of that community.
I think it is not productive to analyze anything as being moral by comparing it to working for money for bed nets. Most everything fails.
I think I might have made a mistake in saying this was a moral issue. I think it is more of an identity issue. I think the consequences for the world of me being Unitarian are minimal. Most of the effect is on me. I think the more accurate questions I am trying to answer are:
Are Unitarians good under my morals? Do their shared values agree with mine enough that I should identify as being one?
I think the reason this is not an instrumental issue for me, but rather an epistemic issue, is that I believe the fact that I will continue to go to congregation is already decided. It is a fun bonding time which sparks lots of interesting philosophical discussion. If I were not in my current relationship, I would probably bring that question back to the table.
I realize that this does not change the fact that the answer is heavily dependent on my moral system, so I will try to comment on that with things that are specific to UU.
I generally agree with the 7 principles of UU, with far more emphasis on “A free and responsible search for truth and meaning.” However, these principles are not particularly controversial, and I think most people would agree with most of them. The defining part of UU, I think, is the strategy of “Let’s agree to disagree on the metaethics and metaphysics, and focus on the morals themselves which are what matters.” I feel like this could be a good thing to do some of the time. Ignore the things that we don’t understand and agree on, and work on making the world better using the values we do understand and agree on. However, I am concerned that perhaps the UU philosophy is not just to ignore the metaethics and metaphysics temporarily so we can work together, but rather to not care about these issues and not be bothered by the fact that we appear confused. This I do not approve of. These are important questions, and you don’t know if what you don’t know can’t hurt you.
They are important because they are confusing. Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list. I potentially lose a lot by not pushing these topics further.
Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list.
For most people I don’t think that metaethical considerations have a huge effect on their day-to-day decision making.
Metaphysics seems interesting. Do you think that you might start believing in paranormal stuff if you spend more effort investigating metaphysical questions? What other possible changes in your metaphysical position could you imagine that would have huge effects on your decision making?
I potentially lose a lot by not pushing these topics further.
Going to UU won’t stop you from discussing those concepts on LessWrong.
I’m personally part of diverse groups and don’t expect any one group to fulfill all my needs.
I do not think that I will start believing in paranormal stuff. I do not know what changes might arise from changes in my metaphysical position. I was not trying to single out these things as particularly important as much as I am just afraid of all things that I don’t know.
Going to UU won’t stop you from discussing those concepts on LessWrong.
I’m personally part of diverse groups and don’t expect any one group to fulfill all my needs.
This is good advice. My current picture of UU is that it has a lot of problems, most of which are not problems for me personally, since I am also a rationalist and on LW. I think UU and LW are the only groups I am actively a part of other than my career. I wonder what other viewpoints I am missing out on.
I’m seeing a lot of comments in which it is implicitly assumed that most everyone reading lives in a major city where transportation is trivial and there is plenty of memetic diversity. I’m wondering if this assumption is generally accurate and I’m just the odd one out, or if it’s actually kinda fallacious.
A city of ~200,000 people if you include the outlying rural areas, in which you can go from the several block wide downtown to farmland in 4-5 miles in the proper directions. Fifteen minutes from another city of 60,000 which is very much a state college town. Forty minutes away from a city of nearly 500,000 people.
Granted the city of ~200,000 has a major university and a number of biotech companies.
I think living in a big city is the standard that most people here consider normal. It’s like living in the first world. We know that there are people from India who visit but we still see being from the first world as normal.
When you have the choice between living in a place with memetic diversity or not living in such a place the choice seems obvious.
I’m back in school studying computer science (with a concentration in software engineering), but plan on being a competent programmer by the time I graduate, so I figure I need to learn lots of secondary and tertiary skills in addition to those that are actually part of the coursework. In parallel to my class subjects, I plan on learning HTML/CSS, SQL, Linux, and Git. What else should be on this list?
Preliminaries: Make sure you can touch type, being able to hit 50+ wpm without sweat makes it a lot easier to whip up a quick single-screen test program to check up something. Learn a text editor with good macro capabilities, like Vim or Emacs, so you can do repetitive structural editing of text files without having to do every step by hand. Get into the general habit of thinking that whenever you find yourself doing several repetitive steps by hand, something is wrong and you should look into ways to automate the loop.
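To make that last habit concrete, here is a minimal Python sketch of automating one such loop; the directory and naming scheme are made-up examples, not anything from the original advice:

```python
import os

# Hypothetical chore: rename "Screen Shot <date>.png" to "shot-<date>.png"
# for a whole directory, instead of doing each file by hand.
for name in os.listdir("screenshots"):
    if name.startswith("Screen Shot ") and name.endswith(".png"):
        date = name[len("Screen Shot "):-len(".png")]
        os.rename(os.path.join("screenshots", name),
                  os.path.join("screenshots", "shot-" + date + ".png"))
```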
Working with large, established code bases, like Vladimir_Nesov suggested, is what you’ll probably end up doing a lot as a working programmer. Better get used to it. There are many big open-source projects you can try to contribute to.
Unit tests, test-driven development. You want the computer to test as much of the program as possible. Also look into the major unit testing frameworks for whatever language you’re working on.
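As a minimal sketch of what this looks like in practice, using Python’s built-in unittest module (the function under test is a made-up example):

```python
import unittest

def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Open Thread"), "open-thread")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("Open   Thread "), "open-thread")

if __name__ == "__main__":
    unittest.main()
```

In test-driven development you would write the failing tests first, then write slugify until they pass.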
Build systems, rigging up a complex project to build with a single command-line command. Also look into build servers, nightly builds, and the works. A real-world software project will want a server that automatically builds the latest version of the software every night and makes noise at the people responsible if it won’t build, or if a unit test fails.
Oh, and you’ll want to know a proper command line for that. So when learning Linux, try to do your stuff in the command line instead of sticking to the GUI. Figure out where the plaintext configuration files driving whatever programs you use live and how to edit them. Become suspicious of software that doesn’t provide plaintext config files. Learn about shell scripting and one-liners, and what the big deal in Unix is about piping output from one program to the next.
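The idea behind pipes—small filters composed over a stream of lines—can be sketched in Python too. This is roughly the shape of `grep ERROR app.log | cut -d' ' -f1 | sort | uniq`; the log file name is hypothetical:

```python
def grep(lines, needle):
    # Keep only lines containing the needle.
    return (line for line in lines if needle in line)

def first_field(lines):
    # Emit the first whitespace-separated field of each line.
    return (line.split()[0] for line in lines)

with open("app.log") as f:  # hypothetical log file
    for field in sorted(set(first_field(grep(f, "ERROR")))):
        print(field)
```

Each stage consumes the previous one lazily, just as each process in a shell pipeline reads the previous one’s output.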
Git is awesome. After you’ve figured out how to use it on your own projects, look into how teams use it. Know what people are talking about when they talk about a Git workflow. Maybe check out Gerrit for a collaborative environment for developing with Git. Also check out bug tracking systems and how those can tie into version control.
Know some full stack of web development. If you want a web domain running a neat webapp, how would you go about getting the domain, arranging for the hosting, installing the necessary software on the computer, setting up the web framework and generating the pages that do the neat thing? Can you do this by rolling your own minimal web server instead of Apache and your own minimal web framework instead of whatever out of the box solution you’d use? Then learn a bit about the out of the box web server and web framework solutions.
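For a taste of the “roll your own minimal web server” end of that exercise, here’s a hedged sketch in plain Python sockets—just enough HTTP to show a page; the port and page content are arbitrary:

```python
import socket

# A deliberately minimal HTTP server: read a request, serve one canned page.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(1)

while True:
    conn, _addr = server.accept()
    conn.recv(4096)  # ignore the request details; always serve the same page
    body = b"<html><body>Hello from a hand-rolled server</body></html>"
    headers = (b"HTTP/1.0 200 OK\r\n"
               b"Content-Type: text/html\r\n"
               b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n")
    conn.sendall(headers + body)
    conn.close()
```

Once you’ve seen how little is underneath, the out-of-the-box servers and frameworks stop being magic.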
Have a basic idea about the JavaScript ecosystem for frontend web development.
Look into cloud computing. It’s new enough not to have made it into many curricula yet. It’s probably not going to go away anytime soon. How would you use it, why would you want to use it, when would you not want to use it? Find out why map-reduce is cool.
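Why map-reduce is cool is easiest to see in the canonical word-count example. Here is a single-machine Python sketch of the same three phases that a real framework distributes across a cluster; the input documents are toy stand-ins:

```python
from collections import defaultdict
from itertools import chain

docs = ["the cat sat", "the cat ran"]  # stand-ins for big input files

# Map: each document independently emits (word, 1) pairs -- trivially parallel.
mapped = chain.from_iterable(((w, 1) for w in doc.split()) for doc in docs)

# Shuffle: group the pairs by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce: combine each group's values -- also parallel, one key at a time.
counts = {word: sum(ones) for word, ones in groups.items()}
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```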
Learn how the Internet works. Learn why people say that the Internet was made by pros and the web was made by amateurs. Learn how to answer the interview question “What happens between typing a URL in the address field and the web page showing up in the browser” in as much detail as you can.
Look into the low-level stuff. Learn some assembly. Figure out why Forth is cool by working through the JonesForth tutorial. Get an idea of how computers work below the OS level. The Elements of Computing Systems describes this for a toy computer. Read up on how people programmed a Commodore 64; it’s a lot easier to understand than a modern PC.
Learn about the difference between userland and kernel space in Linux, and how programs written (in assembly) right on top of the kernel work. See how the kernel is put together. See if you can find something interesting to develop in the kernel-side code.
Learn how to answer the interview question “What happens between pressing a key on the keyboard and a letter showing up on the monitor” in as much detail as you can.
Write a simple ray-tracer and a simple graphics program that does something neat with modern OpenGL and shaders. If you want to get really crazy with this, try writing a demoscene demo with lots of graphical effects and a synthesized techno soundtrack. If you want even crazier, try to make it a 4k intro.
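The heart of a simple ray tracer is smaller than it sounds; the ray-sphere intersection test, for instance, is just the quadratic formula. A sketch with vectors as plain tuples (a real tracer would loop this over every pixel):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None for a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None         # a hit behind the camera doesn't count

print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```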
Come up with a toy programming language and write a compiler for it.
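Even a weekend-sized version teaches a lot. As a hedged sketch, here is an arithmetic-expression “compiler” in Python targeting a made-up stack machine; it cheats by borrowing Python’s own parser instead of writing one, which would be the natural next step:

```python
import ast

OPS = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}

def compile_expr(source):
    """Compile an expression like '1 + 2 * 3' into stack-machine instructions."""
    code = []
    def walk(node):
        if isinstance(node, ast.BinOp):
            walk(node.left)
            walk(node.right)
            code.append((OPS[type(node.op)], None))
        elif isinstance(node, ast.Constant):
            code.append(("PUSH", node.value))
    walk(ast.parse(source, mode="eval").body)
    return code

def run(code):
    """Interpret the compiled instructions on a simple stack machine."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b, "MUL": a * b}[op])
    return stack.pop()

print(run(compile_expr("1 + 2 * 3")))  # 7
```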
Write a toy operating system. Figure out how to make a thing that makes a PC boot off the bare iron, prints “Hello world” on the screen and doesn’t do anything beyond that. Then see how far you can get in making the thing do other things.
not having to pay attention to the keyboard—your fingers should know what to do without taking up mindspace
Yes, this is a critical skill. Especially when someone is learning programming, it is sad to see their thinking interrupted all the time by things like “where do I find the ‘&’ key on my keyboard?”—and when the key is finally found, they have already forgotten what they wanted to write.
your typing being able to keep up with your thinking
This part is already helped by many development environments, where you just write a few symbols and press Ctrl+space or something, and it completes the phrase. But this helps only with long words, not with symbols.
It’s not the top speed, it’s the overhead. It is incredibly irritating to type slowly or make typos when you’re working with a REPL or shell and are tweaking and retrying multiple times: you want to be thinking about your code and all the tiny niggling details, and not about your typing or typos.
It’s a good start, but I notice a lack of actual programming languages on that list. This is a very common mistake. A typical CS degree will try to make sure that you have at least basic familiarity with one language, usually Java, and will maybe touch a bit on a few others. You will gain some superpowers if you become familiar with all or most of the following:
A decent scripting language, like Python or Ruby. The usual recommendation is Python, since it has good learning materials and an easy learning curve, and it’s becoming increasingly useful for scientific computing.
A lisp. Reading Structure and Interpretation of Computer Programs will teach you this, and a dizzying variety of other things. It may also help you achieve enlightenment, which is nice. Seriously, read this book.
Something low-level, usually C.
Something super-low-level: an assembly language. You don’t have to be good at writing in it, but you should have basic familiarity with the concepts. Fun fact: if you know C, you can get the compiler to show you the corresponding assembly.
You should take the time to go above and beyond in studying data structures, since it’s a really vital subject and most CS graduates’ intuitive understanding of it is inadequate. Reading through an algorithms textbook in earnest is a good way to do this, and the Wikipedia pages are almost all surprisingly good.
When you’re learning git, get a GitHub account, and use it for hosting miscellaneous projects. Class projects, side projects, whatever; this will make acquiring git experience easier and more natural.
I’m sure there’s more good advice to give, but none of it is coming to mind right now. Good luck!
Sorry if I wasn’t clear. I intended the list to include only skills that make you a more valuable programmer that aren’t explicitly taught as part of the degree. Two Java courses (one object-oriented) are required as is a Programming Languages class that teaches (at least the basics of) C/C++, Scheme, and Prolog. Also, we must take a Computer Organization course that includes Assembly (although, I’m not sure what kind). Thanks for the advice.
In school you are typically taught to make small projects: a small algorithm, or a small demonstration that you can display information in an interactive user interface.
In real life (at least in my experience), the applications are typically big. Not too deep, but very wide. You don’t need complex algorithms; you just have dozens of dialogs, hundreds of variables and input boxes, and must create some structure to prevent all this from falling apart (especially when the requirements keep changing while you code). You also have a lot of supporting functionality in a project (for example: database connection, locking, transactions, user authentication, user roles and permissions, printing, backup, export to PDF, import from Excel, etc.). Again, unless you have structure, it falls apart. And you must take good care of many things that may go wrong (such as: if the user’s web browser crashes, so the user cannot explicitly log out of the system, the edited item should not remain locked forever).
To be efficient at this, you also need to know some tools for managing projects. Some of those tools are Java-specific, so your knowledge of Java should include them; they are parts of the Java ecosystem. You should use javadoc syntax to write comments; JUnit to write unit tests; Maven to create and manage projects, some tools to check your code quality, and perhaps even Jenkins for continuous integration. Also the things you already have on your list (HTML, CSS, SQL, git) will be needed.
To understand creating web applications in Java, you should be able to write your own servlet, and perhaps even write your own JSP tag. Then all the frameworks are essentially libraries built on this, so you will be able to learn them as needed.
As an exercise, you could try to write a LessWrong-like forum in Java (with all its functionality; of course use third-party libraries where possible); with javadoc and unit tests. If you can do that, you are 100% ready for the industry (the next important skill you will need is leading a team of people who don’t have all of these skills yet, and then you are ready for the senior position). But that can take a few months of work.
There is another aspect of working on big projects that seems equally important. What you are talking about I’d call “design”, the skill of organizing the code (and more generally, the development process) so that it remains intelligible and easy to teach new tricks as the project grows. It’s the kind of thing reading SICP and writing big things from scratch would teach.
The other skill is “integration”, ability to open up an unfamiliar project that’s too big to understand well in a reasonable time, and figure out enough about it to change what you need, in a way that fits well into the existing system. This requires careful observation, acting against your habits, to conform to local customs, and calibration of the sense of how well you understand something, so that you can judge when you’ve learned just enough to do your thing right, but no less and not much more. Other than on a job, this could be learned by working a bit (not too much on each one, lest you become comfortable) on medium/large open source projects (implementing new features, not just fixing trivial bugs), possibly discarding the results of the first few exercises.
I’ve TAed a class like the Programming Languages class you described. It was half Haskell, half Prolog. By the end of the semester, most of my students were functionally literate in both languages, but I did not get the impression that the students I later encountered in other classes had internalized the functional or logical/declarative paradigms particularly well—e.g., I would expect most of them to struggle with Clojure. I’d strongly recommend following up on that class with SICP, as sketerpot suggested, and maybe broadening your experience with Prolog. In a decade of professional software engineering I’ve only run into a handful of situations where logic programming was the best tool for the job, but knowing how to work in that paradigm made a huge difference, and it’s getting more common.
I know actuaries have huge tables of probabilities of death at any given age based on historical data. Where can I find more detailed data for cause of death? Can someone point me to similar tables for major life events, such as the probabilities of being robbed, laid off, being in an accident of some kind, getting divorced, and so on?
I am becoming a believer in being prepared; even if there is no cost-effective preventative measure, being mentally prepared for an event is very beneficial in my experience.
Oh wow, a highly motivated person can do significant original mortality research via their online tool. You can generate cause of death graphs for almost any sort of cohort you might care about.
It seems to be pretty well decided that (as opposed to directly promoting Less Wrong, or Rationality in general), spreading HPMoR is a generally good idea. What are the best ways to go about this, and has anyone undertaken a serious effort?
I came to the conclusion, after considering creating some flyers to post around our meetup’s usual haunts, that online advocacy would be much more efficient and cost-effective. Then, after deciding that promotion on large sites is mostly lost in the noise, I realized that sharing among smaller communities that you are already a part of (game/specific-interest forums, Facebook groups, etc.) might increase the likelihood of a clickthrough, due to even a modest amount of social clout and in-group effect (as opposed to creating an account just to spam). And posting (and bumping) is a very trivial inconvenience—but if you are still held back by the effort of creating a blurb, I’m happy to provide the one I used.
Of course, you should only do this where the forum has made the foolish choice to allow signatures. (One of the things I appreciate about Reddit/LW compared to forums is how they strongly discourage signatures.)
Convince me of this claim that you think is well decided.
I am not convinced that, from the viewpoint of a non-rationalist, HPMoR doesn’t have many of the same problems as Spock. I can see many people reading the book, feeling that HP is too “evil,” and deciding that “rationality” is not for them.
Also, EY said “Authors of unfinished stories cannot defend themselves in the possible worlds where your accusation is unfair.” This should swing both ways. If it turns out that HP goes crazy because he was being meta and talking to himself too much, then spreading HPMoR is probably not as good an idea.
Why is “downvoted” so frequently modified by “to oblivion”? Can we please come up with a new modifier here? This is already a dead phrase, a cliche which seems to get typed without any actual thought going into it. Wouldn’t downvoting “to invisibility” or “below the threshold” or even just plain “downvoting”, no modifier, make a nice change?
Slang vocabulary tends to become more consistent and repetitive over time in my experience. New phrases will appear and then go to fixation until everyone uses them. The only answer is to try to be as creative as possible in your own word choices.
Is the problem of measuring rationality related to the problem of measuring programming skill? Both are notoriously hard, but I can’t tell if they’re hard for the same reason...
A personal anecdote I’d like to share which relates to the recent polyphasic sleep post ( http://lesswrong.com/lw/ip6/polyphasic_sleep_seed_study_reprise/ ):
My 7-year-old son, who always tended to sleep long and late, seems to have developed segmented sleep by himself in the last two weeks.
He claims to wake at, e.g., 3:10 AM, get dressed, butter his school bread—and go to bed again, in our family bed. It’s no joke: he lies dressed in bed and his satchel is packed.
And the interesting thing is: he is more alert and less bad-tempered than before. He doesn’t do afternoon naps, though—at least none that I know of.
What can have caused this? Maybe the seed was that our children were always allowed to come into the family bed in the night (but only in the night) which they did often.
I remember reading somewhere (sorry, no link) that waking up at midnight, and then going to sleep again after an hour or so, was considered normal a few hundred years ago. Now this habit is gone, probably because we make the night shorter using artificial lights.
Yes. I know. See e.g. http://en.wikipedia.org/wiki/Segmented_sleep
I knew that beforehand. That was the reason I wasn’t worried when my children woke up at night and crawled into our family bed (some other parents seem to worry about the quality of their children’s sleep).
But I’m surprised that he actually segmented and that it went this far. I understood that artificial lighting—and we have enough of that—suppresses this segmentation.
I understood that artificial lighting—and we have enough of that—suppresses this segmentation.
Perhaps it is not the light per se, but the fact that when you stay awake in the evening and wake up to an alarm clock in the morning, the body learns to give up segmented sleep to protect itself from sleep deprivation. Maybe the interval for your children between going to sleep and having to wake up is large enough.
Possibly. But he has always been a late riser, and he doesn’t really go to sleep earlier than before. In fact, he gets up earlier than before. But maybe his sleep pattern is just changing due to normal development.
My older son (9 years) also sometimes gets up in the night to visit the family bed. But I guess he is not awake long. He likes to build things and read or watch movies (from our file server) until quite late in the evening (often 10 PM). We allow that because he has no trouble getting up early.
Do I have a bias or a useful heuristic? If a signal is easy to fake, is it a bias to assume that it is disingenuous, or is it a useful heuristic?
I read Robin Hanson’s post about why there are so many charities specifically focusing on kids, and he basically summed it up as signalling kindness to potential mates being a major factor. There were some good rebuttals in the comment section, but whether or not signalling is at play is not the point; I’m sure to a certain degree it is—how much, I don’t know. The point is that I automatically dismiss the authenticity of a signal if the signal is difficult to authenticate. In this example it is possible for people both to signal to a potential mate that they care about children, and to actually really care about children (e.g. an innate emotional response).
EDIT: Just to be clear, this is a question about signalling and how I strongly associate easy to fake signals with dishonest signalling, not about charities.
Every heuristic involves a bias when you use it in some contexts.
Yes, but does it more often yield a satisfactory solution across many contexts? If yes, then I’d label it a useful heuristic; if it is often wrong, I would label it a bias.
You’re not using your words as effectively as you could be. Heuristics are mental shortcuts, bias is a systematic deviation from rationality. A heuristic can’t be a bias, and a bias can’t be a heuristic. Heuristics can lead to bias. The utility of a certain heuristic might be evaluated based on an evaluation of how much computation using the heuristic saves versus how much bias using the heuristic will incur. Using a bad heuristic might cause an individual to become biased, but the heuristic itself is not a bias.
I agree with your last sentence. The important thing should be how much good does the charity really do to those children. Are they really making their lives better, or is it merely some nonsense to “show that we care”?
Because there are many charities (at least in my country) focusing on providing children things they don’t really need; such as donating boring used books to children in orphanages. Obviously, “giving to children in orphanages” is a touching signal of caring, and most people don’t realize that those children already have more books than they can read (and they usually don’t wish to read the kind of books other people are throwing away, because honestly no one does). In this case, the real help to children in orphanages would be trying to change the legislation to make their adoption easier (again, this is an issue in my country, in your part of the world the situation may be different), helping them avoid abuse, or providing them human contact and meaningful activities. But most people don’t care about the details, not even enough to learn those details.
This depends on what you mean by “care”, i.e., they care about children in the sense that they derive warm fuzzies from doing things that superficially seem to help them. They don’t care in the sense that they aren’t interested in how much said actions actually help children (or whether they help them at all).
If I do something for myself, and there is no obvious result, I see that there is no obvious result, and it disappoints me. If I do something for other people, there is always an obvious result: I feel better about myself.
Because other people reward you socially for doing things for other people. If you do something good for person A, it makes sense for a person A to reward you—they want to reinforce the behavior they benefit from. But it also makes sense for an unrelated person B to reward you, despite not benefiting from this specific action—they want to reinforce the general algorithm that makes you help other people, because who knows, tomorrow they may benefit from the same algorithm.
The experimental prediction of this hypothesis is that the person B will be more likely to reward you socially for helping person A, if the person B believes they belong to the same reference class as person A (and thus it is more likely that an algorithm benefiting A would also benefit B).
Now who would have a motivation to reward you for helping yourself? One possibility is a person who really loves you; such a person would be happy to see you doing things that benefit you. Parents or grandparents may be in that position naturally.
Another possibility is a person who sees you as a loyal member of their tribe, but not a threat. For such a person, your success is the tribe’s success, and the tribe’s success is their success. They benefit from having stronger allies—unless those allies becoming strong changes their position within the tribe. So one would help members of their tribe who are significantly weaker… or perhaps even significantly stronger… in either case the tribe becomes stronger and one’s relative position within the tribe is not changed. The first part is teachers helping their students, or tribe leaders helping their tribe except for their rivals; the second part is average tribe members supporting their leader.
Again, the experimental prediction would be that when you join some “tribe”, the people stronger than you will support you at the beginning, but then will be likely to stab you in the back when you reach their level.
Now, how to use this knowledge for success in real life? We are influenced by social rewards whether we want it or not. One strategy could be to reward myself indirectly—for example, make a commitment that when I make something useful for myself, I will reward myself by exposing myself to a friendly social interaction. A second strategy is to find the company of people who love me, using “do they reward me for helping myself?” as a filter. (The problem is how to tell the difference between these people and those who reward me for being a weak member of their tribe, and will later backstab me when I become stronger.) A third strategy is to find the company of people much stronger than me with similar values. (And not forget to switch to even stronger people when I become strong.) Another strategy could be to join a group that feels far from victory—a group that is still in the “conquering the world” mode, not in the “sharing the spoils” mode. (Be careful when the group reaches some victories.)
Anecdotal verification: one of my friends said that when he was running out of money, it made sense for him to buy meals for other people. Those people didn’t reciprocate, but third parties were more likely to help him.
Then I guess people from CFAR should go to some universities and give lectures about… effective altruism. (With the expected result that the students will be more likely to support CFAR and attend their seminars.) Or I could try this in my country when recruiting for my local LW group.
I guess it also explains why religious groups focus so much on charity. It is difficult to argue against a group that many people associate with “helping others”, even if other actions of the group hurt others. The winning strategy is probably making the charity 10% of what you really do, but 90% of what other people associate with you.
EDIT: Doing charity is the traditional PR activity of governments, U.N., various cults and foundations. I feel like reinventing the wheel again. The winning strategies are already known and fully exploited. I just didn’t recognize them as viable strategies for everyone including me, because I was successfully conditioned to associate them with someone else.
Sure. For example if you are donating money, you display your ability to make more money than you need. And if you donate someone else’s money (like a church that takes money from state), you display your ability to take money from people, which is even more impressive.
Because it’s considered good even to try to help someone else, so you care less about outcomes. E.g., donating to charity is a good act regardless of whether you check to see if your donation saved a life. On the other hand, doing something for yourself that has no real benefits is viewed as a waste of time.
I am wondering what a PD tournament would look like if the goal was to maximize the score of the group rather than the individual player. For some payoff matrices, always cooperate trivially wins, but what if C/D provides a greater net payoff than C/C, which in turn is higher than D/D? Does that just devolve to the individual game? It feels like it should, but it also feels like giving both players the same goal ought to fundamentally change the game.
I haven’t worked out the math; the thought just struck me while reading other posts.
The game you are talking about should not be called PD.
The solution will be for everyone to pick randomly (weighted based on the difference between the C/C and D/D payoffs) until they get a C/D outcome, and then just pick the same thing over and over. (This isn’t a unique solution, but it seems like a Schelling point to me.)
what if C/D provides a greater net payoff than C/C
The Prisoner’s Dilemma is technically defined as requiring that this not be the case, precisely so that one doesn’t have to consider the case (in iterated games) where the players agree to take turns cooperating and defecting. You are considering a related but not identical game. Which is of course totally fine, just saying.
If you allow C/D to have a higher total than C/C, then it seems there is a meta-game in coordinating the turn-taking—“cooperating” in the meta-game takes the form of defecting only when it’s your turn. Then the players maximise both their individual scores and the group score by meta-cooperating.
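A quick numerical illustration of that meta-game in Python, with made-up payoffs where the C/D total (6) beats the C/C total (4):

```python
# Illustrative payoff matrix (row player, column player).
# C/D totals 6 > C/C totals 4 > D/D totals 2 -- the case discussed above.
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (1, 5),
           ("D", "C"): (5, 1), ("D", "D"): (1, 1)}

def group_score(history):
    return sum(sum(PAYOFFS[moves]) for moves in history)

rounds = 10
always_cooperate = [("C", "C")] * rounds
take_turns = [("D", "C") if i % 2 == 0 else ("C", "D") for i in range(rounds)]

print(group_score(always_cooperate))  # 40
print(group_score(take_turns))        # 60 -- meta-cooperation wins for the group
```

With these numbers each player also scores 30 individually under turn-taking versus 20 under mutual cooperation, so meta-cooperating dominates on both counts.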
Ilya Shkrob’s In The Beginning is an attempt to reconcile science and religion. It’s the best such attempt that I’ve seen, better than I thought possible. If you enjoy “guru” writers like Eliezer or Moldbug, you might enjoy this too.
I haven’t found one, so I’ll try to summarize here:
“Prokaryotic life probably came to Earth from somewhere else. It was successful and made Earth into a finely tuned paradise. (A key point here is the role of life in preserving liquid water, but there are many other points, the author is a scientist and likes to point out improbable coincidences.) Then a tragic accident caused individualistic eukaryotic life to appear, which led to much suffering and death. Evolution is not directionless, its goal is to correct the mistake and invent a non-individualistic way of life for eukaryotes. Multicellularity and human society are intermediate steps to that goal. The ultimate goal is to spread life, but spreading individualistic life would be bad, the mistake has to be corrected first. Humans have a chance to help with that process, but aren’t intended to see the outcome.”
The details of the text are more interesting than the main idea, though.
Hold on, is he trying to imply that prokaryotes aren’t competitive? Not only does all single-celled life compete, it competes at a much faster pace than multicellular life does.
Based on that summary, I’d say that it’s interesting because it draws on enough real science to be superficially plausible, while appealing to enough emotional triggers to make people want to believe in it enough that they’ll be ready to ignore any inconsistencies.
Superficially plausible: Individuals being selfish and pursuing their own interest above that of others is arguably the main source of suffering among humans, and you can easily generalize the argument to the biosphere as a whole. Superorganisms are indeed quite successful due to their ability to suppress individualism, as are multi-celled creatures in general. Humans do seem to have a number of adaptations that make them more successful by reducing individualistic tendencies, and it seems plausible to claim that even larger superorganisms with more effective such adaptations could become the dominant power on Earth. If one thinks that there is a general trend of more sophisticated superorganisms being more successful and powerful, then the claim that “evolution is not directionless” also starts to sound plausible. The “humans have a chance to help with that process but aren’t intended to see the outcome” is also plausible in this context, since a true intelligent superorganism would probably be very different from humanity.
“Evolution leads to more complex/intelligent creatures and humans are on top of the hierarchy” is an existing and widely believed meme that similarly created a narrative that put humans on top of the existing order, and this draws on that older meme in two ways: it feels plausible and appealing for many of the same reasons why the older meme was plausible, and anyone who already believed in the old meme will be more inclined to see this as a natural extension of the old one.
Emotional triggers: It constructs a powerful narrative of progress that places humans at the top of the current order, while also appealing to values related to altruism and sacrificing oneself for a greater whole, and providing a way to believe that things are purposeful and generally evolving towards the better.
The notion of a vast superorganism that will one day surpass and replace humanity also has the features of vastness and incomprehensibility, two features which Keltner and Haidt claim form the heart of prototypical cases of awe:
Vastness refers to anything that is experienced as being much larger than the self, or the self’s ordinary level of experience or frame of reference. Vastness is often a matter of simply physical size, but it can also involve social size such as fame, authority, or prestige. Signs of vastness such as loud sounds or shaking ground, and symbolic markers of vast size such as a lavish office can also trigger the sense that one is in the presence of something vast. In most cases vastness and power are highly correlated, so we could have chosen to focus on power, but we have chosen the more perceptually oriented term “vastness” to capture the many aesthetic cases of awe in which power does not seem to be at work.
Accommodation refers to the Piagetian process of adjusting mental structures that cannot assimilate a new experience (Piaget & Inhelder, 1966/1969). The concept of accommodation brings together many insights about awe, such as that it involves confusion (St. Paul) and obscurity (Burke), and that it is heightened in times of crisis, when extant traditions and knowledge structures do not suffice (Weber). We propose that prototypical awe involves a challenge to or negation of mental structures when they fail to make sense of an experience of something vast. Such experiences can be disorienting or even frightening, as in the cases of Arjuna and St. Paul, since they make the self feel small, powerless, and confused. They also often involve feelings of enlightenment, and even rebirth, when mental structures expand to accommodate truths never before known. We stress that awe involves a need for accommodation, which may or may not be satisfied. The success of one’s attempts at accommodation may partially explain why awe can be both terrifying (when one fails to understand) and enlightening (when one succeeds).
The more I think of it, the more impressive the whole thing starts to feel, in the “memeplex that seems very effectively optimized for spreading and gaining loyal supporters” sense.
Sounds like an attempt to reconcile, not science and religion in general, but specifically science and the Christian concepts of the Fall and original sin; or possibly some sort of Gnosticism.
(Aleister Crowley made similar remarks about individuality as a disease of life in The Book of Lies, but didn’t go so far as to attribute it to eukaryotes.)
Well the relevant story (God banishing Adam and Eve from the Garden of Eden) is in Genesis, so it’s in the Torah as well. Gnostics considered the Fall a good thing—it freed humanity from the Demiurge’s control.
I don’t mean to say your conclusion is wrong, but I’d like to point out that if Eliezer’s ideas were summed up as one paragraph and posted to some other website, many people there would respond using the same thought process that you used. Anyway, a text can be wrong and still worth reading. I think the text I linked to is very worth reading. If you get halfway through and still think that it’s stupid, let me know—I’ll be pretty surprised.
I like this. Like all good religion, it’s an idea which feels true and profound but is also clearly preposterous.
It reminds me of some concepts in animes I liked, like the Human Instrumentality Project in Neon Genesis Evangelion and the Ragnarok Connection in Code Geass.
On the other hand, the event extinguished more species than the comet that killed the dinosaurs did. Maybe those amphibians just had a good strategy for dealing with the heat.
However! If it was cool enough in places far from Siberia, then this lava lake caused high temperatures only in its own vicinity. That is not the “global warming caused by CO2 buildup, 250 million years ago” scenario.
Then big amphibians could have survived in Antarctica, for example.
Amphibians have always been freshwater creatures. And if the oceans were hot because of this super-volcano, some distant ponds and lakes could have been merely warm.
Hm, the trouble is that this doesn’t account for the insulating effect of air, or a thin cool surface layer. A layer of air can reflect a lot of radiated heat right back into its source. Dare I say you might need something like a climate model to decide this?
I ate something I shouldn’t have the other day and ended up having this surreal dream where Mencius Moldbug had gotten tired of the state of the software industry and the Internet and had made his personal solution to it all into an actual piece of working software that was some sort of bizarre synthesis of a peer-to-peer identity and distributed computing platform, an operating system and a programming language. Unfortunately, you needed to figure out an insane system of phoneticized punctuation that got rewritten into a combinator grammar VM code if you wanted to program anything in it. I think there even was a public Github with reams of code in it, but when I tried to read it I realized that my computer was actually a cardboard box with an endless swarm of spiders crawling out of it while all my teeth were falling out, and then I woke up without ever finding out exactly how the thing was supposed to work.
Welcome to Urbit
I love the smell of Moldbug in the morning.
For an example of fully rampant Typical Mind Fallacy in Urbit, see the security document. About two-thirds of the way down, you can actually see Yarvin transform into Moldbug and start pontificating on how humans communicating on a network should work, and never mind the observable evidence of how they actually have behaved whenever each of the conditions he describes have obtained.
The very first thing people will do with the Urbit system is try to mess with its assumptions, in ways that its creators literally could not foresee (due to Typical Mind Fallacy), though they might have been reasonably expected to (given the real world as data).
I love those dream posts in the open threads.
Note that [explaining-the-joke](http://rot13.com/) rirelguvat hc gb gur pbzchgre orvat n pneqobneq obk vf yvgrenyyl gehr.
I think that he actually implemented the spiders.
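For anyone who would rather decode that comment locally than paste it into rot13.com, Python’s codecs module ships a rot13 codec; a minimal sketch (the string is the encoded comment above):

```python
# Decode the rot13-encoded comment above without visiting rot13.com.
import codecs

encoded = ("rirelguvat hc gb gur pbzchgre orvat n pneqobneq obk "
           "vf yvgrenyyl gehr.")
print(codecs.decode(encoded, "rot_13"))
```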
Is there a name for the following bias?
So I’ve debated a lot of religious people in my youth, and a common sort of “inferential drift”, if you can call if that, is that they believe that if you don’t think something is true or doesn’t exist, then this must mean that you don’t want said thing to be true or to exist. It’s like a sort of meta-motivated reasoning; they are falsely attributing your conclusions due to motivated reasoning. The most obvious examples are reading any sort of Creationist writing that critiques evolution, where they pretty explicitly attribute accepting the theory of evolution to a desire for god to not exist.
I’ve started to notice it in many other highly charged, mind-killing topics as well. Is this all in my head? Has anyone else experienced this?
I used to get a lot of people telling me I was an atheist because I either didn’t want there to be a god or because I wanted the universe to be logical (granted, I do want that, but they meant it in the pejorative Vulcan-y sense). I eventually shut them up with “who doesn’t want to believe they’re going to heaven?” but it took me a while to come up with that one.
I don’t understand it either, but this is a thing people say a lot.
This seems pretty close to a Bulverism: http://en.wikipedia.org/wiki/Bulverism
That does seem close to Bulverism. But what I described seem to be happening at a subconscious bias level, where people are somewhat talking past each other due to a sort of hidden assumption of Bulverism.
Then perhaps...
I’ve heard it called “psychologizing”.
If someone else accuses you of engaging in motivated reasoning, that’s ad hominem.
No, that is a mere assertion (which may or may not be true). If they claimed that he is wrong because he is engaging in motivated reasoning, then that would be ad hominem.
Wait, what? This might be a little off topic, but if you assert that they lack evidence and are drawing conclusions based on motivated reasoning, that seems highly relevant and not ad hominem. I guess it could be unnecessary, as you might try to focus exactly on their evidence, but it would seem reasonable to look at the evidence they present and say “this is consistent with motivated reasoning; for example, you describe many things that would happen by chance but nothing similar that is contradictory, so there seems to be some confirmation bias,” etc.
Robin Hanson defines “viewquakes” as “insights which dramatically change my world view.”
Are there any particular books that have caused you personally to experience a viewquake?
Or to put the question differently, if you wanted someone to experience a viewquake, can you name any books that you believe have a high probability of provoking a viewquake?
Against Intellectual Monopoly converted me from being strongly in favor of modern copyright to strongly against it.
The Feynman Lectures on Computation did this for me by grounding computability theory in physics.
I’m not sure whether it is possible, or has a high chance of success, to give someone a book in the hope of provoking a viewquake. Most people would detect being influenced. Compare: giving people the Bible to convert them doesn’t work either, even though it could also provoke a viewquake—after all, the Bible is also much different from other common literature. To actually provoke a viewquake, a book must supply a missing piece, either connecting existing pieces or building on them, and thus cause an aha moment. And the trouble is that this depends critically on your prior knowledge, so not every book will work on everyone.
Compare with http://en.wikipedia.org/wiki/Zone_of_proximal_development
For someone who can actually get through the density of the text, Moldbug has been known to provoke a few viewquakes.
I know of a few former-theists whose atheist tipping point was reading Susan Blackmore’s The Meme Machine. I recall being fairly heavily influenced by this myself when I first read it (about twelve years ago, when it was one of only a small handful of popular books on memetics), but suspect I might find it a bit tiresome and erroneous if I were to re-read it.
I tried to read it a few years after reading a bunch of Dawkins and found it hard to get through.
A microecon textbook given to a reflective person.
The Sequences.
“1493” and “The Better Angels of Our Nature”
What was the viewquake for you in “1493”?
Primarily how much biology and ecosystems could have large-scale impacts on society and culture, in ways which stuck around even after the underlying issue was gone. One of the examples there is how the prevalence of diseases (yellow fever and malaria especially) had long-term impacts on the cultural differences between the North and the South in North America.
Understanding Power by Noam Chomsky.
Reading Wittgenstein’s Philosophical Investigations prompted the biggest viewquake I’ve ever experienced, substantially changing my conception of what a properly naturalistic worldview looks like, especially the role of normativity therein. I’m not sure I’d assign it a high probability of provoking a viewquake in others, though, given his aphoristic and often frustratingly opaque style. I think it worked for me because I already had vague misgivings about my prior worldview that I was having trouble nailing down, and the book helped bring these apprehensions into focus.
A more concrete scientific viewquake: reading Jaynes, especially his work on statistical mechanics, completely altered my approach to my Ph.D. dissertation (and also, incidentally, led me to LW).
The Anti-Christ would be my #1 pick, for both versions of the question. Stumbling on Happiness is a good second choice though.
The biggest world-shattering book for me was the classic, Engines of Creation by K. Eric Drexler. I was just 21 and the book had a large impact on me. Nowadays, though, the ideas in the book are pretty mainstream, so I don’t think it would have the same effect for a millennial.
While it’s overoptimistic and generally a bit all over the place, Kurzweil’s The Singularity is Near might still be the most bang-for-the-buck single introduction to the “humans are made of atoms” mindset you can throw at someone who is reasonably popular-science literate but hasn’t had any exposure to serious transhumanism.
It’s kinda like how The God Delusion might not be the most deep book on the social psychology of religion, but it’s still a really good book to give to the smart teenager who was raised by fundamentalists and wants to be deprogrammed.
After reading Engines of Creation, The Singularity is Near didn’t have nearly as much effect on me. I just thought, “Well, duh” while reading it. I can imagine how it would affect someone with little exposure to transhumanist ideas though. I agree with you that it’s a good choice.
CFAR has a class on handling your fight/flight/freeze reaction this Saturday Sept 28th.
The sympathetic nervous system activation that helps you tense up to take a punch or put on a burst of speed to outrun an unfriendly dog isn’t quite so helpful when you’re bracing to defend yourself against an intangible threat, like, say, admitting you need to change your mind.
One of CFAR’s instructors will walk participants through the biology of the fight/flight/freeze response and then run interactive practice on how to deliberately notice and adjust your response under pressure. The class is capped at 12, due to its interactive nature.
An iteration of this class was one of the high points of the May 2013 CFAR retreat for me. It was extraordinarily helpful in helping me get over various aversions, be less reactive and more agenty about my actions, and generally enjoy life more. For instance, I gained the ability to enjoy, or substantially increased my enjoyment of, several activities I didn’t particularly like, including:
improv games
additional types of social dance
conversations with strangers
public speaking
It also helped substantially with CFAR’s comfort zone expansion exercises. Highly recommended.
For those of us who can’t be in Berkeley on < 1 week’s notice, can you go into more detail on the methods?
A bit. Most of the techniques were developed by one of the CFAR instructors, and I can’t reproduce his instruction, nor do I want to steal his thunder. The closest thing you can find out more about is mindfulness-based stress reduction. (But the real value of the class is being able to practice with Val and ask him questions, which unfortunately I can’t do justice to in a LW comment.)
Would you be able to post a summary for people unable to attend? I find the topic very interesting, but habitually reside on a different continent.
Anyone here familiar enough with General Semantics and willing to write an article about it? Preferably not just a few slogans, but also some examples of how to use it in real life.
I have heard it mentioned a few times, and it sounds to me a bit LessWrongish, but I admit I am too lazy now to read a whole book about it (and I heard that Korzybski is difficult to read, which also does not encourage me).
I just started rereading Science and Sanity and maybe the project will develop into a lesswrong post.
When it comes to Korzybski being difficult to read, I think it’s because the ideas he advocates are complex.
As he writes himself:
It’s a bit like learning a foreign language in a foreign language. In some sense that seems necessary. A lot of dumbed-down elements of General Semantics made it into popular culture, but the core seems to be intrinsically hard.
Non-violent communication is the intellectual heir of E-Prime, which was the heir of semantic concerns in General Semantics. Recent books on the subject are well reviewed. It is a useful tool in communicating across large value rifts.
I don’t think it makes sense to speak of a single framework as the heir of General Semantics. General Semantics influenced quite a lot.
General Semantics itself is quite complex. Nonviolent communication is pretty useless when you want to speak about scientific knowledge. General Semantics’ notions of thinking about relations and structure, on the other hand, are quite useful.
Does Rosenberg cite Bourland (or Korzybski) anywhere? I thought these were independent inventions that happened upon some tangential ideas about non-judgmental thinking.
I had thought that there was a link, in that someone Rosenberg worked with helped develop it, but now I can’t find anything. The elimination of “to be” verb forms does not seem explicit in NVC methodology. I think you are correct and they are independent.
I noticed in the survey results from last year that there was a large number of people who assigned a non-trivial probability to the simulation hypothesis, yet identified as atheist.
I know this is just about definitions and labels, so it isn’t an incredibly important issue, but I was wondering why people choose to identify that way. It seems to me that if you assign a >20% chance to us living in a computer simulation, you should also identify as agnostic.
If not, it seems like you are using a definition of god which includes all the major religions, yet excludes our possible simulators. What is the distinction that you think makes the simulation not count as theism?
Probably these people use a definition of theism that says that a god has to be an ontologically basic entity in an absolute sense, not just relative to our universe. If our simulators are complex entities that have evolved naturally in their physical universe (or are simulated in turn by a higher level) then they don’t count as gods by this definition.
Also, the general definition of God includes omniscience and omnipotence, but a simulator-god may have neither; e.g. due to limited computing resources, they couldn’t simulate an arbitrarily large number of unique humans.
Hmm, that is a distinction that is pretty clear cut. However, most people who believe in god believe that all people have ontologically basic souls. Therefore, since they think being ontologically basic is nothing particularly special, I do not think that they would consider it a particularly important part of the definition of a god.
If you read the survey questions, God gets defined as an ontologically basic entity for the sake of the survey.
Oh. I was looking at the Excel data and missed that. Oops. Maybe this means a lot more people agree with me than I thought.
They might think that being ontologically basic is a necessary condition for being a god, but not a sufficient condition. Then simulators are not gods, but souls are not gods either because they do not satisfy other possible necessary conditions: e.g. having created the universe, or being omnipotent, omniscient and omnibenevolent (or at least being much more powerful, knowledgeable, and good than a human), etc.
Or perhaps, they believe being ontologically basic is necessary and sufficient for being a god, but interpret this not just as not being composed of material parts, but in the stronger sense of not being dependent on anything else for existing (which souls do not satisfy because they are created by God, and simulators don’t because they have evolved or have been simulated in turn). (ETA: this last possibility probably applies to some theists but not the atheists you are talking about.)
What is your response to the argument I gave below?
They are indeed logically distinct questions. However, up to a few years ago all or almost all people who said yes to 1 also said yes to 2. The word “theism” was coined with these people in mind and is strongly associated with yes to 2 and with the rest of the religious memeset.
Thus, it is not surprising that many people who only accept (or find likely) 1 but not 2 would reject this label for fear of false associations. Since people accepting both 1 and 2 (religionists) tend to differ philosophically very much in other things from those accepting 1 but not 2 (simulationists), it seems better to use a new technical term (e.g. “creatorism”) for plain yes to 1, instead of using a historical term like “theism” that obscures this difference.
Yes. I disagree with them.
(Eliminating the supernatural aspect explains the human mind, and explains away God.)
Disagree with simulatarians about whether or not we are simulated?
Disagree with theists that people have ontologically basic souls; further disagree with the claim that the ‘ontologically basic’ / ‘supernatural’ aspect of a god is unimportant to its definition.
(What theists think is not relevant to a question about the beliefs of people who do not self-identify as theists.)
I feel like there are two independent questions:
1) Does there exist a creator with a mind?
2) Are minds ontologically basic?
I think that accurately factors beliefs into 2 different questions, since there are (I think) very few people who believe that god has an ontologically basic mind yet we do not.
I do not think it is justified to combine these questions together, since there are people who say yes to 1 but not 2, and many many people who say yes to 2 but not 1.
Calling myself an agnostic would put me in an empirical cluster with people who think gods worthy of worship might exist, and possibly have some vague hope for an afterlife (though I know not all agnostics believe these things). I do not think of potential matrix overlords the way people think of the things they connect to the words “God” and “gods”. I think of them as “those bastards that (might) have us all trapped in a zoo.” And if they existed, I wouldn’t expect them to have (real) magic powers, nor to be the creators of a real universe, just a zoo that looks like one. I do not think that animals trapped in a zoo with enclosure walls painted with trees and such to look like a real forest should think of zookeepers as gods, even if they have effectively created the animals’ world, and may have created the animals themselves (through artificial breeding, or even cloning), and I think that is basically analogous to what our position would be if the simulation hypothesis was correct.
Hmm. I was more thinking about a physics simulation by something that is nothing like a human than an ancestor simulation like in Bostrom’s original argument. I think that most people who assign a non-trivial chance to ancestor simulation would assign a non-trivial chance to physics simulation.
I don’t think either variety is very similar to a zoo, but if we were in a physics simulation, I do not think our relationship with our simulators would be anything like an animal-zookeeper relationship.
I also think that you should taboo the word “universe,” since it implies that there is nothing containing it. Whatever it is that we are in, our simulators created all of it, and probably could interfere if they wanted to. They are unlikely to want to now, since they went so long without interfering so far.
It may have once meant that, like the word “atom” once meant “indivisible.” But that’s not how people seem to use it anymore. Once a critical mass of people start misusing a word, I would rather become part of the problem than fight the inevitable.
If you were using the word that way, then it seems they are “creators of a (real) universe.”
Theism usually involves God as the explanation of why the world exists, and why we are conscious. In usual simulation scenarios, a world happens through physics and natural selection etc. And then a copy of part of that world is made. Yes, the copying process “made” the copy, but most explanations of how the copied world is the way it is (from the point of view of those in it) still has to do with physics, natural selection, etc. and not the copying process.
In other words, “who designed our world?” is more relevant than “who created our world?”.
Why are there so few people living past 115?
There’s an annoying assumption that no parent would want their child to have a greatly extended lifespan, but I think it’s a reasonable overview otherwise, or at least I agree that there’s not going to be a major increase in longevity without a breakthrough. Lifestyle changes won’t do it.
I’ve been working on a series of videos about prison reform. During my reading, I came across an interesting passage from wikipedia:
What struck me was how preferable these punishments (except the hanging, but that was very rare) seem compared to the current system of massive scale long-term imprisonment. I would much rather pay damages and be whipped than serve months or years in jail. Oddly, most people seem to agree with Wikipedia that whipping is more “severe” than imprisonment of several months or years (and of course, many prisoners will be beaten or raped in prison). Yet I think if you gave people being convicted for theft a choice, most of them would choose the physical punishment instead of jail time.
I’m reminded of the perennial objections to Torture vs Dust Specks to the effect that torture is a sacred anti-value which simply cannot be evaluated on the same axis as non-torture punishments (such as jail time, presumably), regardless of the severities involved.
There’s a post on Overcoming Bias about this here.
The key quote, “Incarceration destroys families and jobs, exactly what people need to have in order to stay away from crime.” If we had wanted to create a permanent underclass, replacing corporal punishment with prison would have been an obvious step in the process.
Obviously that’s not why people find imprisonment so preferable to torture, though; TheOtherDave’s “sacred anti-value” explanation is correct there. It would be interesting to know exactly how a once-common punishment became seen as unambiguously evil, though, in the face of “tough on crime” posturing, lengthening prison sentences, etc.
Maybe it’s a part of human hypocrisy: we want to punish people, but in a way that doesn’t make our mirror neurons feel their pain. We want people to be punished, without thinking about ourselves as the kind of people who want to harm others. We want to make it as impersonal as possible.
So we invent punishments that don’t feel like we are doing something horrible, and yet are bad enough that we would want to avoid them. Being locked behind bars for 20 years is horrible, but there is no specific moment that would make an external observer scream.
It is, incidentally, not obvious to everyone that the desire to create a stable underclass didn’t play a significant role in our changing attitudes towards prisons… in fact, it’s not even obvious to me, though I agree that it didn’t play a significant role in our changing attitudes towards torturing criminals.
Because corporal punishment is an ancient display of power; the master holding the whip and the servant being punished for misbehavior. It’s obviously effective, and undoubtedly more humane than incarceration, but it’s also anathema to the morality of the “free society” where everyone is supposed to be equal and thus no-one can hold the whip.
(Heck, even corporally disciplining a child is considered grounds to put the kid in foster care; if you want corporal punishment vs. incarceration, that’s a hell of a dichotomy. And for every genuinely abused kid CPS saves, how many healthy families get broken up again?)
The idea is childish and unrealistic, but nonetheless popular because it plays on the fear and resentment people feel towards those above them. And in a democracy, popular sentiment is difficult to defeat.
Don’t look at it from the perp point of view, look at it from an average-middle-class-dude or a suburban-soccer-mom point of view.
If there’s a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don’t know and in the meantime he’s still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
That is the major advantage of prisons over fines and/or physical punishments.
On the other hand, making people spend long periods of time in a low-trust environment surrounded by criminals seems to be a rather effective way of elevating recidivism when they do get out, so the advantage as implemented in our system is on rather tenuous footing.
And of course, the prison system comes with the major disadvantage that imprisoning people is a highly expensive punishment to implement.
I am not arguing that prisons are the proper way to deal with crime. All I’m saying is that arguments in favor of imprisonment as the preferred method of punishing criminals exist.
That’s only an advantage if the expected cost to society of keeping him in prison is less than the expected cost (broadly construed) to society of him keeping on robbing.
The relevant part: “look at it from an average-middle-class-dude or a suburban-soccer-mom point of view”.
They do have political power and they don’t do expected-cost-to-society calculations.
I guess I just hadn’t interpreted “point of view” close enough to literally.
This is totally obvious, I’m not sure why you felt you needed to point that out.
The point of my comment is that it is interesting that prison isn’t viewed as cruel, even though it’s obviously more harsh than alternatives. Obviously there are other reasons people prefer prison as a punishment for others.
well, short of death.
Death is an existential punishment :-/
Dunno about that—peak-end rule.
It’s not about harshness but about the concept of the importance of physical integrity for human dignity.
Isn’t freedom important for human dignity? It seems that any kind of punishment infringes on human dignity to some extent. Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards—something which is widely known.
According to the standard moral doctrine, it’s not as central as bodily integrity. The state is allowed to take away freedom of movement, but not to violate bodily integrity or force people to work as slaves.
That’s a feature of the particular way a prison is run.
There is a “standard moral doctrine”??
Yes, I consider things like the UN charter of human rights the standard moral doctrine.
Video playback speed was mentioned on the useful habits repository thread a few weeks ago, and I asked how I could do the same. YouTube’s playback speed option is not available on all videos. Macs apparently have a plug-in you can download; I don’t own a Mac, so that’s not helpful. You could download the video and then play it back, but that wastes time. I just learned a solution that works across all OSes without the need to download the video first.
Copy the YouTube URL, then press Ctrl+V on the VLC main screen.
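If you’d rather script it, something like the following should work; a minimal sketch, assuming the vlc binary is on your PATH and that your VLC build can still resolve YouTube page URLs (its bundled YouTube parser occasionally breaks when the site changes), with a placeholder URL:

```python
# Launch VLC on a YouTube URL at 1.5x playback speed.
# `--rate` is VLC's playback-speed option; `vlc` is assumed to be on PATH.
import subprocess

url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
subprocess.run(["vlc", "--rate", "1.5", url])
```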
Less Wrong and its comments are a treasure trove of ethical problems, both theoretical and practical, and possible solutions to them (the largest one to my knowledge; do let me know if you are aware of a larger forum for this topic). However, this knowledge is not easy to navigate, especially to an outsider who might have a practical interest in it. I think this is a problem worth solving and one possible solution I came up with is to create a StackExchange-style service for (utilitarian, rationalist) ethics. Would you consider such a platform for ethical questions to be useful? Would you participate?
Possible benefits:
Making existing problems and their answers easier to navigate through the use of tagging and a stricter question-answer format.
Accumulation of new interesting problems.
The closest I have found is http://philosophy.stackexchange.com/questions/tagged/ethics, which doesn’t appear to be very active and it being a part of a more traditional philosophy forum might be a hindrance.
Edit: a semi-relevant example.
An interesting concept I haven’t seen mentioned on LW before: deconcentration of attention.
Seems slightly pseudosciencey, but perhaps valuable.
This is a game I like to play with myself, actually. I sit and observe my surroundings, consciously removing labels from the objects in my visual field until it’s clear that everything is one big continuity of atoms. It’s fun and brings back for me that childlike feeling of seeing things for the first time again. I have to be in the right frame of mind to do it, and it’s much harder in a man-made environment (where everything is an object) than in nature.
But I’ve never had a word for it before, so thanks.
Actually, I’d be interested to hear what other mental games LWers play to amuse themselves.
Some more games I play:
‘Fly arounds,’ where I visualize my perspective moving around the room, zooming out of the walls of the building I’m in, and exploring/getting new views on places I know. It’s fun to ‘tag’ an imaginary person and see what their perspective moving through an average day would be.
‘People watching,’ where I pick a person walking by and try to read their actions and relationships with the people they’re with. They then get a full backstory and life.
‘Contingency.’ What would happen if a car drove through the door right now/that guy pulled a gun/I suddenly realized that I am actually Jason Bourne? This xkcd puts it best.
I feel that these last two are pretty common.
I have a half-written post about the cultural divisions in the environmentalist movement that I intend to put on a personal blog in the nearish future. (Tl;dr: there are “Green” groups who advocate different things in a very emotional/moral way vs. “scientific” environmentalists.)
I’ve been thinking about comparisons between the structure of that movement and how future movements might tackle other potential existential risks, specifically UFAI. Would people be interested in a post here specifically discussing that?
If you haven’t yet read Neal Stephenson’s Zodiac, I recommend it.
As an aside, I find it convenient to think of a significant part of environmentalism as a purely religious movement.
That’s a good analogy. By recycling plastic bottles you are displaying your virtue, whatever the extent of the practical consequences.
Is there anything you’ve learnt that’s specific to groups trying to tackle x-risk? If not, you could just make a post describing what you’ve learnt about groups that challenge big problems. Generality at no extra cost.
Political and social movements as a whole are so massive and varied that I don’t think I could really give much non-trivial analysis. I’m not sure there’s really a separate category of ‘big problem’ that can be separated out, all movements think their problem is big, and all big problems are composed of smaller problems.
I make the comparison between UFAI and environmentalism because it’s probably the only major risk presently in the public consciousness,* so it provides a model of how people will act in response. E.g. the solutions that technical experts favour may not be the ones that the public supports, even if they agree on the problem.
*A few decades ago nuclear weapons might have also been analogous, but, whether correctly or not, the public perception of their risk has diminished.
Yes. As I see it, a lot of Greens are misanthropes. Do you cover this aspect?
From what I can tell, it’s actually a teeny-tiny number of people, but they get disproportional media coverage for reasons that should be obvious considering the interests of those doing the covering.
FWIW, while I’ve not met many misanthropic greens in real life, about half of the greens I’ve met on the Internet range from mildly to extremely misanthropic.
Sometimes the whole internet seems to be filled by misanthropic people, so I am not sure how much evidence this is about misanthropy of greens.
I wouldn’t say misanthropic, maybe more a matter of scope insensitivity and an overromanticised view of the ‘natural’ state of the world. But I think they genuinely believe it would make humans better off, whereas truly misanthropic greens wouldn’t care.
Just thinking… could it be worth doing a website providing interesting parts of settled science for laypeople?
If we take the solid, replicated findings, and remove the ones that laypeople don’t care about (because they have no use for them in everyday life)… how much would be left? Which parts of human knowledge would be covered most?
I imagine a website that would first provide a simple explanation, and then a detailed scientific explanation with references.
Why? Simply to give people the idea that this is science that is useful and trustworthy—not the things that are too abstract to understand or use, and not some new hypotheses that will be disproved tomorrow. Science, as a friendly and trustworthy authority. To get some respect for science.
Wikipedia seems close enough to what you’re describing… and improving Wikipedia (plenty of science pages are flagged as “this is hard to understand for non-specialists”) seems like the easiest way to move it closer.
Wikipedia contains millions of topics, so the subset of “settled science” is lost among them. Creating a “Settled Science” portal could be an approximation.
As an example of where my idea differs from the Wikipedia approach: the Wikipedia Science portal displays a link to an article about Albert Einstein. Yes, Albert Einstein was an important scientist, but his personal biography is not science. So one difference would be that the “settled science encyclopedia” would not include Einstein or any other scientist (except among the references). Only the knowledge, which could also be used on a different planet with a different history and different names and biographies of the scientists.
Also, on Wikipedia you have a whole page about a topic. Some parts of the page may be settled science, other parts are not; but both parts are on the same page, in the same encyclopedia. It would be cognitively easier for a reader to know: “if it is on SettledScienceEncyclopedia.com, it is settled science.”
EDIT: I agree that improving scientific articles on Wikipedia, not just making them more correct but also more accessible to the wider public, is a worthy goal.
It could be worth doing but it’s a hard task.
Take a subject like evolution. The fact that evolution happens has been settled science for a long time. On the other hand, if you take a school book on evolution that was written 30 years ago, there’s a good chance it has examples of how one species is related to another species that got overturned when we got genome data.
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they’re advisers rather than leaders.
That attitude is mostly gone now, but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully participate in the process, so keeping them out of the way and deferential is going to be more valuable than trying to involve them. There seems to be a J curve between 100% scientific literacy and old-school Science-ism, and it seems to me at least that climbing back up to an elitist position is the option most likely to actually work in our lifetimes.
If anything, the more easily laypeople can lay their hands on scientific materials, the worse the situation is; the Dunning-Kruger effect and a lack of actual scientific training / mental ability mean that laypeople are almost certain to misinterpret what they read in ways which disagree with the actual scientific consensus. Just look at the huge backlash against biology and psychometrics these days; most of the people I’ve argued with in person or online have no actual qualifications but feel entitled to opinions on the issues because they stumbled through an article on PubMed and know the word “methodology”.
Is this true? It pattern matches to a generic things-were-better-in-the-old-days complaint and I’m not sure how one would get a systematic idea of how much people trusted science & scientists 100-200 years ago.
(Looking at the US, for instance, I only find results from surveys going back to the late 1950s. Americans’ confidence in science seems to have fallen quite a lot between 1958 and 1971-2, probably mostly in the late 1960s, then rebounded somewhat before remaining stable for the last 35-40 years. I note that the loss of trust in science that happened in the 1960s wasn’t science-specific, but part of a general loss of confidence experienced by almost all institutions people were polled about.)
Citizen science seems like evidence against this idea.
I disagree. I strongly disapprove of treating scientists as high priests of mystical higher knowledge inaccessible to mere mortals.
The average science PhD is two standard deviations out from the population mean in terms of intelligence, has spent ~8-10 years learning the fundamental background required to understand their field, and is deeply immersed in the culture of science. And these are the ‘newbs’ of the scientific community; the scrappy up-and-comers who still need to prove themselves as having valuable insights or actual skills.
So yes, for all practical purposes the barrier to genuine understanding of scientific theories and techniques is high enough that a layman cannot hope to have more than a cursory understanding of the field.
And if we want laymen to trust in a process they cannot understand, the priest is the archetypal example of mysterious authority.
First, there is no logical connection between your first paragraph and the second one and I don’t see any reason for that “so, yes”.
Second, that claim is, ahem, bullshit. I’ll agree that someone with low IQ “cannot hope to have more than a cursory understanding”, but for such people this statement is true for much more than science. High-IQ laymen are quite capable of understanding the field and, often enough, pointing out new approaches which have not occurred to any established scientists because, after all, that’s not how these things are done.
No, I don’t want laymen to trust in a process they cannot understand.
How high is “high-IQ” and how low is “low IQ” in your book?
Someone with an above-average IQ of 115-120, like your average undergrad, visibly struggles with 101 / 201 level work and is deeply resistant to higher-level concepts. Actually getting through grad school takes about a 130, as previously mentioned, and notable scientists tend to be in the 150+ range. So somewhere from 84-98% of the population is disqualified right off the bat, with only the top 2-0.04% capable of doing really valuable work.
And that’s assuming that IQ is the only thing that counts; in actuality, at least in the hard sciences, there is an enormous amount of technical knowledge and skill that a person has to learn to provide real insight. I cannot think of a single example in the last 50 years which fits your narrative of the smart outsider coming in and overturning a well-established scientific principle, although I would love to hear of one if you know any.
So no more trusting chemotherapy to treat your cancer? The internet to download your music, or your iPod to play it? A fixed wing aircraft to transport you safely across the Atlantic? Must be tough even just driving to work, now that your car is mostly computer-controlled and made of materials with names that sound like alphabet soup.
Almost every aspect of modern life, even for a polymathic genius, is going to be at least partially mysterious; the world of our tools and knowledge is far too complex for the human mind to fully grasp.
Where did you get those numbers?
Not reality. 41% of people in the US were enrolled in college in 2010 (Source). If we assume that the US has a representative IQ distribution and use an SD-15 IQ scale, then the top 41% of IQs are all people with an IQ of at least 103.41. I calculated the average IQ of the top 41% of the population on Wolfram Alpha. (It is easy, because by definition, IQ follows a normal distribution.) I got 114.2.
If US citizens between 18 and 24 are representative of the entire population in terms of IQ, it is literally impossible for the average IQ of an undergrad student to be 115 or higher.
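For anyone who wants to check this arithmetic without Wolfram Alpha, here is a minimal sketch of the same truncated-normal calculation, assuming IQ ~ N(100, 15) and the 41% enrollment figure above:

```python
# Cutoff IQ and mean IQ of the top 41% of a N(100, 15) distribution.
from scipy.stats import norm

mean, sd, top = 100.0, 15.0, 0.41

z_cut = norm.ppf(1 - top)   # z-score at the 59th percentile, ~0.2275
iq_cut = mean + sd * z_cut  # ~103.41, the cutoff quoted above

# Mean of the upper tail of a standard normal: E[Z | Z > z] = pdf(z) / P(Z > z).
iq_top_mean = mean + sd * norm.pdf(z_cut) / top  # ~114.2, matching the comment
print(round(iq_cut, 2), round(iq_top_mean, 1))
```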
Hmm. I’m not 95% confident of the number I gave, but I haven’t been able to turn up anything disconfirming.
I did a bunch of research on the heritability of IQ last year for a term paper, and I repeatedly saw the claim that university students tend to be 1 SD above the local population mean, although that may not apply in a place with more liberal admissions practices like the modern US. More research below, and I’ll edit in some extra stuff tomorrow when my brain isn’t fried.
Some actual data here (IQs estimated from SAT scores, ETS data as of 2013)
Surprisingly, at least looking at science / engineering / math majors, it looks like people are smarter than I would have guessed; Physics majors had the highest average at 133, with Psychology majors pulling up the rear at 114, and most of them are clustered around 120-130. For someone who deals with undergrads, that is frankly shockingly high.
Outside of the sciences, even the “dumbest” major, social work, managed a 103, and a lot of the popular majors are in the 105-115 range. Another big surprise here, too: Philosophy majors are really damn bright, with a 129 average, right up under Math majors. Never would have guessed that one.
Still, it’s obvious that the 115-120 figure I gave was overly optimistic. Once I look at some more data I will amend my initial post so that it better reflects reality.
Naive hypothesis: Given the Flynn effect, and that college students are younger than the general population, could that explain the difference? That Coscott’s conditional “If US citizens between 18 and 24 are representative of the entire population in terms of IQ” is false?
IQ tests are at least supposed to be normed for the age group in question, in order to eliminate such effects, but I don’t know how it’s done for the estimates in question.
I think that is likely.
I don’t have specific ranges in mind, but I think I’d call grad-student level sufficiently high-IQ.
Not necessarily overturning a principle, but rather opening up new directions to expand into. How about Woz, Jobs, Gates, all that crowd? They were outsiders—all the insiders were at IBM or, at best, at places like Xerox PARC.
Of course, but you don’t trust a process you don’t understand. You trust either people or the system built around that process. If your doctor gives you a pill to take, you trust your doctor, not the biochemistry which you don’t understand. If you take a plane across the Atlantic, you trust the system that’s been running commercial aviation for decades with the very low accident rate.
They were outsiders in business, not in science. It’s not like Gates never learned math at school and then miraculously proved Fermat’s Last Theorem in his dreams. It’s more like he took mostly someone else’s work, made a few smart business decisions, and became extra rich.
It’s impractical for every single person to understand every single scientific theory. Even the domain of ‘settled science’ is far larger than anyone could hope to cover in their lifetime.
It’s true that scientific authority is no substitute for evidence and experiment, but as Eliezer pointed out in one of his posts (I can’t find the link right now), it’s not like scientific authority is useless for updating beliefs. If you have to make a decision, and are stuck choosing between the scientific consensus opinion and a random coin toss, the scientific consensus opinion is a far, far better choice, obviously.
‘Trust’, in this context, doesn’t mean 100% infallible trust in scientific authority. If you take the alternative route and demand that everyone be knowledgeable in everything they make choices about, you wind up in situations like the current one we’re having with climate change, where scientists are pretty much screaming at the top of their lungs that something has to be done, but it’s falling on deaf political ears, partly because of the FUD spread by anti-science groups casting doubt on the scientific consensus opinion.
Funny that you mention that.
I consider myself a reasonably well educated layman with a few functioning brain cells. I’ve taken an interest in the global warming claims and did a fair amount of digging (which involved reading original papers and other relevant stuff like Climategate materials). I’ll skip through all the bits not relevant to this thread but I’ll point out that the end result is that my respect for “climate science” dropped considerably and I became what you’d probably describe as a “climate sceptic”.
Given the rather sorry state of medical science (see Ioannidis, etc.), another area I have some interest in, I must say that nowadays when people tell me I must blindly trust “science” because I cannot possibly understand the gnostic knowledge of these high priests, well, let’s just say I’m not very receptive to this idea.
Regardless of whether you personally agree with the consensus on climate change, the fact is that most politicians in office are not scientists and do not have the requisite background to even begin reading climate change papers and materials. Yet they must often make decisions on climate change issues. I’d much prefer that they took the consensus scientific opinion rather than making up their own ill-formed beliefs. If the scientific opinion turns out to be wrong, I will pin the full blame on the scientists, not the decision makers.
And, as I’m saying, this generalizes to all sorts of other issues. I feel like I’m repeating myself here, but ultimately a lot of people find themselves in situations where they must make a decision based on limited information and intelligence. In such a scenario, often the best choice is to ‘trust’ scientists. The option to ‘figure it out for yourself’ is not available.
In general I would agree with you. However, as usual, real life is complicated.
The debate about climate has been greatly politicized and commercialized. Many people participating in this debate had and have huge incentives (political, monetary, professional, etc.) to bend the perceptions in their favor. Many scientists behaved… less than admirably. The cause has been picked up (I might even say “hijacked”) by the environmental movement, which desperately needed a new bogeyman, a new fear to keep the money flowing. There has been much confusion—some natural and some deliberately created—over which questions exactly are being asked and answered. Some climate scientists decided they’re experts on economics and public policy, and their policy recommendations are “science”.
All in all it was and is a huge and ugly mess. Given this reality, “just follow the scientific consensus” might have been a good prior, but after updating on all the evidence it doesn’t look like a good posterior recommendation in this particular case.
Imagine you have something like this back in 1900.
Do you remember how settled it was that the Universe’s expansion is slowing down? The only thing that wasn’t settled was the rate of slowing: whether it was big enough for the expansion to stop one day and reverse. That was 20 years ago.
Even now, people debate the Big Bang, which was settled long ago.
I am not saying your idea isn’t good. It is, but controversy is inevitable.
What would this do that Wikipedia and encyclopaedias don’t do?
Wikipedia contains plenty of scientific claims that are open to be overturned by new experiments.
I am sitting on an unpublished and (depending on how much I want to do) potentially almost complete puzzle game, thus far entirely my own work, and I need to decide what to do with it. I wrote most of it starting almost 4 years ago, mostly stopping a year after that, as a way to teach myself to program. I’ve revisited it a few times since then, performing lots of refactoring and optimization as my coding skills improved, and implementing a couple of new ideas as I thought them up.

Currently the game mechanics are pretty polished; with a few weeks of bug fixes I would say publishable. I’ve made and tested 40 levels. Because they are short, I would like to make 2 or 3 times as many before publishing. I estimate that this would take several months at the rate I am currently able to devote free time to it.

Lastly, the artwork, sound effects, and music are sorely lacking. I would need to commission an artist skilled at 3D modeling, rigging, skinning, and animation to make at least 2 human models (1 male, 1 female) and one giant spider model, with about 20 animations each (the human models can share skeletons and animations). I could use something like this for music, and something like this for sound effects. The code is already in place to play sound and music.

I have written a complicated storyline, but I am not confident it is good writing. I have not gotten a million words of bad fiction out of the way. Integrating it into the game would take a lot of coding time (though I have laid some of the groundwork already), and I think it might be better to make it Yet Another Puzzle Game With No Storyline. If I were to include it, I estimate it would take 9 months at my current rate of time spent on this project. I would also want to make a tutorial out of several intro levels (with temporary overlays like “Press these keys to run”).

It’s using the Unity game engine (currently the free version), meaning I can publish to quite a lot of platforms without much work.
I would like to get the opinion of someone with relevant knowledge, whether it is worth trying to sell this, and how much further work I should put into it first (funging against finishing grad school in computer engineering faster, and ultimately either hardware engineering work for some big corporation plus high-risk, high expected dollar investment on the side (if I can learn to do it well), or working in startups directly). I’m mostly optimizing for expected dollars, because after I ensure a comfortable enough existence for myself (I don’t intend to have kids) I want to use the rest for effective altruism.
I can provide an alpha version of the game or partial storyline notes on request.
My friend made an extremely simple Unity game (with nice graphics and music), added AdMob advertising, put an Android version on Google Play as a free game, and gets about 20 dollars a month (over the recent half-year, and the number seems stable). That’s the only data point I have.
I suppose your game would be better (but I don’t really know what the players value), so… let’s make a wild guess that it could make 50 dollars a month during the following 5 years. That’s like 5×12×50 = 3000 dollars total. Maybe! If you need 9 months to finish it (beware the planning fallacy!), it is 300 dollars per month of work. I don’t know how much time during the month you would spend coding. Discounting for the planning fallacy and the uncertainty of outcome, let’s make it, say, 100 dollars per month of work.
Of course, besides money you get some additional benefits such as feeling good and having a nice item in your portfolio (probably irrelevant for most jobs you consider).
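For what it’s worth, the back-of-envelope above reduces to a few lines; every number here is a wild guess carried over from that estimate, not data:

```python
# Back-of-envelope payoff estimate; all inputs are guesses from the comment above.
monthly_revenue = 50                # assumed dollars/month
total = monthly_revenue * 12 * 5    # over 5 years -> 3000 dollars
months_of_work = 9                  # estimated time to finish (planning fallacy!)
per_month = total / months_of_work  # ~333 dollars per month of work
discounted = per_month / 3          # uncertainty haircut, roughly the ~100 above
print(total, round(per_month), round(discounted))
```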
If the payoff is that low, it’s not worth working on the storyline (which is what would take 9 months). I’m already making a decent wage as a TA. It could still be worth publishing roughly as-is. But I’m hoping I can get away with publishing to PC/Mac/Linux and charging a few dollars per player.
You can publish it on Google Play now, as it is… and if you later decide to, edit the storyline, add a level or two, and sell it on PC later.
The advantage is that a) you get some money now, and b) when the final version is ready, you will already have a few fans, which will be more likely to buy it. (Another advantage is that if your game has some bugs or other problems, you can use the feedback to polish the game before you start charging players. I suspect a paying customer will be more angry about bugs.)
From what you say, it sounds like it would be quite a while before ad revenue from a free game would pay back what I spent on commissioning 3D artists.
An ad banner like AdMob’s would interfere with gameplay quite a lot. The control scheme is designed for a full keyboard (but would work well with a game controller with joysticks). It would take significant work to translate it to a tablet screen (a cell phone screen is definitely too small). Maybe this kind of annoyance would be a feature if I were trying to sell an ad-free full version alongside it, but my game is complicated and I expect it will take some getting into, and I think this would just drive most people away and earn it 1-star ratings.
I’m not that worried about bugs that would significantly damage the user experience in gameplay. I’ve been playing it for a while myself (until Minecraft, it was my favorite game to play while listening to debates). The remaining few bugs are basically just results of things I’ve added recently, like smooth camera transitions when you’re playing as a spider and you crawl on a wall (which has caused the camera to wiggle a little bit under some conditions; I think it’s due to numerical instability in the way I made it rotate to follow the character).

The bugs I would expect to take time to fix are the ones that only show up on platforms other than the one I’ve played on (PC), and I can find those by looking through the ways my game interacts with the operating system (saving user-created files, loading them, browsing for them, changing screen resolution, accessing preferences files). It’s not necessary to play through the game to find them. The outside view says “There will be more bugs than you expect, and it doesn’t take much to ruin user experience.” To which I respond that I have published software before (not software that I own, but software I developed during internships), that I have some feel for how bad bugs popping up is, and that I would take that “feel” into account when testing thoroughly on different platforms before release; I don’t expect that to take more than a couple of weeks.
The gaining-fans thing is a good and important point. I might be able to do that with a Humble Indie Bundle, which has the advantage of a pretty well accepted precedent that the giveaway ends when the bundle ends; you don’t have to create a “deluxe” version of the game to justify it not being free anymore.
As far as feedback about things besides bugs (level difficulty is a concern), I bet I can find people willing to test a beta version and give feedback for the privilege of playing it early, or (at worst) in return for playtesting their own games, if I ask around at my school’s gamebuilders club, whose meeting I’m planning to attend next week to demo my game and get their opinion on the same question I asked here (“how viable do you think this is commercially?”). I have looked at the games they are making online. They appear to be a lot less complicated and polished than mine, and would not take much work to play to whatever extent they might expect in return for playing some levels of mine. I have played many games and never sent an email to a developer giving them feedback, so I wouldn’t expect much feedback if I just published the game, even if I included a message saying “please send feedback.”
It’s not going to be worth spending nine months making a complicated storyline that players will press A to skip. Save it for an RPG.
What would be worth doing, if you can do it well, is to take elements of a storyline that set a tone, and integrate them into the game to provide a unique setting (e.g. Braid, The Binding of Isaac). But don’t do a convoluted plot that pops up between levels.
I think I will take this advice. I have code to let the player read “memories” of other characters scattered throughout the levels, which I can provide a little text for. And I like my backstory and setting more than I like the story that I came up with for the player to play through.
Much to my surprise, Richard Dawkins and Jon Stewart had a fairly reasonable conversation about existential risk on the Sept. 24, 2013 edition of The Daily Show. Here’s how it went down:
STEWART: Here’s my proposal… for the discussion tonight. Do you believe that the end of our civilization will be through religious strife or scientific advancement? What do you think in the long run will be more damaging to our prospects as a human race?
In reply, Dawkins says Martin Rees (of CSER) thinks humanity has a 50% chance of surviving the 21st century, and one cause for such worry is that powerful technologies could get into the hands of religious fanatics. Stewart replies:
STEWART: …[But] isn’t there a strong probability that we are not necessarily in control of the unintended consequences of our scientific advancement?… Don’t you think it’s even more likely that we will create something [for which] the unintended consequence… is worldwide catastrophe?
DAWKINS: That is possible. It’s something we have to worry about… Science is the most powerful way to do whatever you want to do. If you want to do good, it’s the most powerful way to do good. If you want to do evil, it’s the most powerful way to do evil.
STEWART: …You have nuclear energy and you go this way and you can light the world, but you go this [other] way, and you can blow up the world. It seems like we always try [the blow up the world path] first.
DAWKINS: There is a suggestion that one of the reasons that we don’t detect extraterrestrial civilizations is that when a civilization reaches the point where it could broadcast radio waves that we could pick up, there’s only a brief window before it blows itself up… It takes many billions of years for evolution to reach the point where technology takes off, but once technology takes off, it’s then an eye-blink — by the standards of geological time — before...
STEWART: …It’s very easy to look at the dark side of fundamentalism… [but] sometimes I think we have to look at the dark side of achievement… because I believe the final words that man utters on this Earth will be: “It worked!” It’ll be an experiment that isn’t misused, but will be a rolling catastrophe.
DAWKINS: It’s a possibility, and I can’t deny it. I’m more optimistic than that.
STEWART: … [I think] curiosity killed the cat, and the cat never saw it coming… So how do we put the brakes on our ability to achieve, or our curiosity?
DAWKINS: I don’t think you can ever really stop the march of science in the sense of saying “You’re forbidden to exercise your natural curiosity in science.” You can certainly put the brakes on certain applications. You could stop manufacturing certain weapons. You could have… international agreements not to manufacture certain types of weapons...
And then the conversation shifted back to religion. I wish Dawkins had mentioned CSER’s existence.
And then later in the (extended, online-only) interview, Stewart seemed unsure as to whether consciousness persisted after one’s brain rotted, and also unaware that 10^22 is a lot bigger than a billion. :(
Jon’s what I call normal-smart. He spends most of his time watching TV, mainly US news programs, which are quite destructive to rational thinking, even when the purpose is comedic fodder and uncovering hypocrisy. He’s very tech-averse, letting the guests he has on the show come in with information he might use, and trusting his (quite good) intuition to fit things into reality. As such, I like to use him as an example of how more normal people feel about tech/geek issues.
Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it’s a very time restricted format, and I’ve only seen him fact check on the fly once (Google, Wikipedia).
The number thing was at least partly a joke, along the lines of “bigger than 10 doesn’t make much sense to me”—scope insensitivity humor. I’ve done similar before.
I’m beginning to think that we shouldn’t be surprised by reasonably intelligent atheists having reasonable thoughts about x-risk. Both of the two reasonably intelligent, non-LWer atheists I talked to in the past few weeks about LW issues agreed with everything I said on them and said that it all seemed sensible and non-surprising. Most LW users started out as reasonably intelligent atheists. Where did the “zomg everyone is so dumb and only LW can think” meme originate from, exactly? Is there any hard data on this topic?
The Relationship Escalator—an overview of assumptions about relationships, and exceptions to the assumptions. The part that surprised me was the bit about the possibility of dialing back a relationship without ending it.
Poll Question: What communities are you active in other than Less Wrong?
Communities that you think are closely related to Less Wrong are welcome, but I am also wondering what other completely unrelated groups you associate with. How do you think such communities help you? Are there any that you would recommend to an arbitrary Less Wronger?
Contra dance. Closely correlated with LessWrong; also correlated with nerdy people in general. I would recommend it to most LessWrongers; it’s good even for people who are not generally good at dancing, or who have problems interacting socially. (Perhaps even especially for those people; I think of it as a ‘gateway dance.’)
Other types of dance, like swing dance. Also some correlation with LessWrong, somewhat recommended but this depends more on your tastes. Generally has a higher barrier to entry than contra dancing.
I did that for a while. It was popular at mathcamp so I started, but I haven’t done it recently. Maybe I’ll start again.
I’m going to second Contra Dance. It’s really fun and easy to start while having a decent learning curve such that you don’t hit a skill ceiling fast. Plus you meet lots of people and interact with them in a controlled, friendly, cooperative fun fashion.
I am actually planning on having a contra dance at my wedding.
My local hackerspace, and broadly the US and European hacker communities. This is mainly because information security is my primary focus, but I find myself happier interacting with hackers because in general they tend not only to be highly outcome-oriented (i.e., inherently consequentialist), but also pragmatic about it: as the saying goes, there’s no arguing with a root shell. (Modulo bikeshedding, but this seems to be more of a failure mode of subgroups that don’t strive to avoid that problem.) The hacker community is also where I learned to think of communities in terms of design patterns; it’s one of the few groups I’ve encountered so far that puts effort into that sort of community self-evaluation. Mostly it helps me because it’s a place where I feel welcome, where other people see value in the goals I want to achieve and are working toward compatible goals. I’d encourage any instrumental rationalist with an interest in software engineering, and especially security, to visit a hackerspace or attend a hacker conference.
Until recently I was also involved in the “liberation technology” activism community, but ultimately found it toxic and left. I’m still too close to that situation to evaluate it fairly, but a lot of the toxicity had to do with identity politics and status games getting in the way of accomplishing anything of lasting value. (I’m also dissatisfied with the degree to which activism in general fixates on removing existing structures rather than replacing them with better ones, but again, too close to evaluate fairly.)
The only two communities I am currently active in (other than career/family communities) are Less Wrong and Unitarian Universalism.
In the past I had a D&D group that I participated in very actively. I think that the people I played D&D with in high school had a very big and positive effect on my development.
I think I would like to, and am likely to, develop a local community of people to play strategy board games with in the future.
Do you mean online communities or IRL?
Both
I’m active in UK competitive debating (mainly real life, but I also run some discussion forums).
[Good question. It’s interesting to see the variety of people’s responses.]
I’m pretty active in lots of social activist/environmentalist/anarchist groups. I sometimes join protests for recreational reasons.
Could you give examples?
I’m active in Toastmasters and martial arts (mostly the community of my specific school). Overall, Toastmasters seems pretty effective at its stated goals of improving public speaking and leadership skills. It’s also fun (at least for me). Additionally, both force me to actually interact with other people, which is nice and not something that the rest of my life provides.
I’m active in (though not really a member of) the “left-libertarian” community, associated with places like Center for a Stateless Society (though I myself am not an anarchist) and Bleeding Heart Libertarians. I’m also a frequent reader and occasional commenter on EconLog.
Less related, I’m an active poster on GameFAQs and on a message board centered around the Heroes of Might and Magic game series.
I also used to be active on GameFAQs. For about a year in 2004 it was most of my internet activity, specifically the Pikmin boards. That was a long time ago though when I was a high school freshman.
Orthogonal to LW, I’m very active in my university’s Greek community, serving as VP of a fraternity. It’s been excellent social training and I’ve had a very positive experience.
I was wondering if anyone had any opinions/observations they would be willing to share about Unitarian Universalism. My fiancee is an atheist and a Unitarian Universalist, and I have been going to congregation with her for the last 10 months. I enjoy the experience. It is relaxing for me, and a source of interesting discussions. However, I am trying to decide if my morality has a problem with allying myself with this community. I am leaning towards no. I feel like they are doing a lot of good by providing a stepping stone out of traditional religion for many people. I am, however, slightly concerned about what effect this community might have on my future children. I would love to debate this issue with anyone who is willing, and I think that would be very helpful for me.
The UU “Seven Principles and Purposes” seem like a piece of virtue ethics. If you don’t mind this particular brand of it, then why not.
If you discard the ornamental fluff in this “philosophy” (the principles are quoted on Wikipedia) and “focus on making this life better for all of us”, then it’s as good a guideline as any.
As I said in responding to another comment, this is the part of UU that I relate to. However, the problem is that while UUs might be slightly above average in rationality, “we can use reason when we can” means that beliefs come from thinking for yourself as opposed to reading, e.g., the Bible, and the stuff they come up with by thinking for themselves is usually not all that great by my standards. I am worried that I am giving UU too much credit because they happen to use the word “reason,” when in reality they mean something very different from what I mean.
They are just humans, aren’t they? I am afraid that at this moment it is impossible to assemble a large group of people who would all think on LW-level. Not including obvious bullshit, or at least not making it a core of group beliefs, is already a pretty decent result for a large group of humans.
Perhaps one day CFAR will make a curriculum that can replicate rationality quickly (at least in suitable individuals), and then we can try to expand rationality to the mass level. Until then, having a group without obviously insane people in power is probably the best you can get.
You already reflected on this, so just: don’t emotionally expect what is not realistic. They are never going to use reason as you define it. But the good news is that they will not punish you for using reason. Which is the best you can expect from a religious group.
I found this comment very helpful. Thanks.
You inspired me to google whether there are UUs in Slovakia. None found, although there are some in neighboring countries: the Czech Republic and Hungary.
I wonder whether it would be possible to create a local branch here, to draw people who just want to feel something religious but don’t want to belong to a strict organization away from Catholicism (which in my opinion has huge negative impacts on the country). There seem to be enough such people here, but they are not organized, so they usually stay within the churches of their parents.
The problem is, I am not the right person to start something like this, because I don’t feel any religious need; for me the UU would be completely boring and useless. I am not sure if I could pretend interest at least for long enough to collect a group of people, make them interested in the idea, put them into contact with neighboring UUs, and then silently sneak away. ;-)
Also, I suspect that religion is not about ideas, but about organized community. (For example, the only reason you are interested in UU is that your fiancee is. And your fiancee probably has similar reasons, etc.) Starting a new religious community where no support exists would need a few people willing to sacrifice a lot of time and work—in other words, true believers. Later, when the community exists, further recruitment should be easier.
Well, at least this is the first social engineering project I feel I could have a higher than 1% chance of pulling off, if I decided to. (Level 3 on the Yudkowsky Ambition Scale, in a local scope?)
Here are some things you should know:
Unitarian Universalism is different from Unitarianism. UU is basically a spin-off of Unitarianism from when they combined with Universalism in 1961 in North America. As a result, there are very few UU churches outside of NA.
Unitarianism is on average more Christian than UU, and there are some UU congregations that also have a Christian slant (the one I was talking about is not one of them). I have also heard that some UU churches are considerably more tolerant of everything other than Christianity than they are of Christianity (probably because their members were escaping it). The views change from congregation to congregation because they are decided bottom-up by the local congregants.
The UUA has free resources, such as transcribed sermons you could read, for people who wanted to start a congregation.
I think I gain some stuff from it that is not directly from my fiancee. I don’t know if it is enough to continue going on my own. It is a community that roughly follows strategy 1 of the belief signalling trilemma, which I think is nice to be in some of the time. The sermons are usually way too vague, but have produced interesting thoughts when I added details to them on my own and then analyzed my version. There is also (respectful) debating, which I think I find fun regardless of who I am debating with. I like how it enables people to share significant highs or lows in their life, so the community can help them. There are pot-lucks and game nights, and courses on philosophy and religions. There is also singing, which I am not so crazy about, but my fiancee loves.
What do you mean and what do they mean by “reason”? If you are not sure, maybe it’s something to ask at the next meeting.
They are reaching many of the wrong conclusions. I think this might be because their definition of “use reason” is just to think about their beliefs, which is not enough. When I say “use reason,” I mean thinking about my beliefs in a specific way. That specific way is something that I think a lot of us on Less Wrong roughly have in common, and it would take too long to describe all its parts now. To point out a specific example, one UU said to me, “There are some mysteries we can never get answers to, like what happens when we die,” and then later, “I am a firm believer in reincarnation, because I have had experiences where I felt my past lives.” I never questioned that she had those experiences, but I argued a bit and was able to get her to change her first statement, since her reincarnation experiences were evidence against it, which I thought was an improvement. However, not noticing how contradictory these beliefs were is not something I would call “reason.”
Perhaps what is bothering me is a difference in cognitive ability, and UU’s version of “reason” is as much as I can expect from the average person. Or perhaps these are people who are genuinely interested in being rational, and would be very supportive of learning how, but have not yet learned. It could also be that they just want to say that they are using “reason.”
Do you guys discuss Effective Altruism? It could be one way to inject a bit more reason.
Not much. That is a good idea. I was considering hosting a workshop on rationality through the church. If I ever go through with it, that will probably be part of it. My parents’ UU church had a class on what QM teaches us about theology and philosophy.
I’m not really invested enough in the question to debate it, but I know plenty of atheists (both with and without children) who are active members of UU churches because they get more of the things they value from a social community there than they do anywhere else, and this seems entirely sensible to me. What effects on your future children are you concerned about?
I am concerned that they will treat supernatural claims as reasonable. I consider myself rational enough to be able to put up with some of the crazy stuff many UU individuals believe (beliefs not shared by the community). I am worried that my children might believe them, and even more worried that they might not look at beliefs critically enough.
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don’t.
If you live in the United States, the odds are high that your child’s school will do the same thing.
My suggestion would be that you teach your children how to operate sensibly in such an environment, rather than try to keep them out of such environments, but of course parenting advice from strangers on the Internet is pretty much worthless.
I actually do not think that is true. They will treat supernatural claims as reasonable, but would not condemn me for not treating them as reasonable. They might condemn me for being avoidably rude, but I don’t even know about that.
We actually plan on homeschooling, but that is not for the purpose of keeping kids out of an insane environment as much as trying to teach them actually important stuff.
I do, however, agree with your advice.
If your elementary-schooler goes around insistently informing the other little kids that Santa isn’t real, you will likely be getting an unhappy phone call from the school, never mind the religious bits that the adults actually believe.
Good thing we are homeschooling then!
What’s your moral system? If you get value from the community it’s probably more moral to focus your efforts on donating more for bed nets than on the effect that you have on the world through being a member of that community.
Wouldn’t it be nice if I understood that?
I don’t think it is productive to judge whether anything is moral by comparing it to working for money for bed nets. Almost everything fails that comparison.
I think I might have made a mistake in saying this was a moral issue. I think it is more of an identity issue. I think the consequences for the world of my being a Unitarian are minimal. Most of the effect is on me. I think the more accurate questions I am trying to answer are:
Are Unitarians good under my morals? Do their shared values agree with mine enough that I should identify as being one?
I think the reason this is not an instrumental issue for me, but rather an epistemic one, is that I believe the fact that I will continue to go to congregation is already decided. It is a fun bonding time which sparks lots of interesting philosophical discussion. If I were not in my current relationship, I would probably bring that question back on the table.
I realize that this does not change the fact that the answer is heavily dependent on my moral system, so I will try to comment on that with things that are specific to UU.
I generally agree with the 7 principles of UU, with far more emphasis on “A free and responsible search for truth and meaning.” However, these principles are not particularly controversial, and I think most people would agree with most of them. The defining part of UU, I think, is the strategy of “Let’s agree to disagree on the metaethics and metaphysics, and focus on the morals themselves which are what matters.” I feel like this could be a good thing to do some of the time. Ignore the things that we don’t understand and agree on, and work on making the world better using the values we do understand and agree on. However, I am concerned that perhaps the UU philosophy is not just to ignore the metaethics and metaphysics temporarily so we can work together, but rather to not care about these issues and not be bothered by the fact that we appear confused. This I do not approve of. These are important questions, and you don’t know if what you don’t know can’t hurt you.
Why are metaphysics important?
Why are metaethics important?
They are important because they are confusing. Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list. I potentially lose a lot by not pushing these topics further.
For most people, I don’t think that metaethical considerations have a huge effect on their day-to-day decision making.
Metaphysics seems interesting. Do you think that you might start believing in paranormal stuff if you spend more effort investigating metaphysical questions? What other possible changes in your metaphysical position could you imagine that would have huge effects on your decision making?
Going to UU won’t stop you from discussing those concepts on LessWrong.
I’m personally part of diverse groups and don’t expect any one group to fulfill all my needs.
I do not think that I will start believing in paranormal stuff. I do not know what changes might arise from changes in my metaphysical position. I was not trying to single out these things as particularly important as much as I am just afraid of all things that I don’t know.
This is good advice. My current picture of UU is that it has a lot of problems, most of which are not problems for me personally, since I am also a rational person and in LW. I think UU and LW are the only groups which I am actively a part of other than my career. I wonder what other viewpoints I am missing out on.
I’m seeing a lot of comments in which it is implicitly assumed that most everyone reading lives in a major city where transportation is trivial and there is plenty of memetic diversity. I’m wondering if this assumption is generally accurate and I’m just the odd one out, or if it’s actually kinda fallacious.
(I can’t seem to figure out poll formatting. Hm.)
A city of ~200,000 people if you include the outlying rural areas, in which you can go from the several block wide downtown to farmland in 4-5 miles in the proper directions. Fifteen minutes from another city of 60,000 which is very much a state college town. Forty minutes away from a city of nearly 500,000 people.
Granted the city of ~200,000 has a major university and a number of biotech companies.
It’s somewhat inaccurate in my case (I live in the suburbs of a semi-major city).
A lot of the CFAR/MIRI core lives in Berkeley.
I think living in a big city is the standard that most people here consider normal. It’s like living in the first world. We know that there are people from India who visit but we still see being from the first world as normal.
When you have the choice between living in a place with memetic diversity or not living in such a place the choice seems obvious.
I’m back in school studying computer science (with a concentration in software engineering), but plan on being a competent programmer by the time I graduate, so I figure I need to learn lots of secondary and tertiary skills in addition to those that are actually part of the coursework. In parallel to my class subjects, I plan on learning HTML/CSS, SQL, Linux, and Git. What else should be on this list?
Preliminaries: Make sure you can touch type; being able to hit 50+ wpm without sweat makes it a lot easier to whip up a quick single-screen test program to check something. Learn a text editor with good macro capabilities, like Vim or Emacs, so you can do repetitive structural editing of text files without having to do every step by hand. Get into the general habit of thinking that whenever you find yourself doing several repetitive steps by hand, something is wrong and you should look into ways to automate the loop.
Working with large, established code bases, like Vladimir_Nesov suggested, is what you’ll probably end up doing a lot as a working programmer. Better get used to it. There are many big open-source projects you can try to contribute to.
Unit tests, test-driven development. You want the computer to test as much of the program as possible. Also look into the major unit testing frameworks for whatever language you’re working on.
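To make this concrete, here is a minimal sketch of what a unit test looks like, using Python’s built-in unittest module (the median function here is a made-up example, not from any particular project):

    import unittest

    def median(xs):
        """Return the median of a non-empty list of numbers."""
        s = sorted(xs)
        n = len(s)
        if n % 2 == 1:
            return s[n // 2]
        return (s[n // 2 - 1] + s[n // 2]) / 2.0

    class TestMedian(unittest.TestCase):
        def test_odd_length(self):
            self.assertEqual(median([3, 1, 2]), 2)

        def test_even_length(self):
            self.assertEqual(median([4, 1, 3, 2]), 2.5)

    if __name__ == "__main__":
        unittest.main()

In the test-driven style, you would write TestMedian first, watch it fail, and only then write median itself.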
Build systems: rigging up a complex project to build with a single command-line command. Also look into build servers, nightly builds, and the works. A real-world software project will want a server that automatically builds the latest version of the software every night and makes noise to the people responsible if it won’t build or if a unit test fails.
Oh, and you’ll want to know a proper command line for that. So when learning Linux, try to do your stuff in the command line instead of sticking to the GUI. Figure out where the plaintext configuration files driving whatever programs you use live, and how to edit them. Become suspicious of software that doesn’t provide plaintext config files. Learn about shell scripting and one-liners, and what the big deal in Unix about piping output from one program to the next is.
Git is awesome. After you’ve figured out how to use it on your own projects, look into how teams use it. Know what people are talking about when they talk about a Git workflow. Maybe check out Gerrit for a collaborative environment for developing with Git. Also check out bug tracking systems and how they can tie into version control.
For the social side of software development, Peopleware is the classic book. Producing Open Source Software is also good.
Know some full stack of web development. If you want a web domain running a neat webapp, how would you go about getting the domain, arranging for the hosting, installing the necessary software on the computer, setting up the web framework and generating the pages that do the neat thing? Can you do this by rolling your own minimal web server instead of Apache and your own minimal web framework instead of whatever out of the box solution you’d use? Then learn a bit about the out of the box web server and web framework solutions.
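As a sanity check on whether you understand the stack, here is roughly what the “minimal web server” exercise looks like in Python, using nothing but the standard socket module (a toy sketch for localhost experiments, not anything you would deploy):

    import socket

    # Serve one hard-coded page, one request per connection.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("localhost", 8080))
    server.listen(1)

    while True:
        conn, addr = server.accept()
        conn.recv(4096)  # read and ignore the raw HTTP request bytes
        body = b"<html><body>Hello from a hand-rolled server</body></html>"
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Type: text/html\r\n"
                     b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                     b"\r\n" + body)
        conn.close()

Everything Apache or a web framework adds (routing, concurrency, TLS, error handling) is layered on top of this loop.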
Have a basic idea about the JavaScript ecosystem for frontend web development.
Look into cloud computing. It’s new enough not to have made it into many curricula yet. It’s probably not going to go away anytime soon. How would you use it, why would you want to use it, when would you not want to use it? Find out why map-reduce is cool.
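On the map-reduce point, the core idea fits in a few lines of plain Python; this toy word count keeps the map and reduce phases separate, the way a framework like Hadoop would distribute them across machines:

    from collections import defaultdict

    def map_phase(document):
        # Each mapper emits (word, 1) pairs; mappers can run in parallel.
        return [(word, 1) for word in document.split()]

    def reduce_phase(pairs):
        # The framework groups pairs by key; each reducer sums one key's values.
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    documents = ["the cat sat", "the cat ran"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    print(reduce_phase(pairs))  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}

The reason it is cool: as long as your problem decomposes into these two pure functions, the framework can parallelize it across thousands of machines without you writing any coordination code.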
Learn how the Internet works. Learn why people say that the Internet was made by pros and the web was made by amateurs. Learn how to answer the interview question “What happens between typing a URL in the address field and the web page showing up in the browser” in as much detail as you can.
Look into the low-level stuff. Learn some assembly. Figure out why Forth is cool by working through the JonesForth tutorial. Get an idea of how computers work below the OS level. The Elements of Computing Systems describes this for a toy computer. Read up on how people programmed a Commodore 64; it’s a lot easier to understand than a modern PC.
Learn about the difference between userland and kernel space in Linux, and how programs written (in assembly) right on top of the kernel work. See how the kernel is put together. See if you can find something interesting to develop in the kernel-side code.
Learn how to answer the interview question “What happens between pressing a key on the keyboard and a letter showing up on the monitor” in as much detail as you can.
Write a simple ray-tracer and a simple graphics program that does something neat with modern OpenGL and shaders. If you want to get really crazy with this, try writing a demoscene demo with lots of graphical effects and a synthesized techno soundtrack. If you want even crazier, try to make it a 4k intro.
Come up with a toy programming language and write a compiler for it.
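To give a sense of the scale of that exercise, here is a hypothetical minimal version: compiling arithmetic expressions to instructions for a tiny stack machine. (It borrows Python’s own parser so the sketch fits in a screenful; the real exercise would include writing your own parser too.)

    import ast

    def compile_expr(node):
        """Compile an arithmetic AST node to stack-machine instructions."""
        if isinstance(node, ast.Constant):      # literal: push it
            return [("PUSH", node.value)]
        if isinstance(node, ast.BinOp):
            op = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}[type(node.op)]
            return compile_expr(node.left) + compile_expr(node.right) + [(op, None)]
        raise ValueError("unsupported expression")

    def run(program):
        """Execute the instruction list on a simple stack machine."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append({"ADD": a + b, "SUB": a - b, "MUL": a * b}[op])
        return stack.pop()

    tree = ast.parse("1 + 2 * 3", mode="eval").body
    print(run(compile_expr(tree)))  # 7

From there you can grow it: your own parser and syntax, variables, conditionals, functions.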
Write a toy operating system. Figure out how to make a thing that makes a PC boot off the bare iron, prints “Hello world” on the screen and doesn’t do anything beyond that. Then see how far you can get in making the thing do other things.
Also this list looks pretty good.
Regarding touch-typing, do you find yourself reaching ‘top speed’ often while programming?
It’s not really about typing large amounts of text quickly, it’s basically about
(1) not having to pay attention to the keyboard; your fingers should know what to do without taking up mindspace; and
(2) your typing being able to keep up with your thinking—the less your brain has to stop and wait for fingers to catch up, the better.
Yes, this is a critical skill. Especially when someone is learning programming, it is sad to see their thinking interrupted all the time by things like “where do I find the ‘&’ key on my keyboard?”; by the time the key is finally found, they have already forgotten what they wanted to write.
This part is already helped by many development environments, where you just write a few symbols and press Ctrl+space or something, and it completes the phrase. But this helps only with long words, not with symbols.
It’s not the top speed, it’s the overhead. It is incredibly irritating to type slowly or make typos when you’re working with a REPL or shell and are tweaking and retrying multiple times: you want to be thinking about your code and all the tiny niggling details, and not about your typing or typos.
For a decent summary, here’s a pretty well-written survey paper on cloud computing. It’s three years old now, but not outdated.
It’s a good start, but I notice a lack of actual programming languages on that list. This is a very common mistake. A typical CS degree will try to make sure that you have at least basic familiarity with one language, usually Java, and will maybe touch a bit on a few others. You will gain some superpowers if you become familiar with all or most of the following:
A decent scripting language, like Python or Ruby. The usual recommendation is Python, since it has good learning materials and an easy learning curve, and it’s becoming increasingly useful for scientific computing.
A lisp. Reading Structure and Interpretation of Computer Programs will teach you this, and a dizzying variety of other things. It may also help you achieve enlightenment, which is nice. Seriously, read this book.
Something low-level, usually C.
Something super-low-level: an assembly language. You don’t have to be good at writing it, but you should have basic familiarity with the concepts. Fun fact: if you know C, you can get the compiler to show you the corresponding assembly (with GCC, compile with the -S flag).
You should take the time to go above and beyond in studying data structures, since it’s a really vital subject and most CS graduates’ intuitive understanding of it is inadequate. Reading through an algorithms textbook in earnest is a good way to do this, and the Wikipedia pages are almost all surprisingly good.
When you’re learning git, get a GitHub account, and use it for hosting miscellaneous projects. Class projects, side projects, whatever; this will make acquiring git experience easier and more natural.
I’m sure there’s more good advice to give, but none of it is coming to mind right now. Good luck!
Sorry if I wasn’t clear. I intended the list to include only skills that make you a more valuable programmer that aren’t explicitly taught as part of the degree. Two Java courses (one object-oriented) are required as is a Programming Languages class that teaches (at least the basics of) C/C++, Scheme, and Prolog. Also, we must take a Computer Organization course that includes Assembly (although, I’m not sure what kind). Thanks for the advice.
In school you are typically taught to make small projects: a small algorithm, or a small demonstration that you can display information in an interactive user interface.
In real life (at least in my experience), the applications are typically big. Not too deep, but very wide. You don’t need complex algorithms; you just have dozens of dialogs, hundreds of variables and input boxes, and you must create some structure to prevent all this from falling apart (especially when the requirements keep changing while you code). You also have a lot of supporting functionality in a project (for example: database connections, locking, transactions, user authentication, user roles and permissions, printing, backup, export to PDF, import from Excel, etc.). Again, unless you have structure, it falls apart. And you must take good care of the many things that may go wrong (such as: if the user’s web browser crashes, so the user cannot explicitly log out of the system, the edited item should not remain locked forever).
To be efficient at this, you also need to know some tools for managing projects. Some of those tools are Java-specific, so your knowledge of Java should include them; they are parts of the Java ecosystem. You should use javadoc syntax to write comments; JUnit to write unit tests; Maven to create and manage projects, some tools to check your code quality, and perhaps even Jenkins for continuous integration. Also the things you already have on your list (HTML, CSS, SQL, git) will be needed.
To understand creating web applications in Java, you should be able to write your own servlet, and perhaps even write your own JSP tag. Then all the frameworks are essentially libraries built on this, so you will be able to learn them as needed.
As an exercise, you could try to write a LessWrong-like forum in Java (with all its functionality; of course use third-party libraries where possible); with javadoc and unit tests. If you can do that, you are 100% ready for the industry (the next important skill you will need is leading a team of people who don’t have all of these skills yet, and then you are ready for the senior position). But that can take a few months of work.
There is another aspect of working on big projects that seems equally important. What you are talking about I’d call “design”: the skill of organizing the code (and, more generally, the development process) so that it remains intelligible and easy to teach new tricks as the project grows. It’s the kind of thing that reading SICP and writing big things from scratch would teach.
The other skill is “integration”: the ability to open up an unfamiliar project that’s too big to understand well in a reasonable time, and figure out enough about it to change what you need, in a way that fits well into the existing system. This requires careful observation, acting against your habits to conform to local customs, and calibrating your sense of how well you understand something, so that you can judge when you’ve learned just enough to do your thing right, but no less and not much more. Other than on a job, this could be learned by working a bit (not too much on each one, lest you become comfortable) on medium or large open source projects (implementing new features, not just fixing trivial bugs), possibly discarding the results of the first few exercises.
I’ve TAed a class like the Programming Languages class you described. It was half Haskell, half Prolog. By the end of the semester, most of my students were functionally literate in both languages, but I did not get the impression that the students I later encountered in other classes had internalized the functional or logical/declarative paradigms particularly well—e.g., I would expect most of them to struggle with Clojure. I’d strongly recommend following up on that class with SICP, as sketerpot suggested, and maybe broadening your experience with Prolog. In a decade of professional software engineering I’ve only run into a handful of situations where logic programming was the best tool for the job, but knowing how to work in that paradigm made a huge difference, and it’s getting more common.
I know actuaries have huge tables of the probabilities of death at any given age, based on historical data. Where can I find more detailed data on causes of death? Can someone point me to similar tables for major life events, such as the probabilities of being robbed, being laid off, being in an accident of some kind, getting divorced, and so on?
I am becoming a believer in being prepared; even if there is no cost-effective preventative measure, being mentally prepared for an event is very beneficial too, in my experience.
I’ve been using this.
CDC has every death certificate, although they only let you look at aggregated information.
Oh wow, a highly motivated person can do significant original mortality research via their online tool. You can generate cause of death graphs for almost any sort of cohort you might care about.
Let me know if you find anything useful. I’m working on a project (though I haven’t done anything on it since making that post).
fubarobfusco’s reply to that post might be useful to you too.
It seems to be pretty well decided that (as opposed to directly promoting Less Wrong, or Rationality in general), spreading HPMoR is a generally good idea. What are the best ways to go about this, and has anyone undertaken a serious effort?
I came to the conclusion, after considering creating some flyers to post around our meetup’s usual haunts, that online advocacy would be much more efficient and cost-effective. Then, after deciding that promotion on large sites (where your post is just more noise) is mostly useless, I realized that sharing among smaller communities that you are already a part of (game or special-interest forums, Facebook groups, etc.) might increase the likelihood of a clickthrough, thanks to even a modest amount of social clout and in-group effect (as opposed to creating an account just to spam). And posting (and bumping) is a very trivial inconvenience—but if you are still held back by the effort of creating a blurb, I’m happy to provide the one I used.
When it comes to typical online forums, signatures are a good way to promote things. Take a quote from HPMOR and attach a link to it.
This got me to read it. Quote was about only wanting to rule the world to get more books or something to that effect.
Of course, you should only do this where the forum has made the foolish choice to allow signatures. (One of the things I appreciate about Reddit/LW compared to forums is how they strongly discourage signatures.)
Convince me of this claim that you think is well decided.
I am not convinced that from the viewpoint of a non-rationalist that HPMoR doesn’t have many of the same problems as Spock. I can see many people reading the book, feeling that HP is too “evil,” and deciding that “rationality” is not for them.
Also, EY said “Authors of unfinished stories cannot defend themselves in the possible worlds where your accusation is unfair.” This should swing both ways. If it turns out that HP goes crazy because he was being meta and talking to himself too much, then spreading HPMoR is probably not as good an idea.
Why is “downvoted” so frequently modified by “to oblivion”? Can we please come up with a new modifier here? This is already a dead phrase, a cliche which seems to get typed without any actual thought going into it. Wouldn’t downvoting “to invisibility” or “below the threshold” or even just plain “downvoting”, no modifier, make a nice change?
I prefer ‘to oblivion’ over all your suggested alternatives. Why do you think it should change?
Slang vocabulary tends to become more consistent and repetitive over time in my experience. New phrases will appear and then go to fixation until everyone uses them. The only answer is to try to be as creative as possible in your own word choices.
Is the problem of measuring rationality related to the problem of measuring programming skill? Both are notoriously hard, but I can’t tell if they’re hard for the same reason...
I think they’re different, though with some overlap.
Rationality applies to a much wider range of subjects, and involves dealing with much more uncertainty.
Petrov Day: http://lesswrong.com/lw/jq/926_is_petrov_day/
Does “Don’t judge me for X” mean “Don’t reduce my status in your mind to account for X”?
I think it means “Don’t treat me as a stranger about whom all you knew was X.”
I think it means “Don’t update your opinion of me on the basis of evidence X”.
A personal anecdote I’d like to share which relates to the recent polyphasic sleep post (http://lesswrong.com/lw/ip6/polyphasic_sleep_seed_study_reprise/): My 7-year-old son, who always tended to sleep long and late, seems to have developed segmented sleep by himself in the last two weeks. He claims to wake at, e.g., 3:10 AM, get dressed, butter his school bread—and then go to bed again—in our family bed. It’s no joke. He lies dressed in bed and his satchel is packed. And the interesting thing is: he is more alert and less bad-tempered than before. He doesn’t do afternoon naps, though—at least none that I know of.
What can have caused this? Maybe the seed was that our children were always allowed to come into the family bed in the night (but only in the night) which they did often.
I remember reading somewhere (sorry, no link) that waking up around midnight, and then going to sleep again after an hour or so, was considered normal a few hundred years ago. Now this habit is gone, probably because we make the night shorter using artificial lights.
Yes, I know; see e.g. http://en.wikipedia.org/wiki/Segmented_sleep (I knew that beforehand). That was the reason I wasn’t worried when my children woke up at night and crawled into our family bed (some other parents seem to worry about the quality of their children’s sleep).
But I’m surprised that he actually segmented and that it went this far. I understood that artificial lighting—and we have enough of that—suppresses this segmentation.
Perhaps it is not the light per se, but the fact that when you stay awake in the evening and wake up to an alarm clock in the morning, the body learns to give up segmented sleep to protect itself from sleep deprivation. Maybe the interval between going to sleep and having to wake up is large enough for your children.
Possibly. But he has always been a late riser, and he doesn’t really go to sleep earlier than before. In fact, he gets up earlier than before. But maybe his sleep pattern is just changing due to normal development.
My older son (9 years) also sometimes gets up in the night to visit the family bed. But I guess he is not awake long. He likes to build things and read or watch movies (from our file server) until quite late in the evening (often 10 PM). We allow that because he has no trouble getting up early.
Robin Williams is transhumanism friendly.
Do I have a bias or a useful heuristic? If a signal is easy to fake, is it a bias to assume that it is disingenuous, or is it a useful heuristic?
I read Robin Hanson’s post about why there are so many charities specifically focusing on kids, and he basically summed it up as signalling kindness to potential mates being a major factor. There were some good rebuttals in the comment sections, but whether or not signalling is at play is not the point; I’m sure to a certain degree it is, though how much I don’t know. The point is that I automatically dismiss the authenticity of a signal if the signal is difficult to authenticate. In this example, it is possible for people both to signal that they care about children to a potential mate and to actually, really care about children (e.g. an innate emotional response).
EDIT: Just to be clear, this is a question about signalling and how I strongly associate easy to fake signals with dishonest signalling, not about charities.
That’s like asking whether someone is a freedom fighter or a terrorist.
Every heuristic involves a bias when you use it in some contexts.
Yes, but does it more often yield a satisfactory solution across many contexts? If yes, then I’d label it a useful heuristic; if it is often wrong, I would label it a bias.
You’re not using your words as effectively as you could be. Heuristics are mental shortcuts, bias is a systematic deviation from rationality. A heuristic can’t be a bias, and a bias can’t be a heuristic. Heuristics can lead to bias. The utility of a certain heuristic might be evaluated based on an evaluation of how much computation using the heuristic saves versus how much bias using the heuristic will incur. Using a bad heuristic might cause an individual to become biased, but the heuristic itself is not a bias.
I agree with your last sentence. The important thing should be how much good the charity really does for those children. Are they really making their lives better, or is it merely some nonsense to “show that we care”?
Because there are many charities (at least in my country) focusing on providing children things they don’t really need; such as donating boring used books to children in orphanages. Obviously, “giving to children in orphanages” is a touching signal of caring, and most people don’t realize that those children already have more books than they can read (and they usually don’t wish to read the kind of books other people are throwing away, because honestly no one does). In this case, the real help to children in orphanages would be trying to change the legislation to make their adoption easier (again, this is an issue in my country, in your part of the world the situation may be different), helping them avoid abuse, or providing them human contact and meaningful activities. But most people don’t care about the details, not even enough to learn those details.
I suspect there’s also some sentimentality about books in play.
Yes, throwing a book away is nearly like burning it. Giving it to an orphanage is completely guilt free.
This depends on what you mean by “care”, i.e., they care about children in the sense that they derive warm fuzzies from doing things that superficially seem to help them. They don’t care in the sense that they aren’t interested in how much said actions actually help children (or whether they help them at all).
I think that most people just never question the effectiveness of the charities they donate to. It’s a charity for xxx, so of course it helps xxx!
And yet they question the effectiveness of the things they do for themselves.
Well, because that’s in near mode.
If I do something for myself and there is no obvious result, I see that there is no obvious result, and it disappoints me. If I do something for other people, there is always an obvious result: I feel better about myself.
This is more or less the distinction I was going for.
Why isn’t this equally true for doing things for oneself?
Because other people reward you socially for doing things for other people. If you do something good for person A, it makes sense for a person A to reward you—they want to reinforce the behavior they benefit from. But it also makes sense for an unrelated person B to reward you, despite not benefiting from this specific action—they want to reinforce the general algorithm that makes you help other people, because who knows, tomorrow they may benefit from the same algorithm.
The experimental prediction of this hypothesis is that person B will be more likely to reward you socially for helping person A if person B believes they belong to the same reference class as person A (and thus it is more likely that an algorithm benefiting A would also benefit B).
Now who would have a motivation to reward you for helping yourself? One possibility is a person who really loves you; such a person would be happy to see you doing things that benefit you. Parents or grandparents may be in that position naturally.
Another possibility is a person who sees you as a loyal member of their tribe, but not a threat. For such a person, your success is the tribe’s success, which is their success. They benefit from having stronger allies, unless those allies becoming strong changes their position within the tribe. So one would help members of their tribe who are significantly weaker… or perhaps even significantly stronger… in either case the tribe becomes stronger and one’s relative position within it is not changed. The first part is teachers helping their students, or tribe leaders helping their tribe except for their rivals; the second part is average tribe members supporting their leader.
Again, the experimental prediction would be that when you join some “tribe”, the people stronger than you will support you at the beginning, but then will be likely to stab you in the back when you reach their level.
Now, how to use this knowledge for your success in real life? We are influenced by social rewards whether we want it or not. One strategy could be to reward myself indirectly—for example, make a commitment that when I make something useful for myself, I will reward myself with a friendly social interaction. A second strategy is to seek the company of people who love me, using “do they reward me for helping myself?” as a filter. (The problem is how to tell the difference between these people and those who reward me for being a weak member of their tribe, and will later backstab me when I become stronger.) A third strategy is to seek the company of people much stronger than me with similar values (and not forget to switch to even stronger people when I become strong). Another strategy could be to join a group that feels far from victory… a group that is still in “conquering the world” mode, not in “sharing the spoils” mode. (Be careful when the group reaches some victories.)
Anecdotal verification: one of my friends said that when he was running out of money, it made sense for him to buy meals for other people. Those people didn’t reciprocate, but third parties were more likely to help him.
Then I guess people from CFAR should go to some universities and give lectures about… effective altruism. (With the expected result that the students will be more likely to support CFAR and attend their seminars.) Or I could try this in my country when recruiting for my local LW group.
I guess it also explains why religious groups focus so much on charity. It is difficult to argue against a group that many people associate with “helping others,” even if the group’s other actions hurt others. The winning strategy is probably making charity 10% of what you really do, but 90% of what other people associate with you.
EDIT: Doing charity is the traditional PR activity of governments, U.N., various cults and foundations. I feel like reinventing the wheel again. The winning strategies are already known and fully exploited. I just didn’t recognize them as viable strategies for everyone including me, because I was successfully conditioned to associate them with someone else.
Among other things, charity is a show of strength.
Sure. For example if you are donating money, you display your ability to make more money than you need. And if you donate someone else’s money (like a church that takes money from state), you display your ability to take money from people, which is even more impressive.
wow this is an insanely better version of my comment.
Because it’s considered good to even try to help someone else, so you care less about outcomes. E.g., donating to charity is a good act regardless of whether you check to see if your donation saved a life. On the other hand, doing something for yourself that has no real benefits is viewed as a waste of time.
How come practitioners of (say) homoeopathy haven’t all gone bankrupt, then?
Just because you question something, doesn’t mean you reach the correct answer.
I am wondering what a PD tournament would look like if the goal was to maximize the score of the group rather than the individual player. For some payoff matrices, always cooperate trivially wins, but what if C/D provides a greater net payoff than C/C, which in turn is higher than D/D? Does that just devolve to the individual game? It feels like it should, but it also feels like giving both players the same goal ought to fundamentally change the game.
I haven’t worked out the math; the thought just struck me while reading other posts.
The game you are talking about should not be called PD.
The solution will be for everyone to pick randomly (weighted based on the difference between the C/C and D/D payoffs) until they get a C/D outcome, and then just pick the same thing over and over. (This isn’t a unique solution, but it seems like a Schelling point to me.)
The Prisoner’s Dilemma is technically defined as requiring that this not be the case, precisely so that one doesn’t have to consider the case (in iterated games) where the players agree to take turns cooperating and defecting. You are considering a related but not identical game. Which is of course totally fine, just saying.
If you allow C/D to have a higher total than CC, then it seems there is a meta-game in coordinating the taking-turns—“cooperating” in the meta-game takes the form of defecting only when it’s your turn. Then, the players maximise both their individual scores and the group score by meta-cooperating.
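Since the math is easy to check by simulation, here is a small sketch with made-up payoffs satisfying the condition above (C/D totals more for the group than C/C, which beats D/D), comparing the random-until-lock-in idea (using unweighted coin flips for simplicity, rather than the weighting suggested above) against explicit turn-taking:

    import random

    # (my move, their move) -> (my score, their score); all numbers hypothetical
    PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("C", "D"): (0, 8), ("D", "C"): (8, 0)}

    def group_score(moves_a, moves_b):
        return sum(sum(PAYOFF[(a, b)]) for a, b in zip(moves_a, moves_b))

    def random_until_asymmetric(rounds):
        # Both randomize until one C/D round occurs, then repeat it forever.
        a_moves, b_moves, locked = [], [], None
        for _ in range(rounds):
            if locked:
                a, b = locked
            else:
                a, b = random.choice("CD"), random.choice("CD")
                if a != b:
                    locked = (a, b)
            a_moves.append(a)
            b_moves.append(b)
        return a_moves, b_moves

    def take_turns(rounds):
        # Meta-cooperation: alternate who defects, splitting the C/D payoff.
        a = ["C" if r % 2 == 0 else "D" for r in range(rounds)]
        b = ["D" if r % 2 == 0 else "C" for r in range(rounds)]
        return a, b

    rounds = 1000
    print(group_score(*random_until_asymmetric(rounds)))  # just under 8 per round
    print(group_score(*take_turns(rounds)))               # exactly 8 per round

With these payoffs both strategies converge on the C/D group optimum of 8 per round; the random version only loses a little expected value in the rounds before lock-in, while turn-taking also equalizes the two individual scores.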
[LINK] A day in the life of an NPC. http://www.npccomic.com/2011/10/19/beautiful-day/
Ilya Shkrob’s In The Beginning is an attempt to reconcile science and religion. It’s the best such attempt that I’ve seen, better than I thought possible. If you enjoy “guru” writers like Eliezer or Moldbug, you might enjoy this too.
Is there a summary available?
I haven’t found one, so I’ll try to summarize here:
“Prokaryotic life probably came to Earth from somewhere else. It was successful and made Earth into a finely tuned paradise. (A key point here is the role of life in preserving liquid water, but there are many other points, the author is a scientist and likes to point out improbable coincidences.) Then a tragic accident caused individualistic eukaryotic life to appear, which led to much suffering and death. Evolution is not directionless, its goal is to correct the mistake and invent a non-individualistic way of life for eukaryotes. Multicellularity and human society are intermediate steps to that goal. The ultimate goal is to spread life, but spreading individualistic life would be bad, the mistake has to be corrected first. Humans have a chance to help with that process, but aren’t intended to see the outcome.”
The details of the text are more interesting than the main idea, though.
Hold on, is he trying to imply that prokaryotes aren’t competitive? Not only does all single-celled life compete, it competes at a much faster pace than multicellular life does.
Yeah, I know. I don’t agree with the text, but I think it’s interesting anyway.
What makes it interesting?
Based on that summary, I’d say that it’s interesting because it draws on enough real science to be superficially plausible, while appealing to enough emotional triggers to make people want to believe in it enough that they’ll be ready to ignore any inconsistencies.
Superficially plausible: Individuals being selfish and pursuing their own interest above that of others is arguably the main source of suffering among humans, and you can easily generalize the argument to the biosphere as a whole. Superorganisms are indeed quite successful due to their ability to suppress individualism, as are multi-celled creatures in general. Humans do seem to have a number of adaptations that make them more successful by reducing individualistic tendencies, and it seems plausible to claim that even larger superorganisms with more effective such adaptations could become the dominant power on Earth. If one thinks that there is a general trend of more sophisticated superorganisms being more successful and powerful, then the claim that “evolution is not directionless” also starts to sound plausible. The claim that “humans have a chance to help with that process but aren’t intended to see the outcome” is also plausible in this context, since a truly intelligent superorganism would probably be very different from humanity.
“Evolution leads to more complex/intelligent creatures and humans are on top of the hierarchy” is an existing and widely believed meme that similarly put humans at the top of the existing order, and this text draws on that older meme in two ways: it feels plausible and appealing for many of the same reasons the older meme was plausible, and anyone who already believed in the old meme will be more inclined to see this as a natural extension of it.
Emotional triggers: It constructs a powerful narrative of progress that places humans at the top of the current order, while also appealing to values related to altruism and sacrificing oneself for a greater whole, and providing a way to believe that things are purposeful and generally evolving towards the better.
The notion of a vast superorganism that will one day surpass and replace humanity also has the features of vastness and incomprehensibility, two features which Keltner and Haidt claim form the heart of prototypical cases of awe.
The more I think of it, the more impressive the whole thing starts to feel, in the “memeplex that seems very effectively optimized for spreading and gaining loyal supporters” sense.
I’d add slow-to-moderate-paced, low-pitched sounds to the list of vastness indicators.
I’m not sure about music with a fast heavy bass rhythm, though that may also be a sort of vastness.
Sounds like an attempt to reconcile, not science and religion in general, but specifically science and the Christian concepts of the Fall and original sin; or possibly some sort of Gnosticism.
(Aleister Crowley made similar remarks about individuality as a disease of life in The Book of Lies, but didn’t go so far as to attribute it to eukaryotes.)
Well the relevant story (God banishing Adam and Eve from the Garden of Eden) is in Genesis, so it’s in the Torah as well. Gnostics considered the Fall a good thing—it freed humanity from the Demiurge’s control.
Holy crap that’s easily the stupidest thing I’ve read this week.
Downvoted for insult + not giving a reason.
I don’t mean to say your conclusion is wrong, but I’d like to point out that if Eliezer’s ideas were summed up as one paragraph and posted to some other website, many people there would respond using the same thought process that you used. Anyway, a text can be wrong and still worth reading. I think the text I linked to is very worth reading. If you get halfway through and still think that it’s stupid, let me know—I’ll be pretty surprised.
I like this. Like all good religion, it’s an idea which feels true and profound but is also clearly preposterous.
It reminds me of some concepts in anime I liked, like the Human Instrumentality Project in Neon Genesis Evangelion and the Ragnarok Connection in Code Geass.
Why was it so hot back then?
Large amphibians survived, so it couldn’t have been that hot.
On the other hand, the event extinguished more species than the comet that killed the dinosaurs. Maybe those amphibians just had a good strategy for dealing with the heat.
It’s “settled” that it was hot. ;-)
However! If it was cool enough in places far from Siberia, then it’s obvious that this lava lake caused the high temperatures around it, not some “global warming caused by CO2 buildup, 250 million years ago.”
Then big amphibians could have survived in Antarctica, for example.
Amphibians have always been freshwater creatures. And if the oceans were hot because of this supervolcano, some distant ponds and lakes could have been merely warm.
Hm, the trouble is that this doesn’t account for the insulating effect of air, or a thin cool surface layer. A layer of air can reflect a lot of radiated heat right back into its source. Dare I say you might need something like a climate model to decide this?
In the climate models we have, CO2 is the most important player.
All those models (about 60 of them) have failed to predict the lack of warming since 1997, despite more CO2 in the air.
We would need a model without Arrhenius, if you ask me.