Open Thread, March 16-31, 2012
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
In the notes for the current chapter of HPMOR, we have the following:
I greatly enjoy programming, and am currently employed at about half that doing tech support, where my only time to actively program is in bash scripts. I followed the link to the Quixey challenge, and while I was not solving them in under a minute, I am consistently solving the practice problems. My question is this: now what?
I have no experience in actual development, beyond the algorithm analysis classes I took 6 years ago. I have a family of 6, and live in the KCMO area. How do I make the jump into development, from no background? Anyone have any experience in that transition?
I don’t, but you might want to check out communities like Slashdot (http://slashdot.org) or Stack Overflow (http://stackoverflow.com) if you don’t get responses here.
A meta-anthropic explanation for why people today think about the Doomsday Argument: observer moments in our time period have not solved the doomsday argument yet, so only observer moments in our time period are thinking about it seriously. Far-future observer moments have already solved it, so a random sample of observer moments that think about the doomsday argument and still are confused are guaranteed to be on this end of solving it.
(I don’t put any stock in this. [Edit: this may be because I didn’t put any stock in the Doomsday argument either.])
You have reduced the DA to an absurdity, which comes from the DA itself. Clever.
Self-reference is quite a dangerous thing for a statement. If something can reference itself, it is often prone to paradoxical consequences that invalidate it.
If the conditions of this argument were true, it would annul the Doomsday Argument, thus bringing about its own conditions!
Yes, that’s my favorite thing about it and the reason I considered it worthy of posting. (It only works if everyone knows about it, though.)
The moon and sun are almost exactly the same size as seen from Earth, because in worlds where this is not the case, observers pick a different interesting coincidence to hold up as non-anthropic in nature.
What?
Meta-anthropics is fun!
But if even a tiny fraction of future observers thinks seriously about the hypothesis despite knowing the solution...
My current guess is that having the knows-the-solution property puts them in a different reference class. But if even a tiny fraction deletes this knowledge...
Isn’t this true about any conceivable hypothesis?
Yes, but most hypotheses don’t take the form, “Why am I thinking about this hypothesis?” and so your comment is completely irrelevant.
To elaborate: the doomsday argument says that the reason we find ourselves here rather than in an intergalactic civilization of trillions is because such a civilization never appears. I give a different explanation which relies on the nature of anthropic arguments in general.
A notion I got from reading the game company discussion—how much important invention comes from remembering what you wanted before you got used to things?
I didn’t want to put this as a discussion post in its own right, since it’s not really on topic, but I suspect it might be of use to people. I’d like a “What the hell do you call this?” thread. It’s hard to Google a concept, even when it might be a well-established idea in some discipline or other. For example:
Imagine you’re playing a card game, and another player accidentally exposes their cards just before you make some sort of play. You were supposed to make that play in ignorance, but you now can’t. There are several plays you could make which would have been beyond suspicion had you made them in ignorance, but if you make them now, they will be seen as suspect and opportunist in light of your opponent’s blunder, so you feel obliged to make a less favourable play that is at least beyond reproach.
Is there a term to describe this? It, and various other social analogues, seem to crop up quite a lot in various guises, but I don’t have a satisfactory label for it.
Alternatively, does anyone else have a “what the hell do you call this?” they want to throw out to the crowd?
The English Stack Exchange is a great site for getting answers to “what is a word or short phrase for … ?” questions.
That heuristic where, to make questions of fact easier to process internally, you ask “what does the world look like if X is true? what are the consequences and testable predictions of X?” rather than just “is X true?”, which tends to just query your inner Google and return the first result, oftentimes after a period of wait that feels like thinking but isn’t.
I want to know what to call that heuristic.
Making beliefs pay rent?
What is it called when you meet two acquaintances and begin to introduce them to each other, only to realize that you have forgotten both of their names?
How difficult would it be to code the user displays so that they also show average karma per comment, or better yet a karma histogram? Would that significantly increase the time it takes the site to load?
Because the number of quotes already used is increasing, and the number of LW users is increasing, I propose that the next quotes thread should include a new rule: use the search feature to make sure your quote has not already been posted.
For balance, once every two years there could be a thread for already-posted quotes, like “choose the best quote ever”, to filter the best from the best.
Then the winning quotes could randomly appear on the LW homepage.
It’s already considered bad form to repeat a quote. I thought this was one of the listed rules, but since it isn’t (at least in the current thread) I agree that it should be added.
No repeats should be in the rules, but a posting on the rationality quotes pages is not and should not be a certification that the poster has investigated and is confident that there is no repeat.
If I had to investigate that hard before posting on that thread, I’d never do it because it wouldn’t be worth the investment of time. And the real consequences of repeating a quote are so low. In short:
Good rule.
Bad rule, as phrased.
It certainly should be a certification that the poster copied some keywords from the quote into the search box and pressed enter.
If you are referring specifically to the literal meaning of ‘sure’ then fine. If you refer to the more casual meaning of “yeah, I checked this with search” then I disagree and would suggest that you implement the “it’s not worth it for you” contingency.
I’ve always found the search engine quite clunky, and of questionable reliability. I think an actually explicit social norm will solve most of the problem. That said, I won’t be put out if posting rationality quotes is not worth my effort.
So far as I know, the rule is just that a quote shouldn’t have appeared in a quotes thread, but if it’s appeared elsewhere, it’s ok to post it in a quotes thread.
A cached thought: We need a decent search engine, and the more posts and comments accumulate, the more we need it.
I don’t. Posting rationality quotes is one of the few things new members can do effectively, and new members are the least liable to know of any social norms. That’s why I said make the search feature explicit. Also, it’s good at finding quotes, since exact words are used, if at all possible (which is why it’s not called “Rationality Paraphrases”).
I suspect most of our disagreement is about how bad it is for there to be repeats. At the level of bad I assign, making the norm explicit is sufficient to diminish the problem sufficiently. You think the downside is a bit worse, so you support a more intrusive, but more effective, solution.
I want to post some new decision theory math in the next few days. The problem is that it’s a bit much for one post, and I don’t like writing sequences, and some people don’t enjoy seeing even one mathy post, never mind several. What should I do? Compress it into one post, make it a sequence, keep it off LW, or something else?
I for one often don’t do more than skim mathy posts, but I think they’re important and I’m glad people make them. (So my vote is for either one post or a sequence, and it sounds like you’re leaning towards the former.)
Edit:
The reasons I often skim mathy posts (probably easy to guess, but included for completeness):
The math is often above my level.
They take more time and attention to read than non-mathy ones.
-- Neal Stephenson, Reamde
Those people need to learn to live with seeing math if they want to be on a site trying its best to refine human rationality.
Post it please.
I already have: 1, 2.
Yay ^_^
My preference would be for one post per major idea, however short or long that ends up.
Please keep posting mathy stuff here, I find it extremely interesting despite not having much of a math background.
Hi. Long time reader, first time poster (under a new name). I posted once before, then quit because I am not good at math and this website doesn’t offer many examples of worked out problems of Bayes’ theorem.
I have looked for a book or website that gives algebraic examples of basic Bayesian updates. While there are many books that cover Bayes, all require calculus, which I have not taken.
In a new article by Kaj_Sotala, fallacies are interpreted in the light of Bayes theorem. I would like to participate in debates and discussion where I can identify common fallacies and try to calculate them using Bayesian methods, which may not require calculus but simple algebra and basic logic of probability.
However, if someone could simply create an article with a few worked examples of Bayesian updating, that would still be very helpful. I have read the explanations but I am just not very good at math. I have passed (A’s and B’s) in college trig, algebra, and precal, but I flunked out of calculus. Maybe in the future when I am more financially secure I could spend the time to really understand more complicated Bayesian updates.
Right now, I feel like there is a real need to simply have some basic worked out examples. Not long explanations, just problems with the math worked out. Preferably non calculus based problems.
My favorite explanation of Bayes’ Theorem barely requires algebra. (If you don’t need the extended explanation, just scroll to the bottom, where the problem is solved.)
That is a good article. I am looking for dozens of examples, all worked out in subtly different ways, just like a mini-textbook. OR a chapter in a regular text book. I’ll probably have to just do it myself.
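Something like the following is roughly the kind of worked example I mean, written out as a tiny script so every arithmetic step is visible. The numbers are the classic mammography screening illustration and are for practice only, not real statistics:

    # One Bayesian update, no calculus, just arithmetic.
    # Illustrative numbers: 1% of patients have the disease, the test catches
    # 80% of real cases, and falsely flags 9.6% of healthy patients.
    prior = 0.01                # P(disease)
    p_pos_if_disease = 0.80     # P(positive | disease)
    p_pos_if_healthy = 0.096    # P(positive | no disease)

    # Step 1: total probability of seeing a positive result.
    p_pos = prior * p_pos_if_disease + (1 - prior) * p_pos_if_healthy
    # = 0.008 + 0.09504 = 0.10304

    # Step 2: Bayes' theorem.
    posterior = (prior * p_pos_if_disease) / p_pos
    # = 0.008 / 0.10304

    print(posterior)  # about 0.078: a positive test means only ~7.8% chance of disease

A whole chapter of such problems would just vary the prior and the two likelihoods and work through the same two steps.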
I decided to finally start reading The Hanson-Yudkowsky AI-Foom Debate. I am not sure how much time I will have, but I will post my thoughts along the way as replies to this comment. This is also an opportunity for massive downvotes :-)
In The Weak Inside View Eliezer Yudkowsky writes that it never occurred to him that his views about optimization ought to produce quantitative predictions.
Eliezer further argues that we can’t use historical evidence to evaluate completely new ideas.
Not sure what he means by “loose qualitative conclusions”.
He says that he can’t predict how long it will take an AI to solve various problems.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
I don’t know what the law of ‘Accelerating Change’ is and what exogenous means and what ontologically fundamental means and why not even such laws can break down beyond a certain point.
Oh well...I’ll give up and come back to this when I have time to look up every term and concept and decrypt what he means.
Some context:
He means that, because the inside view is weak, it cannot predict exactly how powerful an AI would foom, exactly how long it would take for an AI to foom, exactly what it might first do after the foom, exactly how long it will take for the knowledge necessary to make a foom, and suchlike. Note how three of those things I listed are quantitative. So instead of strong, quantitative predictions like those, he sticks to weak general qualitative ones: “AI go foom.”
He means, in this example anyway, that the reasoning “historical trends usually continue” applied to Moore’s Law doesn’t work when Moore’s Law itself creates something that affects Moore’s Law. In order to figure out what happens, you have to go deeper than “historical trends usually continue”.
I didn’t know what exogenous means when I read this either, but I didn’t need to to understand. (I deigned to look it up. It means generated by the environment, not generated by organisms. Not a difficult concept.) Ontologically fundamental is a term we use on LW all the time; it means at the base level of reality, like quarks and electrons. The Law of Accelerating Change is one of Kurzweil’s inventions; it’s his claim that technological change accelerates itself.
Indeed, if you’re not even going to try to understand, this is the correct response, I suppose.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
He’s not really giving up, he’s using a Roko algorithm again.
In retrospect I wish I would have never come across Less Wrong :-(
This is neither a threat nor a promise, just a question: do you estimate that your life would be improved if you could somehow be prevented from ever viewing this site again? Similarly, do you estimate that your life would be improved if you could somehow be prevented from ever posting to this site again?
I have been trying this for years now, but just giving up sucks as well. So I’ll log out again now and (try to) not come back for a long time (years).
My intuitive judgement of the expected utility of reading what Eliezer Yudkowsky writes is low enough that I can’t get myself to invest a lot of time on it. How could I change my mind about that? It feels like reading a book on string theory, there are no flaws in the math but you also won’t learn anything new about reality.
ETA That isn’t the case for all people. I have read most of Yvain’s posts for example because I felt that it is worth it to read them right away. ETA2 Before someone is going to nitpick, I haven’t read posts like ‘Rational Home Buying’ because I didn’t think it would be worth it. ETA3 Wow I just realized that I really hate Less Wrong, you can’t say something like 99.99% and mean “most” by it.
I thought it might help people to see exactly how I think about everything as I read it and where I get stuck.
I do try, but I got the impression that it is wrong to invest a lot of time on it at this point when I haven’t even learnt basic math yet.
Now you might argue that I invested a lot of time into commenting here, but that was rather due to a weakness of will and psychological distress than anything else. Deliberately reading the Sequences is very different here, because it takes an effort that is high enough to make me think about the usefulness of doing so and decide against it.
When I comment here it is often because I feel forced to do it. Often because people say I am wrong etc. so that I feel forced to reply.
I don’t know if it’s something you want to take public, but it might make sense to do a conscious analysis of what you’re expecting the sequences to be.
If you do post the analysis, maybe you can find out something about whether the sequences are like your mental image of them, and even if you don’t post, you might find out something about whether your snap judgement makes sense.
In Engelbart As UberTool? Robin Hanson talks about a dude who actually tried to apply recursive self-improvement to his company. He is still trying (wow!).
It seems humans, even groups of humans, are not capable of fast recursive self-improvement. That they didn’t take over the world might be partly due to strong competition from other companies that are constantly trying the same.
What is it that is missing that doesn’t allow one of them to prevail?
Robin Hanson further asks what would have been a reasonable probability estimate to assign to the possibility of a company taking over the world at that time.
I have no idea how I could possibly assign a number to that. I would just have said that it is unlikely enough to be ignored. Or that there is not enough data to make a reasonable guess either way. I don’t have the resources to take every idea seriously and assign a probability estimate to it. Some things get just discounted by my intuitive judgment.
I would guess that the reason is people don’t work with exact numbers, only with approximations. If you make a very long equation, the noise kills the signal. In mathematics, if you know “A = B” and “B = C” and “C = D”, you can conclude that “A = D”. In real life your knowledge is more like “so far it seems to me that under usual conditions A is very similar to B”. A hypothetical perfect Bayesian could perhaps assign some probability and work with it, but even our estimates of probabilities are noisy. Also, the world is complex, things do not add to each other linearly.
I suspect that when one tries to generalize, one gets a lot of general rules with maybe 90% probabilities. Try to chain a dozen of them together, and the result is pathetic. It is like saying “give me a static point and a lever and I will move the world” only to realize that your lever is too floppy and you can’t move anything that is too far and heavy.
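A quick back-of-the-envelope illustration (assuming, charitably, that the rules are even independent of each other):

    # If each of a dozen chained rules independently holds with probability 0.9,
    # the whole chain holds with probability 0.9 ** 12:
    print(0.9 ** 12)  # roughly 0.28

So even before accounting for correlations and non-linearities, a dozen links of 90% confidence leave you with barely better than a one-in-four chance that the full chain holds.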
In Fund UberTool?, Robin Hanson talks about a hypothetical company that applies most of its resources to its own improvement until it would burst out and take over the world. He further asks what evidence it would take to convince you to invest in them.
This post goes straight to the heart of Pascal’s mugging, vast utilities that outweigh tiny probabilities. I could earn a lot by investing in such a company if it all works as promised. But should I do that? I have no idea.
What evidence would make me invest money into such a company? I am very risk averse. Given my inability to review mathematical proofs and advanced technical proofs of concept, I’d probably be hesitant and fear that they are bullshitting me.
In the end I would probably not invest in them.
By “a hypothetical company that applies most of its resources to its own improvement” do you mean a tech company? Because that’s exactly what tech companies do, and they seem to be pretty powerful, if not “take over the world” powerful. And I do invest in those companies.
In Friendly Teams Robin Hanson talks about the guy who tried to get his company to undergo recursive self-improvement and how he was a really smart fellow who saw a lot of things coming.
Robin Hanson further argues that key insights are not enough but that it takes many small insights that are the result of a whole society of agents.
Robin further asks what it is that makes the singleton AI scenario more reasonable if it does not work out for groups of humans, not even remotely. Well, I can see that people would now say that an AI can directly improve its own improvement algorithm. I suppose the actual question that Robin asks is how the AI will reach that point in the first place. How is it going to acquire the capabilities that are necessary to improve its capabilities indefinitely?
f.lux and sleep aid follow-up: About a month or two ago, I posted on the open thread about some things I was experimenting with to get to bed regularly at a decent hour. Here are the results:
f.lux: I installed f.lux on my computer. This is a program that, through the course of the day, changes your display from blue light to red light, on the theory that the blue light from your computer keeps you awake. When I first installed it, the red tint to my screen was VERY noticeable. Anecdotally, I ended up feeling EXTREMELY tired right after installing it, and fell asleep within about half an hour, despite the fact that it was only 8:30p.
Now, the red tint to my screen at night is barely noticeable, which is good, but it doesn’t make me feel super-tired like it did the first night I installed it. I didn’t keep any quantitative measurements of my sleep patterns, so I can’t tell if f.lux has helped or not, but I think its possible effects may be negated by the extremely bright lights I have in my room. I definitely feel more tired when I turn those off, which I don’t normally do. (But I should...)
Verdict: I would recommend installing f.lux if you are having trouble getting yourself to bed on time, especially if you do a lot of computer usage at night. The results are iffy, but the cost is minimal (maybe 2 minutes of your time to download, and then it’s on your computer and you never have to worry about it again.)
Sleep Aid: I also tried taking a sleep aid (Diphenhydramine HCl- it’s non-addictive and OTC) about half an hour before I thought I should start going to bed. This worked great! It is much easier to think “I should go to bed soon-ish. I’ll take a sleep aid now,” than to think “I should stop everything I’m doing and get ready for bed NOW.” Once you’ve taken the pill, you’re pretty much committed to going to bed soon, plus it makes you nice and tired.
Unfortunately, I also ended up starting to have really bad headaches within a couple of days, so I stopped using them. When I used the pill at night, I would have a headache the next day. If I didn’t use the pill, I wouldn’t have a headache. I intend to try this again at some point with melatonin. I might also try again with the same pills, just to make sure that the first trial didn’t just happen to coincide with me being sick.
Verdict: It’s definitely effective, so give it a try, but recognize that they might cause headaches.
I have been wondering for a while: what’s the source for the A Human’s Guide to Words sequence? I mean EY had to come up with that somehow and unlike with the probability and cognitive science stuff, I have no idea what kind of books inspired A Human’s Guide to Words. What are the keywords here?
Eliezer had read Language in Thought and Action prior to writing this sequence, and he might have gotten some of it from Steven Pinker or the MIT Encyclopedia of the Cognitive Sciences as well.
If I understand correctly, it was partially inspired by general semantics.
Activity on these seems to be dying down, so my own reply to this comment is a poll.
Upvote this comment if you prefer the status quo of two open threads per month. Downvote it if you prefer to go back to one open thread per month.
Karma balance: do the opposite of whatever you did for the parent comment. (Not that it matters much, since this is a sock-puppet account!)
The Unreasonable Effectiveness of Data talk by Peter Norvig.
I’m often walking somewhere and I notice that I have a good amount of thinking time, but I find my head empty. Has anyone any good ideas on useful things to occupy my mind during such time? Visualisation exercises, mental arithmetic, thinking about philosophy?
It depresses me a little how much easier it is to make use of nothing but a pen and paper than it is to make do when even that is removed and one has only one’s own mind.
How often do you think in words, and how often in visuals, sounds, and so on? Do you normally think by picturing things, or engaging in an internal monologue, or what? Or is the distribution sort of even?
I’d say something like internal monologue, for thinking anyway (this may be internally sounded, I know that I think word-thoughts in my own voice, but I regularly think much faster than I could possibly speak, until I realise that fact, when the voice becomes slow and I start repeating myself, and then get annoyed at my brain for being so distracting).
For calculating or anything vaguely mathematical I use abstractly spatial/visual sorts of thoughts—abstract meaning I don’t have sufficient awareness of the architecture of my brain to tell you accurately what I even mean. Generally I’m not very visual, but I would say I use a spatial sort of visual awareness quite often in thought. If this makes sense.
Does this imply something about the sorts of tasks I could do that were most useful? I’m intrigued by the reasons you have for requesting the data you did. :)
I requested that data because for some reason, in my own experience, I’ve noticed the tendency you mentioned in your previous post as being strongest when I’m trying to avoid the internal monologue way of thinking.
If I try to avoid using words in my thought process, I often find myself walking around empty-headed for some reason. It’s as if it’s a lot harder to start a non-verbal thought, or something. I don’t know.
When walking around with a lot of thinking time on my hands, I’ve found a lot of success keeping myself occupied by simply saying words to myself and then seeing where it takes me. For example, I may vocalize in my head “epistemology”, or “dark arts”, or something like that, and then see where it takes me (making sure to start verbalizing my thought process if I stall at any point).
Maybe I’m on a different topic though. Are you simply asking what you should spend your time thinking about, and I’m going into the topic of how to start a thought process (whatever it is)? This seems like an unlikely interpretation though, because you said the problem is not having a pen and paper, which suggests to me that you know what to think about, but end up not doing anything if you can’t write or draw.
Sorry if this is pretty messy. I wanted to respond to this, but didn’t have much time.
I see, that’s interesting. That feels recognisable: I think when I hear my own voice/internal monologue, it brings to memory things I’ve already said or talked about, so I dwell on those things rather than think of fresh topics. So I think of the monologue itself as being the source of the stagnant thinking, and shut it down hoping insight will come to me wordlessly. Having said all that about having an internal monologue, I now think I do have a fair number of non-verbal thoughts, but these still use some form of mental labelling to organise concepts as I think about them.
That sounds like an interesting experiment to do; next time I need to travel bipedally I’ll get on to checking out those default conceptual autocompletes* that I get from different words. Thanks!
*Hoping I haven’t been presumptuous in my use of technical metaphors—in the course of writing this I’ve had to consciously rein in my desire to use programming metaphors for how my brain seems to work.
I suppose among the questions I was interested in, was indeed what I should spend my time thinking about. I had the idea that there must be high-computational-requiring and low-requisite-knowledge-requiring mental tasks, akin to how one learning electronics might spend time extrapolating the design of a one-bit adder with a pen and paper and requisite knowledge of logic gates. But crucially, without a pen and paper. So in what area can I use my pre-existing knowledge to productively generate new ideas or thoughts without a pen and paper. Possibly advancing in some sense my ‘knowledge’ of those areas at the same time.
Sidenote: I like reading detailed descriptions of people’s thought-processes like this, because of the interleaved data on what they pay attention to when thinking; and especially when there isn’t necessarily a point to it in the sequences-/narrative-/this post has a lesson related to this anecdote-style, and when it’s just describing the mechanics of their thought stream for the sake of understanding another brain. For some reason it feels like a rich source of data for me, and I would like to see more of it. Particularly because it feels like it gives insight on a slightly lower level than cognitive biases themselves. I sometimes think I use my micro-thought processes to evade or disrupt the act of changing my mind simply because they have the advantage of being on a lower level. A level that interacts with feelings, of which I seem to have many. Alternatively, my desire for detailed descriptions of people’s thought-processes might be down to my personality and not be something generally useful.
I’m an undergraduate student majoring in computer science. What career and subsequent studies should I aim for in order to be able to solve interesting and useful problems?
Did you folk see this one?
The Problem with ‘Friendly’ Artificial Intelligence - Adam Keiper and Ari N. Schulman.
Wow, something has gone horribly wrong if this is outsiders’ perception of FAI researchers.
The article Tim linked is a reply to another article that only quotes some of CFAI, so it’s possible that the author was only exposed to the quotations from CFAI in that article.
Universal power switch symbols are counter-intuitive. A straight line ends. It doesn’t go anywhere. It should mean “stop.” A circle is continuous and should mean “on”. A line penetrating a circle has certain connotations that means keep it going (or coming) but definitely not “standby”. How can we change this?
Polyamory: if anyone is interested in my notes ( http://dl.dropbox.com/u/5317066/2012-gwern-polyamory.txt ), I’ve updated them with a big extract from Anapol 2010 - apparently she noticed a striking frequency of Asperger’s in polyamory circles. Of course LW has never been accused of hosting very many of those...
The Poly-English Dictionary may need updating.
I think I just got converted. I’m willing to sleep with lots of people so long as it means I get to hang out with lots of nerds and discuss fantasy books. Hang on… how many females are there in this community? 3?
Are there any good examples of what would be considered innate human abilities (cognitive or otherwise) that are absent or repressed in an entire culture?
For example, are there examples of culture-wide face-blindness/prosopagnosia? Are there examples of cultures that can’t apply the Gaze heuristic, or can’t subitize?
This is for reasoning about criticisms to universal grammar, in particular the lack of recursion in the Pirahã language, so that one is kind of begging the question. The closest I can come up with at the moment (which really isn’t very close at all) is the high incidence of perfect pitch amongst native speakers of tonal languages.
A vague discussion of AI risks has just broken out at http://marginalrevolution.com/marginalrevolution/2012/03/amazing-bezos.html#comments Marginal Revolution gets a lot of readers who are roughly in the target demographic for LW—anyone fancy having a go at making a sensible comment in that thread that points people in the right direction?
Richard Carrier’s book looks like it’s going to spread the word of Bayes. To the theists, too. And there’s a media-friendly academic fight in progress. Just the thing!
Any recommendations for books/essays on contemporary hermeneutics whose authors are aware of Schellingian game theory and signalling games? Google Scholar has a few suggestions but not many and they’re hard to access.
Would it be useful to make a compressed version of the Sequences, at the ratio one Sequence into one article which is approximately one article into one paragraph? It would provide an initial information for people who would like to read the Sequences, but do not have enough time. Each paragraph would be followed by a “read more” hyperlink to the original article.
There are summary posts like this, but if you’re thinking about a more coherent presentation, “one article into one paragraph” probably won’t work.
A proposal: make public an anonymised dataset of all Karma activity over an undisclosed approximate three-month period from some point in the past 18 months.
What I would like is a list of anonymised users, a list of posts and comments in the given three-month period (stripped of content and ancestry, but keeping a record of authorship), and all incidents of upvotes and downvotes between them that took place in the given period. This is for purposes of observing trends in Karma behaviour, and also sating my curiosity about how some sort of graph-theoretic-informed equivalent of Karma (kind of like Google PageRank) might work. I would also be curious to see what other data-types might make of it.
What good reasons are there for not making this data available?
Someone has to go to the trouble of pulling it from the database. I would personally be prepared to pay up to $13.50 for your time and effort. I would also be surprised if someone hasn’t at least sneaked a peek at this data already, because it’s kind of interesting.
Violation of LW user privacy. The biggie, really. It’s possible that a tenacious individual could use this data to deduce the voting habits of specific users. I’ve been thinking about how I might go about doing this if given the data in question, which informed the “approximate three months at some point in the past eighteen months” time frame. Without timestamps or details of comment ancestry, and without knowing the exact length of the snapshot period, I suspect anyone trying to extract this information would struggle enormously.
I am fascinated by how people would try to accomplish this, though, so please tell me how you’d go about it. My personal method would be to scrape the site to build up a record of post and comment authorship over time. Any given period would then have a “fingerprint” of authors to number of posts that you could compare against the dataset. This becomes harder, but not impossible, with a time period of unspecified length. This could be mitigated by the data being deliberately sabotaged prior to publication, in such a way that confounds this method while still keeping the broader trends available for analysis.
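To make the attack concrete, a comparison along those lines might look roughly like this. The data structures are my own invention for the sake of the sketch (lists of (author, post) pairs per candidate window), not anything LW actually exposes:

    from collections import Counter

    # 'anon_posts' is the anonymised dataset as (author_id, post_id) pairs;
    # 'scraped_windows' maps each candidate 3-month window to the
    # (author, post_id) pairs scraped from the public site.

    def fingerprint(posts):
        # Sorted per-author post counts; the author labels themselves don't matter.
        return sorted(Counter(author for author, _ in posts).values(), reverse=True)

    def best_matching_window(anon_posts, scraped_windows):
        target = fingerprint(anon_posts)

        def distance(window):
            candidate = fingerprint(scraped_windows[window])
            n = max(len(target), len(candidate))
            a = target + [0] * (n - len(target))
            b = candidate + [0] * (n - len(candidate))
            return sum(abs(x - y) for x, y in zip(a, b))

        # The window whose authorship pattern most closely matches the dataset.
        return min(scraped_windows, key=distance)

Once the window is pinned down, the same per-author counts go a long way toward matching anonymised IDs back to usernames, which is why sabotaging the counts slightly before release would help.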
Any other concerns people would have with this? Alternatively, any awesome things they’d like to do with the data?
Is the LW database structure available? If yes, you could prepare some SELECT queries and ask admins to run them for you and send you the result.
Anonymization: Replace user ids with “f(id+c)” where “f” is a hash function and “c” is a constant that will be modified by the admin before running your script. Replace times of karma clicks with “ym(time+r)” where “r” is a random value between 0 and 30 days, and “ym” is a function that returns only month and year. Select only data from the most recent year, and only from users who were active during the whole year (made at least one vote in the first and last months of the time period). Would such data still be useful to you?
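For concreteness, the scheme above might look something like this in code. This is only a sketch: the choice of SHA-256 for f, the secret constant, and the function names are placeholders that an admin would pick:

    import hashlib
    import random
    from datetime import timedelta

    SECRET_CONSTANT = "admin-chosen-value"  # hypothetical; changed and kept by the admin

    def anonymise_user_id(user_id):
        # f(id + c): a one-way hash of the user id concatenated with the secret constant.
        return hashlib.sha256((str(user_id) + SECRET_CONSTANT).encode()).hexdigest()

    def coarsen_vote_time(vote_time):
        # ym(time + r): shift by a random offset of up to 30 days,
        # then keep only the year and month.
        shifted = vote_time + timedelta(days=random.uniform(0, 30))
        return (shifted.year, shifted.month)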
My day job is DB admin and development. In the unlikely event of LW back-end admin-types being comfortable running a query sent in by some dude off the site, I wouldn’t be comfortable giving it to them. The effort of due diligence on a foreign script is probably greater than that required to put it together.
The data I want correspond to:
the IDs (i.e. primary key, not the username) of all the users
the IDs (PK) and authorship (user ID) of all posts and comments in a contiguous ~3 month period
the adjacency of users and posts as upvotes and downvotes over this period (I assume this is a single junction table)
If I were providing this data, I would also scramble the IDs in some fashion while maintaining the underlying relationships, as consecutive IDs could provide some small clue as to the identity and chronology of users or posts. While this is pretty straightforward, the mechanism for such scrambling should not be known to recipients of the data.
Is there a term in many-party game theory for a no-win, no-lose scenario; that is where by sacrificing a chance of winning you can prevent losing (neutrality or draw)?
I don’t know any game theory terms, but in law, there’s the high-low agreement, where the plaintiff agrees that the maximum exposure is X, and the defendant agrees that the minimum exposure is Y (a lower number). It aims to reduce the volatility of trial.
Jane McGonigal’s new project SuperBetter may be useful to you as an incentive framework for self-improvement.
I’ve been using the Epic Win iPhone app as an organizer, task reminder and somewhat effective akrasia-defeater for about a year now, and think it has helped me quite a bit. SuperBetter is similar, but has more aspects, and is not portable (for now). I anticipate that I will prefer Epic Win’s simplicity and accessibility to SuperBetter.
The Essence Of Science Explained In 63 Seconds
A one minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and most importantly brief intro level thinking about our physical world. Apologies if it has been linked to before, especially since I can’t say I would be surprised if it was.
Is it possible to increase our computational resources by putting ourselves in a simulation run in such a way as to not require as much quantum wave function collapse to produce a successive computational state?
Something I would quite like to see after looking at this post: a poll of LW users’ stances on polarised political issues.
There are a whole host of issues which we don’t discuss for fear of mindkilling. While I would expect opinion to be split on a lot of politically sensitive subjects, I would be fascinated to see if the LW readership came down unilaterally on some unexpected issue. I’d also be interested to see if there are any heavily polarised political issues that I currently don’t recognise as such.
Why would this be a bad idea?
I would be astonished if one result of such a poll was not quite a lot of discussion of the polarized political issues that we don’t discuss for fear of mindkilling. Whether that’s a bad thing or not depends on your beliefs about such discussion, of course.
Also, if what you’re interested in is (a) issues where we all agree, and (b) issues you don’t think of as polarized political issues in the first place, it seems a poll is neither necessary nor sufficient for your goals. For any stance S, you can find out whether S is in class (a) by writing up S and asking if anyone disagrees. And no such poll will turn up results about any issue the poll creator(s) didn’t consider controversial enough to include in the poll.
That said, I’d be vaguely interested (not enough to actually do any work to find out) in how well LW users can predict how popular various positions are on LW, and how well/poorly accuracy in predicting the popularity of a position correlates with holding that position among LW users.
How I imagined it going:
0) Prohibit actual discussion of the subjects in question, with the understanding that comments transgressing this rule would be downvoted to oblivion by a conscientious readership (as they generally are already)
1) Request suggestions for dichotomies that people believe would split popular opinion. Let people upvote and downvote these on the basis of whether they’d be fit for the purpose of the poll.
2) Take the most popular dichotomies and put them in a poll, with a “don’t care” and “wrong dichotomy” option, which I hope are fairly self-explanatory.
2) a) To satisfy your curiosity on how well LW users can predict the beliefs of other LW users, also have a “what do you think most LW users would pick as an answer to this question?” option.
3) Have people vote, and see what patterns emerge.
Does anyone know much about general semantics? Given the very strong outside-view similarities between it and Less Wrong, not to mention the extent to which it directly influenced the Sequences, it seems like its history could provide some useful lessons. Unfortunately, I don’t really know that much about it.
EDIT: disregard this comment, I mistook general semantics for, well, semantics.
I’m no expert on semantics but I did take a couple of undergrad courses on philosophy of language and so forth. My impression was that EY has already taken all the good bits, unless you particularly feel like reading arguments about whether a proposition involving “the current king of France” can have a truth value or not. (actually, EY already covered that one when he did rubes and bleggs).
In a nutshell, the early philosophers of language were extremely concerned about where language gets its meaning from. So they spent a lot of time talking about what we’re doing when we refer to people or things, eg. “the current king of France” and “Sherlock Holmes” both lack real-world referents. And then there’s the case where I think your name is John and refer to you as such, but your name is really Peter, so have I really succeeded in referring to you? And at some point Tarski came up with “snow is white” is a true proposition if and only if snow is white. And that led into the beginning of modern day formal/compositional semantics, where you have a set of things that are snow, and a set of things that are white, and snow is white if and only if the set of things that are snow overlaps completely with the set of things that are white.
I see. Do you know much about the history of it as a movement? While I do have some interest in the actual content of the area, I was mostly looking at it as a potential member of the same reference class as LW. Specifically, I was wondering if its history might contain lessons that are generally useful to any organization that is trying to improve people’s thinking abilities. Particularly those that have formed a general philosophy based off of insights gained from cross-disciplinary study.
My apologies, I went off in completely the wrong direction there. I don’t know too much of it as a movement, other than that all the accounts of it I’ve seen make it sound distinctly cultish, and that the movement was carried almost entirely by Korzybski and later by one of his students.
I was and am very influenced by Stuart Chase’s The Tyranny of Words—what I took away from it is to be aware that you never have the complete story, and that statements frequently need to be pinned down as to time, place, and degree of generality.
Cognitive psychology has a lot of overlap with general semantics—I don’t know whether there was actual influence or independent invention of ideas.
I just thought of a way to test one of my intuitions about meta-ethics, and I’d appreciate others thoughts.
I believe that human morality is almost entirely socially constructed (basically an anti-realist position). In other words, I think that the parts of the brain that implement moral decision-making are incredibly plastic (at least at some point in life).
Independently, I believe that behaviorism (i.e. the modern psychological discipline descended from classical conditioning and operant conditioning) is just decision theory plus an initially plastic punishment/reward system.
In short, if behaviorism makes false predictions of human behavior—the same error in different eras and cultures—then that seems like evidence that my plasticity based meta-ethics theory is wrong.
Does anyone see any holes in that logic? Is anyone aware of examples in which behaviorism has failed to accurately predict in the ways I have described?
I think I have seen offers to help edit LW posts, but can’t remember where. Does anyone know what I may be thinking of?
People have irrational beliefs. When people come to lesswrong and talk about them, many say “oops” and change their mind. However, often they keep their decidedly irrational beliefs despite conversation with other Lesswrongers who often point out where they went wrong, and how they went wrong, and perhaps a link to the Sequence post where the specific mistake is discussed in more detail.
Some examples:
http://lesswrong.com/user/Jake_Witmer/
This guy was told he was being Mindkilled. Many people explained to him what was wrong with his thinking, and why it was wrong, and how it was wrong, and what he could do, and all manner of helpful advice and discussion. He rejected it, left the site and hasn’t been seen since.
Another: http://lesswrong.com/user/911truther/
Not much to say. Eliezer the Wise and Always Correct himself declared him a troll.
Another: http://lesswrong.com/user/sam0345/
Generally pretty irrational dude. Asked to leave lesswrong by the powerful and great Eliezer because his comments were so bad.
Another, different example:
http://lesswrong.com/lw/1lv/the_wannabe_rational/
MrHen had great insights into rationality and seemed to be a well upvoted member of lesswrong. He also believed in God. He was around in 2010 and again in 2011 for a bit, and hasn’t posted in a while now.
Perhaps a more controversial example:
http://lesswrong.com/user/Mitchell_Porter/?count=50&after=t1_5tl5
Around Feb 3rd Mitchell Porter brought a debate about colour, the mind, dualism, and similar thoughts. I’m actually not sure if this was resolved, but there seemed to be some small consensus (kinda) that he was taking the Crackpot Offer. This was suggested to him.
Mitchell Porter is still around, and is an active user who seems to have lots of useful insights into many things. He is very well upvoted.
To any of these people, I am sorry for mentioning you guys like this if you are offended or anything like that.
So why am I bringing this up?
Well, people fail at being rational all the time. However, there are countless examples like these, from people who turned up, got insanely downvoted, then left, and regular users who otherwise get lots of karma and are very rational.
The main thing I wanted to do was just POINT IT OUT and see if anyone wants to comment on the fact that this happens, in LessWrong, surely the place where they are MOST likely to see why and how they are wrong.
What does this mean that so many people do not? What does it mean that such failures happen so often that I could choose random examples off the top of my head? I mean, some of the things it means are obvious, but this pains me and I need it to discussed somewhere because I find it important and I think that more people should be aware that this happens and should make more concerned, perhaps vapid comments about it.
Also, thinking of upgrading to discussion post. Tell me if that’s a bad idea.
If you have read this, please tell me what you think.
Half the people you listed were insanely rude in pretty much every single comment they posted.
Jake Witmer was pretty much accusing everyone who downvoted him of communism.
911truther deliberately chose a provocative name and kept wailing in every single post about the downvotes he received (which of course caused him to get more downvotes).
sam0345′s main problem wasn’t that he was irrational, it was that he was an ass all the time.
But I don’t even know why you chose to list the above as belonging to the same category with decent people like Mitchell_Porter and MrHen, people who don’t follow assholish tactics, and are therefore generally well received and treated as proper members of the community, even if occasionally downvoted (whether rightly or wrongly). As you yourself saw.
The main problem with half the people you listed was that they were assholes, not that they were wrong. If people enjoy being assholes, if their utility function doesn’t include a factor for being nice to people, how do you change that with mere unbiasing? Not caring about whether you treat others nicely or nastily has to do with empathy, not with intellectual power.
The rudeness wouldn’t help with the downvotes, I can understand that.
But the factor that I was pointing out, and the common factor for my grouping them together was the lack of being able to say “oops”. I am sorry, I didn’t make it very clear. Thus why I listed the assholes with nice people.
MrHen left LessWrong believing in a God, and Mitchell_Porter (as far as I can tell) still believes dualism needs to be true if colour exists (or whatever his argument was; I’m embarrassing myself by trying to simplify it when I had a poor understanding of what he was trying to say). They were/are also great rationalists apart from that, and they both make sure to be very humble in general while on the site.
The other 3 were often rude, but the main reason I pointed them out was their lack of ability to say “oops” when their rational failings were pointed out to them. Unlike the other two, they then proceeded to act very douchey until driven from the site, but their first posts are much less abrasive and rude.
In general though, if they aren’t going to work out that they are wrong at LessWrong, where are they going to?
Some of these people may work it out with time, and it may be unreasonable to expect them to change their mind straight away.
But this should show at least how difficult it is for an irrational person to attempt to become more rational; it’s like having to know the rules to play the rules.
What does it take to commit to wanting rationality from a beginning of irrationality?
These examples show the existence of people on LessWrong who aren’t rational, and while that isn’t a surprise, I feel like the LessWrong community should perhaps learn from the failings of some of these people, in order to better react to situations like this in the future, or something. I don’t know.
In any case, thank you for replying.
Compartmentalization.
Bold statement that somehow still seems true: Most LessWrongers probably have a belief of comparable wrongness. MrHen is just unlucky.
The argument is that for dualism not to be true, we need a new ontology of fundamental quantum monads that no-one else quite gets. :-) My Chalmers-like conclusion that the standard computational theory of mind implies dualism, is an argument against the standard theory.
Deciding that being less wrong than I am now is valuable, realizing that doing what I’ve been doing all along is unlikely to get me there, and being willing to give up familiar habits in exchange for alternatives that seem more likely to get me there. These are independently fairly rare and the intersection of them is still more so.
This doesn’t get me to wanting “rationality” per se (let alone to endorsing any specific collection of techniques, assumptions, etc., still less to the specific collection that is most popular on this site), it just gets me looking for some set of tools that is more reliable than the tools I have.
I’ve always understood the initial purpose of LW to be to present a specific collection of tools such that someone who has already decided to look can more easily settle on that specific collection (which, of course, is endorsed by the site founder as particularly useful), at-least-ostensibly in the hope that some of them will subsequently build on it and improve it.
Getting someone who isn’t looking to start looking is a whole different problem, and more difficult on multiple levels (practical, ethical, etc.).
You need some initial luck. It’s like the human mind is a self-modifying system, where the rules can change the rules, and again, and again. Thus the human mind is floating around in a mindset space. The original setting is rather fluid, for evolutionary reasons—you should be able to join a different tribe if it becomes essential for your survival. On the other hand, the mindset space contains some attractors; if you happen to have some set of rules, these rules keep preserving themselves. Rationality could be one of these attractors.
Is the inability to update one’s mind really so exceptional on LW? One way of not updating is “blah, blah, blah, I don’t listen to you”. This happens a lot everywhere on the internet, but for these people LW is probably not attractive. The more interesting case is “I listen to you, and I value our discussion, but I don’t update”. This seems paradoxical. But I think it’s actually not unusual… the only unusual thing is the naked form—people who refuse to update, and recognize that they refuse to update. The usual form is that people pretend to update… except that their updates don’t fully propagate. In other words, there is no update, only belief in update. Things like: yeah, I agree about the Singularity and stuff, but somehow I don’t sign up for cryopreservation; and I agree human lives are valuable and there are charities which can save a hundred human lives for every dollar sent to them, but somehow I haven’t sent a single dollar yet; and I agree that rationality is very important and being strategic can increase one’s utility, and then I procrastinate on LW and other web sites and my everyday life goes on without any changes.
We are so irrational that even our attempts to become rational are horribly irrational, and that’s why they often fail.
Absolutely nothing. Your sample suffers from selection bias: it is all the worst examples you can think of. Please don’t make a discussion post about this.
Not really. He had major problems with his tone though.
Recommendations for a book/resource on comparative religion/mythology, ideally theory-laden and written by someone with good taste for hermeneutics? Preferably something that doesn’t assume that gods aren’t real. (I’m approaching the subject from the Gaimanian mythological paradigm, i.e. something vaguely postmodern and vaguely Gods Need Prayer Badly, but that perspective is only provisional and I value alternative perspectives.)
I mean, the classic is Joseph Campbell and The Hero with a Thousand Faces. There’s also The Masks of God and other books by him.
It’s not book-length, but Eric S. Raymond’s Dancing With the Gods treats them as, at least, intersubjectively real.
I’ve read it. ESR is… a young soul, hard for me to learn from.
Thanks yo, will read.
What’s your empirical definition of god here?
Not what you’re asking for, but possibly interesting: A World Full of Gods: An Inquiry into Polytheism, a polytheistic theology. The author said it was the first attempt at such.
This review has enough quotes that you should be able to see whether you want to read it.
[Weird irrational rant]
A week and a half ago, I either caught some bug or went down with food poisoning. Anyway, in the evening I suddenly felt like shit and my body temperature jumped to 40C. My mom gave me some medicine and told me to try and get some sleep. My state of mind felt a bit altered, and I started praying fervently to VALIS. My Gnostic faith has been on and off for the last few years, but in that moment, I was suddenly convinced that it was a test of some sort, and that the fickle nature of reality would be revealed to me if I wouldn’t waver in my belief. I felt that it’s a point where my life could change, possibly for the better.
Therefore, I thought of and swore three oaths: an oath of scholarship—to obtain both rational and subjective (“spiritual”) knowledge, and use it to search for truth; an oath of compassion—to treat all deserving beings with kindness and fairness, and to oppose evil with a healing word rather than hatred; and an oath of evangelism—to seek out fellow nutjobs who would be interested in this woo, and try to convert them. Hence this comment.
I kept praying for two hours or so, then slept for two more hours, and when I woke up I felt completely normal. The doctor came by later that day and found nothing wrong with me. I need to reflect on the whole thing more thoroughly. Anyway, I now believe with more certainty than before that there’s a benevolent entity (which I’ll keep calling VALIS, although she’s better known as St. Sophia) acting in the simulation around us, and that it influences minds and events subtly, helping a fallen spark from outside the simulation that is within us to break free of its bondage.
If you felt that this comment is worthless, yeah, I guess it’s hardly in line with LW’s goals. But maybe someone will feel sympathetic. Hmm, perhaps I should really have a serious discussion with Will_Newsome about it all. From what I’ve seen of his posts on Catholicism, he seems to hold the opposing view, worshipping what the Gnostics would call the Demiurge. But at least he’d ponder these matters seriously.