Please consider a few gremlins that are weighing down LW currently:
Eliezer’s ghost—He set the culture of the place, his posts are its central material, he punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) has failed. As far as I know he is still the “owner” of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner; is it them?), who the moderators are, or who is working on it in general. I know Trike Apps are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.
the no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we’re told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough, I recently saw SSC linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed SSC, it may actually be drawing in people who care about genuine insight into an extremely complex, high-interest space.
the “original content”/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a “community blog”. In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.
The codebase. This website carries tons of complexity inherited from the Reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to (Telescope seems decent these days).
Brand rust. LessWrong is now kinda like Myspace or Yahoo. It used to be cool, but once a brand takes a turn for the worse, it’s really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support as possible from as many interested parties as possible, but only to the extent that they don’t dilute the focus.
In the spirit of the above, I consider Alexei’s hints that Arbital is “working on something” to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future (that still suffers from problem #1 AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.
Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a plan clearly. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I’m not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first site (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.
I think if you want to unify the community, what needs to be done is the creation of an HN-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it’s something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and the broadest possible support for a simple, well-articulated vision. LessWrong tried to be too many things with too little in the way of backing.
Re: 1, I vote for Vaniver as LW’s BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he’s been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)
Anyone want to join me in this, or else make a counterproposal?
Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he’s up for it. His posts here also show a strong understanding of the problems affecting LessWrong on multiple fronts.
Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of “who”, but I wonder how much weight there will be behind this person. Where would the BDFL’s authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.
I’m empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift (e.g. to talk to Nate/Eliezer/MIRI, and Matt Fallshaw, who runs Trike Apps).
I like the idea of granting domain ownership if we in fact go down the BDFL route.
An additional point is that the dictator can indeed quit and is not forced to kill themselves to get out of it. So it’s actually not FL. And in fact, it’s arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous to make a point. What’s yours?
I’ll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the “villains” of decision making in Decisive, a book on decision making he suggested to me.) Plus, I scanned this thread and I haven’t seen Vaniver say he is okay with such a role.
I think Vaniver would agree that considering other candidates too would be a wise choice.
I do agree; one of the reasons why I haven’t accepted yet is to give other people time to see this, think about it, and come up with other options.
(I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)
I’m concerned that we’re only voting for Vaniver because he’s well known
Also because he already is a moderator (one of a few moderators), so he has already been trusted with some power, and here we are just saying that it seems okay to give him more. And because he already did some useful things while moderating.
Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect.)
I do. I was a product manager for about a year, then a founder for a while, and am now manager for a data science team, where part of my responsibilities is basically product management for things related to the team.
That said, I don’t think I was great at it, and suspect most of the lessons I learned are easily transferred.
Edit: I actually suspect that I’ve learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.
I’ve done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io), and can offer some help in re-imagining the vision behind LW.
On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like “discussions on any topic, but with extremely high intellectual standards”. Some ideas:
In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. “agree with the conclusion but disagree with the argument”, or “accurate points, but ad-hominem tone”.
A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
Site erring on the side of being over-opinionated. It doesn’t necessarily need to be the community hub
Votes from highly-voted users count for more (one possible weighting is sketched after this list).
Integration with predictionbook or something similar, to show a user’s track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on “here is how I solved a problem I or other people were struggling with”
No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
Better notifications around new posts, or new comments in a thread. E.g. I usually want to see all replies to a comment I’ve made, not just the top level
Built-in argument mapping tools for comments
Shadowbanning, a la Hacker News
Initially restricted growth, e.g. by invitation only
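On the “votes from highly-voted users count for more” idea, here is a minimal sketch of one way the weighting could work. The log-scaling and all names here are illustrative assumptions of mine, not an existing LW mechanism:

    import math

    def vote_weight(voter_karma):
        # Weight a vote by the voter's karma, log-scaled so that high-karma
        # users count for more without dominating outright.
        # (Assumption: 1 + log10; any monotone damping function would do.)
        return 1.0 + math.log10(max(voter_karma, 1))

    def post_score(votes):
        # votes: list of (direction, voter_karma) pairs,
        # where direction is +1 for an upvote and -1 for a downvote.
        return sum(d * vote_weight(k) for d, k in votes)

    # Two 10-karma upvotes vs. one 10000-karma downvote:
    print(post_score([(+1, 10), (+1, 10), (-1, 10000)]))  # 4.0 - 5.0 = -1.0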
“Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. “agree with the conclusion but disagree with the argument”, or “accurate points, but ad-hominem tone”.”—this seems complex and better done via a comment
Some sort of emoticon could work, like what Facebook does.
Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don’t want to take the time to provide a quick comment, but personally I think that’s silly, as a 10-second comment could help a lot in many cases. If there is a possibility for a 1-second feedback method that allows a little more information than up or down, I think it’s worth trying.
Integration with predictionbook or something similar, to show a user’s track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook, Metaculus, and Good Judgement Open come to mind immediately, and that’s not considering the various prediction markets too.
I’ve started writing a command-line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy, so integration with LessWrong should not be particularly difficult. (The API for Metaculus is not documented, as best I can tell, but by snooping around the code you can figure things out...)
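Since integration came up: a sketch of what pulling a user’s track record from PredictionBook might look like in Python. The endpoint path, parameter, and field names below are assumptions from poking around the site, not documented guarantees, so check them against the actual API first:

    import requests

    PB_BASE = "https://predictionbook.com"  # assumed base URL

    def fetch_predictions(api_token):
        # The '/api/predictions.json' path and 'api_token' parameter
        # are assumptions, not documented guarantees.
        resp = requests.get(PB_BASE + "/api/predictions.json",
                            params={"api_token": api_token}, timeout=10)
        resp.raise_for_status()
        return resp.json()

    def track_record(predictions):
        # Fraction of judged predictions that came true.
        # Treating 'outcome' as the judged result is likewise an assumption.
        judged = [p for p in predictions if p.get("outcome") is not None]
        if not judged:
            return float("nan")
        return sum(1 for p in judged if p["outcome"]) / len(judged)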
It’s a thumbs-up that is in the lower left corner of a comment or post (next to a thumbs-down). It looks like the top of these two thumbs-ups (or the bottom one after you’ve clicked it): [image omitted]
If you don’t see it, it may be that they’ve turned off voting for new or low-karma accounts.
Ya, that must be it. I’ve been on here for like 3 years (not with this account, though), but only after the diaspora. Really excited that things are getting posted again. One major issue with such a system is that I now feel pressure to post popular content. A major feature of this community is that nothing is dismissed out of hand: you can propose anything you want, so long as it’s supported by a sophisticated argument. The problem with only giving voting privileges to >x karma accounts is that people like myself will feel a pressure to post things that are generally accepted.
Now, to be clear, I’m not opposed to such a filter. I’ve personally noticed, for example, that slatestarcodex doesn’t have the same consistently high-quality comments as LessWrong; people will leave comments like “what’s falsification?” etc. So I acknowledge that such a filter might be useful. At the same time, however, I’m pointing out one potential flaw with such a filter: it lends itself to creating an echo chamber.
I think you’re right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.
I do think it’s important to have someone clearly “running the place”. A BDFL, if you like.
Please no. The comments on SSC are for me a case study in exactly why we don’t want to discuss politics.
Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. “Auto-aggregation” would be bad however.
Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri’s suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.
I don’t believe that the basilisk is the primary reason for LW’s brand rust. As I see it, we squandered our “capital outlay” of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in “debating philosophy” who do not even read the sequences (Eternal November). I, personally, almost completely stopped commenting quite a while ago, because doing so is no longer rewarding.
doing essentially nothing about a large influx of new users interested only in “debating philosophy” who do not even read the sequences (Eternal November).
This is important. One of the great things about LW is/was the “LW consensus”, so that we don’t constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the “LW consensus”, but then, I don’t think anyone entirely did except Eliezer himself. When I say “the basics”, I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying “But what if nothing is real?”, we don’t have to debate them. That’s the sort of thing it’s useful to just downvote (or otherwise discourage, if we’re making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say “read the sequences”, but seriously, it saved a lot of trouble.
There were occasional interesting and original objections to the basics. I can’t find it now but there was an interesting series of posts responding to this post of mine on Savage’s theorem; this response argued for the proposition that no, we shouldn’t use probability (something that others had often asserted, but with much less reason). It is indeed possible to come up with intelligent objections to what we consider the basics here. But most of the objections that came up were just unoriginal and uninformed, and could, in fact, correctly be answered with “read the sequences”.
That’s the sort of thing it’s useful to just downvote (or otherwise discourage, if we’re making a new system), no matter how nicely it may be said, because no productive discussion can come of it.
When it’s useful, it’s useful; when it’s damaging, it’s damaging. It’s damaging when the sequences don’t actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It’s just too easy to silently downvote, or to write “read the sequences”. In an alternative universe there is a LW where people don’t say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern-matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that’s where the damage is coming from.
Unfortunately, although all of that is fixable, it cannot be fixed without “debating philosophy”.
ETA:
Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That’s a big project which would provide focus, and send a costly signal of anti-cultishness, because cults don’t revise doctrine.
I’m not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can’t be done without community effort, and without a rethink of the status of the sequences.
I think the basilisk is at least a very significant contributor to LW’s brand rust. In fact, guilt by association with the basilisk via LW is the reason I don’t like to tell people I went to a CFAR workshop (because rationality → “those basilisk people, right?”)
Reputations seem to be very fragile on the Internet. I wonder if there’s anything we could do about that? The one crazy idea I had was (rot13’d so you’ll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.
Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh’yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
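(If you’d like to read the rot13’d ideas above after trying your own, Python ships a rot13 codec, so decoding is a one-liner; the string here is just the start of the first spoiler:)

    import codecs

    spoiler = "znxr n fvgr jurer nyy qvfphffvba vf cevingr"  # first clause only
    print(codecs.decode(spoiler, "rot13"))  # rot13 is its own inverse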
As opposed to what? Memorising the One true Philosophy?
The quotes signify that they’re using that specifically as a label; in context, it looks like they’re pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There’s a sort of philosophy that wants to endlessly hash out the big questions, and there’s another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured.
I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn’t always useful to manufacture difficulty as an opportunity to display intelligence.
What I have in mind there is basically ‘approaching philosophy like a scientist’, and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more ‘philosophical’ than not:
How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:
How to determine causality from observational data;
What causality is is the correct question.
where the perception that humans have free will comes from;
Whether humans have the power of free will is the correct question.
where human moral intuitions come from.
Whether human moral intuitions are correct is the correct question.
Oh, if you count that one as a question, then let’s call that one solved too.
Whether humans have the power of free will is the correct question.
Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.
Whether human moral intuitions are correct is the correct question.
Disagree for roughly the same reason; the question of where the word “correct” comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.
Oh, if you count that one as a question, then let’s call that one solved too.
Solved where?
Whether humans have the power of free will is the correct question.
Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.
How can philosophers be systematically wrong about the nature of their questions? And what makes you right?
Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that.
Whether human moral intuitions are correct is the correct question.
Disagree for roughly the same reason; the question of where the word “correct” comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.
If moral intuitions come from God, that might underpin correctness, but things are much less straightforward in naturalistic explanations.
On one level, by the study of dynamical systems and the invention of differential equations.
On a level closer to what you meant when you asked the question, most of the confusing things about ‘causality’ are actually confusing things about the way our high-level models of the world interact with the world itself.
The problem of free will is a useful example of this. People draw this picture that looks like [universe] → [me] → [my future actions], and get confused, because it looks like either determinism (the idea that [universe] → [my future actions] ) isn’t correct or the intuitive sense that I can meaningfully choose my future actions (the idea that [me] → [my future actions] ) isn’t correct.
But the actual picture is something like [universe: [me] → [my future actions] ]. That is, I am a higher-level concept in the universe, and my future actions are a higher-level concept in the universe, and the relationship between the two of them is also a higher-level concept in the universe. Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn’t a real conflict between them. (The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism; if I had perfect self-knowledge, I wouldn’t have any uncertainty about my future actions, but I don’t have perfect self-knowledge. It also comes from the relative importance of decision-making as a ‘natural concept’ in the whole ‘being a human’ business.)
And so when philosophers ask questions like “When the cue ball knocks the nine ball into the corner pocket, what are the terms of this causal relation?” (from SEP), it seems to me like what they’re mostly doing is getting confused about the various levels of their models, and mistaking properties of their models for properties of the territory.
That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that’s that. It’s only by going to higher level models that things like ‘cause’ and ‘effect’ start to become meaningful, and different modeling choices lead to different forms of cause and effect.
Now, there’s an underlying question of how my map came to believe the statement about the territory that begins the previous paragraph, and that is indeed an interesting question with a long answer. There are also lots of subtle points, such as the interesting fact that we don’t really need an idea of counterfactuals to describe the universe and its dynamical equations, but we do need one to describe higher-level models of the universe that involve causality. But as far as I can tell, you don’t get the main point right by talking about causal relata, and you don’t get much out of talking about the subtle points until you get the main point right.
To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach to philosophy is better: typically I see a crisp and correct model that, if accepted, obsoletes other claims almost accidentally. If you accept the [universe: [me] → [my future actions] ] model of free will, for example, then nearly everything written about why determinism is correct / incorrect or free will exists / doesn’t exist is just missing the point and is implicitly addressed by getting the point right, and explicitly addressing it looks like repeating the point over and over again.
This is also where the sense that they’re wrong about questions is coming from; compare to Babbage being surprised when an MP asked if his calculator would give the right output if given the wrong inputs. If they’re asking X, then something else is going wrong upstream, and fixing that seems better than answering that question.
Oh, if you count that one as a question, then let’s call that one solved too.
Solved where?
On one level, by the study of dynamical systems and the invention of differential equations.
Nope. On most of the detailed questions a philosopher might want to ask about causality, physics comes down firmly on both sides. Physics is not monolithic.
Does causality imply determinism?
(In)determinism is an open question in physics. Note that “differential equations” are used in both classical (deterministic by most accounts) and quantum (indeterministic by most accounts) physics.
Must causes precede effects?
Perhaps not, if timeless physics, or the theory of closed timelike curves, is correct.
Is causality fundamental?
It is in causal dynamical triangulation, and a few other things; otherwise not.
Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn’t a real conflict between them.
Which may be true or false depending on whatever “meaningfully” means. If “meaningful” means choosing between more than one possible future, as required by libertarian free will, then determinism definitely excludes meaningful choice, since it excludes the existence of more than one possible future.
The main problem here is vagueness: you didn’t define “free will” or “meaningful”. Philosophers have known for a long time that people who think free will is compatible with determinism are defining it one way, and people who think it is not are defining it another way. If you had shown that the libertarian version of free will is compatible with determinism, you would have shown something momentous, but you actually haven’t shown anything, because you haven’t defined “free will” or “meaningful”.
Incidentally, you have also smuggled in the idea that the universe actually is, categorically, deterministic. (Compatibilism is usually phrased hypothetically). As noted, that is actually an open question.
The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism;
Explaining the feeling of having free will is a third definition, something different yet again. You don’t see much about it in the mainstream philosophical literature because the compatibility between a false impression of X and the non-existence of X is too obvious to be worth pointing out—not because it is some great insight that philosophers have never had because they are too dumb.
Having a false impression of X is the least meaningful version of X, surely!
That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that’s that. It’s only by going to higher level models that things like ‘cause’ and ‘effect’ start to become meaningful, and different modeling choices lead to different forms of cause and effect.
So is causality entirely high level or does it have a fundamental basis?
To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach to philosophy is better...
I find it aggravating to keep pointing out to people that they haven’t in any way noticed the real problem. It seems to you that you have solved the problem of free will just because you are using concepts in such a vague way that you can’t get a handle on the real problem.
(In)determinism is an open question in physics. Note that “differential equations” are used in both classical (deterministic by most accounts) and quantum (indeterministic by most accounts) physics.
For the human level, it is irrelevant whether quantum physics is lawfully deterministic or lawfully following a quantum random number generator. It is still atoms bouncing according to equations, except that in one case those equations include a computation of a random number. If every atom is secretly holding a coin that it flips whenever it bounces off another atom, from the human level it makes no difference.
People are often mesmerized by the word “indeterministic”, because they interpret it as “that means magic is possible, and my thoughts actually could be changing the physical events directly”. But that absolutely doesn’t follow. If an atom flips a coin whenever it bounces off another atom, that is still completely unrelated to the content of my thoughts.
Quantum experiments that show how particles follow some statistical patterns when moving through two slits, still don’t show any connection between the movement of the particle and the human thought. So this is all a huge red herring.
If you don’t understand why it is completely irrelevant for debating human “free will” whether the atom flips a truly random coin when bouncing off another atom or merely follows a computation that includes no random coin, then you are simply confused about the topic.
Maybe this will help:
Imagine that a master has two slaves. The first slave receives a command “today, you will pick cotton the whole day”. The second slave receives a command “today in the morning, your foreman will flip a coin—if it lands head, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day”. Is the second slave any more “free” than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman instead of a coin uses a quantum device and sends an electron through two slits, does that make the difference?)
People are often mesmerized by the word “indeterministic”, because they interpret it as “that means magic is possible, and my thoughts actually could be changing the physical events directly”.
Perhaps laypeople are that confused, but what we are talking about is Yudkowsky versus professional philosophy.
Philosophers have come up with a class of theory called “naturalistic libertarian free will”, which appeals to physical indeterminism to provide a basis for free will, without appeals to magic (e.g. Robert Kane’s).
But that absolutely doesn’t follow. If an atom flips a coin whenever it bounces off another atom, that is still completely unrelated to the content of my thoughts.
You speak as though your thoughts are distinct from the physical behaviour of your brain... but you don’t actually believe that. Plugging in your actual belief that thoughts are just a high-level description of fine-grained neural processing, the question of FW becomes the following:
“How can a physical information-processing system behave in a way that is, seen from the outside, indeterministic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive?”
(i.e. from the outside we might want to preserve the validity of “X did Y because they thought it was a good idea”, but only as a high-level description, and without thoughts appearing in the fundamental ontology).
That is the problem that naturalistic FW addresses.
If you don’t understand why it is completely irrelevant for debating human “free will” whether the atom flips a truly random coin when bouncing off another atom or merely follows a computation that includes no random coin, then you are simply confused about the topic.
Do the reading I’ve done before calling me confused. You guys would sound a lot more rational if you could get into the habit of saying “I know of no good argument for Y” instead of “Y is wrong and anyone who believes it is an idiot”.
Imagine that a master has two slaves. The first slave receives a command “today, you will pick cotton the whole day”. The second slave receives a command “today in the morning, your foreman will flip a coin—if it lands head, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day”. Is the second slave any more “free” than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman instead of a coin uses a quantum device and sends an electron through two slits, does that make the difference?)
The usual fallacy: you are assuming that the coin flip is in the driving seat, but actually no part of the brain has to act on any particular indeterministic impulse. If an algorithm contains indeterministic function calls embedded in deterministic code, you can’t strip out the deterministic code and still be able to predict what it does.
You speak as though your thoughts are distinct from the physical behaviour of your brain... but you don’t actually believe that.
More like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts, and how the atoms in my brain flipped their coins.
[...] is it a reasonable stipulation to say that flipping the switch does not affect you in any [in-principle experimentally detectable] way? All the particles in the switch are interacting with the particles composing your body and brain. There are gravitational effects—tiny, but real and [in-principle experimentally detectable]. The gravitational pull from a one-gram switch ten meters away is around 6 × 10^-16 m/s^2. That’s around half a neutron diameter per second per second, far below thermal noise, but way above the Planck level.
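(As a check on the quoted figure, treating the switch as a one-gram point mass at ten meters, Newton’s law gives:)

$$a = \frac{Gm}{r^2} = \frac{(6.67 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})(10^{-3}\,\mathrm{kg})}{(10\,\mathrm{m})^2} \approx 6.7 \times 10^{-16}\,\mathrm{m/s^2}$$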
My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the “content of my thoughts” is implemented by positions and movements of various atoms in my brain), but there is still no meaningful correlation. It’s not like thinking “I want to eat the chocolate cake now” systematically shifts the related atoms in my brain to the left side, and thinking “I want to keep the chocolate cake for tomorrow” systematically shifts the related atoms in my brain to the right side.
If the atoms in my brain received different results from flipping their coins, could it change the content of my thoughts? Sure. Some thought impulses carried by those atoms could arrive a few nanoseconds sooner, some of them a few nanoseconds later; some of them could be microscopically stronger or microscopically weaker. According to chaos theory, at some moment later, an imaginary butterfly in my mind could flap its wings differently, and it could make the difference between whether my desire to eat the cake wins over the plan to put it in the fridge, if the desires are sufficiently balanced. On the other hand, the greater the imbalance between these two desires (and the shorter the time interval for changes to chaotically propagate through the system), the smaller the chance that the imaginary butterfly changes the outcome.
But my point is, again, that there is no meaningful correlation between the coin flips and the resulting thoughts and actions. Suppose you have two magical buttons: if you press one of them, you can make all my cake-decision-related atoms receive a head on their coins, if you press the other, you can make them all receive tails. You wouldn’t even know which one to press. Maybe neither would produce the desired butterfly.
The conclusion is that while technically how the atoms flip their coins has some relation with the content of my thoughts, the relation is meaningless. Expecting it to somehow explain the “free will” means searching for the answer in the wrong place, simply because that’s where the magical quantum streetlight is.
“How can a physical information-processing system behave in a way that is, seen from the outside, indeterministic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive?”
The aspects that are “unpredictable in principle” are irrelevant to whether it seems rational and agentive.
A stone rolling down the hill is technically speaking “unpredictable in principle”, because there is the “Heisenberg’s uncertainty” about the exact position and momentum of its particles, and yet it doesn’t seem rational nor agentive. If this argument does not give “free will” to stones, it shouldn’t be used as an explanation of “free will” in humans, because it is not valid in general.
More like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts, and how the atoms in my brain flipped their coins.
There is a relationship between your brain state and your thoughts, which is that your thoughts are entirely constituted by, and predictable from, your brain state. Moreover, the temporal sequence of your thoughts is constituted by and predictable from the evolution of your brain state, whether it is deterministic or indeterministic.
I see no grounds for saying that your thoughts lack a “meaningful” connection to your brain states in the indeterministic case only... but then I don’t know what you mean by “meaningful”. Care to taboo it for me?
My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the “content of my thoughts” is implemented by positions and movements of various atoms in my brain), but there is still no meaningful correlation. It’s not like thinking “I want to eat the chocolate cake now” systematically shifts the related atoms in my brain to the left side, and thinking “I want to keep the chocolate cake for tomorrow” systematically shifts the related atoms in my brain to the right side.
No. It’s more like identity. You seem to be saying that your thoughts aren’t non-physical things causing physical brain states. That’s something. Specifically, it is a refutation of interactionist dualism... but, as such, it doesn’t have that much to do with free will, as usually defined. If all libertarian theories were a subset of interactionist theories, you would be on to something, but they are not.
The conclusion is that while technically how the atoms flip their coins has some relation with the content of my thoughts, the relation is meaningless.
Taboo “meaningless”, please.
Expecting it to somehow explain the “free will” means searching for the answer in the wrong place, simply because that’s where the magical quantum streetlight is.
Saying it is the wrong answer because it is the wrong answer is pointless. You need to find out what naturalistic libertarianism actually says, and then refute it.
The aspects that are “unpredictable in principle” are irrelevant to whether it seems rational and agentive.
So much the better for naturalistic libertarianism, then. One of the standard counterarguments to it is that the more free you are, the less rational you would be.
A stone rolling down the hill is technically speaking “unpredictable in principle”, because there is the “Heisenberg’s uncertainty” about the exact position and momentum of its particles, and yet it doesn’t seem rational nor agentive.
Which would refute the claim that indeterminism alone is a sufficient condition for rationality and agency. But that claim is not made by naturalistic libertarianism. Would it kill you to do some homework?
If this argument does not give “free will” to stones, it shouldn’t be used as an explanation of “free will” in humans, because it is not valid in general.
This is like saying that if physics does not result in consciousness in stones, we shouldn’t admit that it results in consciousness in humans.
I have no particular reason to think that we have libertarian free will. But we do make choices, and if those choices are indeterminate, then we have libertarian free will. If those choices are indeterminate, it will in fact be because of the indeterminacy of the underlying matter.
If your argument is correct, something more is needed for libertarian free will besides choices which are indeterminate. What is that extra component that you are positing as necessary for free will?
This is like saying that if physics does not result in consciousness in stones, we shouldn’t admit that it results in consciousness in humans.
My point exactly. If physics does not result in consciousness in stones, then “physics” is not an explanation of consciousness in humans.
And neither is “quantum physics” an explanation of free will in humans (as long as we use any definition of “free will” which does not also apply to stones).
What is that extra component that you are positing as necessary for free will?
Well, the philosophers are supposed to have some superior insights, so I am waiting for someone to communicate them clearly. Preferably without invoking quantum physics in the explanation.
My guess is that “free will” belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have “free will”, and what we mean when we feel that we have “free will”. That’s all there is about “free will”. Start with the experiences that caused us to create the expression “free will” in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don’t have a bottom line of “X, in principle” first.
So… what would make me feel that someone or something has a free will? I guess “not completely predictable”, “not completely random”, “seems to follow some goals” and “can somewhat adapt to changes in its environment” are among the key components, but maybe I forgot something just as important.
But whether something seems predictable or unpredictable to me, that is a fact about my ability to predict, not about the observed thing. I mean, if something is “unpredictable in principle”, that would of course explain my inability to predict it. But there are also other reasonable explanations for my inability to predict—some of them so obvious that they are probably low-status to mention—such as me not having enough information, or not having enough computing power. I don’t see the atoms in other people’s brains, I couldn’t compute their movements fast enough anyway, so I can’t predict other people’s thoughts or actions precisely enough. Thus, other people are “not completely predictable” to me.
I see no need to posit that this unpredictability exists “in principle”, in the territory. That assumption is not necessary for explaining my inability to predict. If there is no reason why something should exist in the territory, we should avoid talking about it like it necessarily exists there. Quantum physics is a red herring here. My inability to predict systems reaches far beyond what Heisenberg uncertainty would make me concede. The vast majority of my inability to predict complex systems such as human brains—and therefore the vast majority of my perception of “free will”—is completely unrelated to quantum physics. (Saying that quantum noise is the only thing that prevents me from reading the contents of your brain and simulating them in real time would be completely delusional. Probably no respected philosopher holds this position explicitly, but all that hand-waving about “quantum physics” points suggestively in this direction. I am saying it’s the wrong direction.)
And how do I come to believe in my own “free will”? Similarly: I can’t sufficiently observe and predict the workings of my own brain either. (Again, quantum noise is the least of my problems here.)
Adding to my previous comment, to explain the point about stones more fully:
I understand libertarian free will to mean, “the ability to make choices, in such a way that those choices are not completely deterministic in advance.”
We know from experience that people have the ability to make choices. We do not know from experience if they are deterministic in advance or not. And personally I do not know or care.
Your objection about the second part seems to be, “if the second part of the definition is satisfied, but only by reason of something which also exists in stones, that says nothing special about people.”
I agree, it says nothing special about people. That does not prevent the definition from being satisfied. And it is not satisfied by stones, since stones do not have the first part, whether or not they have the second.
My point exactly. If physics does not result in consciousness in stones, then “physics” is not an explanation of consciousness in humans.
Generic physics doesn’t even account for toasters. You need to plug in structure.
And neither is “quantum physics” an explanation of free will in humans (as long as we use any definition of “free will” which does not also apply to stones).
Not an explanation all by itself, no; but potentially a part of an explanation that includes other things, such as structure.
My guess is that “free will” belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have “free will”, and what we mean when we feel that we have “free will”. That’s all there is about “free will”. Start with the experiences that caused us to create the expression “free will” in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don’t have a bottom line of “X, in principle” first.
Tracing the feeling back might result in a mechanism that produces a false impression of freedom, or a mechanism that results in freedom. What you are suggesting leaves the question open.
I see no need to posit that this unpredictability exists “in principle”, in the territory.
Who do you think is doing that? The claim is hypothetical: that if indeterminism exists in the territory, then it could provide the basis for non-illusory FW. And if we investigate that, we can resolve the question you left open above.
This is all fine, for how you understand the idea of free will. And I personally agree that it does not matter whether the world is unpredictable in principle or not. I am just saying that people who talk about libertarian free will, define it as being able to make choices, without those choices being deterministic. And that definition would be satisfied in a situation where people make choices, as they actually do, and their choices are not deterministic because of quantum mechanics (which may or may not be the case—as I said, I do not care.) And notice that this definition of free will would not be satisfied by stones, even if they are not deterministic, because they do not have the choice part.
In the previous comment, you seemed to be denying that this would satisfy the definition, which would mean that you would have to define libertarian free will in an idiosyncratic sense.
Yes. Viliam is assuming that if your actions correspond to a non-deterministic physics, it is “randomness” rather than you that is responsible for your actions. But what would the world look like if you were responsible for your actions? Just because they are indeterminate (on this view) does not mean that there cannot be statistics about them. If you ask someone whether he wants chocolate or vanilla ice cream enough times, you will be able to say what percentage of the time he wants vanilla.
Which is just the way it would be if the world resulted from non-deterministic physics as well. In other words, the world looks exactly the same. That is because it is the same thing. So there is no reason for Viliam’s conclusion that it is not really you doing it; unless you were already planning to draw that conclusion no matter how the facts turned out.
I find it aggravating to keep pointing out to people that they haven’t in any way noticed the real problem. It seems to you that you have solved the problem of free will just because you are using concepts in such a vague way that you can’t get a handle on the real problem.
What process do you use to determine which problem is more ‘real’? That seems like our core disagreement, and we can probably discuss that more fruitfully.
The more you diverge from discussing the problem in the literature, the less you are really solving the age old problem of X, Y or Z, as opposed to a substitute of your own invention.
Of course there is also a sense in which some age old problem could be a pseudo problem—but the above reasoning still applies. To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
I see, but it seems to me that people are interested in age old problems for three main reasons: 1) they have some conflicting beliefs, concepts, or intuitions, 2) they want to accomplish some goal that this problem is a part of, or 3) they want to contribute to the age old tradition of wrestling with problems.
My main claim is that I don’t care much about the third reason, but do care about the first two. And so if we have an answer for where an intuition comes from, this can often satisfy the first reason. If we have the ability to code up something that works, this can satisfy the second reason.
To give perhaps a cleaner example, consider Epistemology and the Psychology of Human Judgment, in which a philosopher and a psychologist say, basically, “for some weird reason epistemology as a field of philosophy is mostly ignoring modern developments in psychology, and so is focusing its attention on the definition of ‘justified’ and ‘true’ instead of trying to actually improve human decision-making or knowledge acquisition. This is what it would look like to focus on the latter.”
No, it does not. If you do not care about that age-old problem, you don’t have an obligation to show anything about it. You can just ignore the pseudo problem and deal with the actual problem you’re interested in.
Vaniver was saying that causality is entirely high level.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
The true meaning of cause is just “what has something else coming from it, namely when it can help to explain the thing that comes from it.” This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
Disagreed, because this looks like a type error to me. Molecular chemistry describes the interactions of atoms, but the interactions of atoms are not themselves made of atoms. (That is, a covalent bond is a different kind of thing than an atom is.)
Causality is what it looks like when you consider running a dynamical system forward from various starting points, and noting how the future behavior of the system is different from different points. This is deeply similar to the concept of ‘running a dynamical system’ in the first place, and so you might not want to draw a distinction between the two of them.
My point is that our human view of causality typically involves human-sized objects in it, whereas the update rules of the universe operate on a level much smaller than human-sized, and so the connection between the two is mostly opaque to us.
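To make “running a dynamical system forward from various starting points” concrete, here is a toy sketch; the particular system (dx/dt = -x), the step size, and the starting points are arbitrary choices of mine:

    def step(x, dt=0.01):
        # One forward-Euler step of the toy system dx/dt = -x.
        return x + dt * (-x)

    def run_forward(x0, steps=100):
        # Run the system forward from the initial condition x0.
        x = x0
        for _ in range(steps):
            x = step(x)
        return x

    # The 'causal' reading: intervening on the starting point changes the future.
    for x0 in (1.0, 2.0, -1.0):
        print(x0, "->", round(run_forward(x0), 3))  # roughly 0.366 * x0 at t=1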
I’m not sure I understand what you are saying, and I am very sure that you either did not understand what I was saying, or else you misinterpreted it.
I was using “cause” in a very general sense, where it is almost, but not quite, equivalent to anything that can be helpful in explaining something. The one extra element that is needed is that, in some way, the effect comes “from” the cause. In the situation you are calling causality, it is true that you can say “the future behavior comes from the present situation and is somehow explained by it,” so there is a kind of causality there. But that is only one kind of causality, and there are plenty of other kinds. For example “is made out of” is a way of being an effect: if something is made out of something else, the thing that is made is “from” the stuff it is made out of, and the stuff helps to explain the existence of the thing.
My point is that if you use this general sense of cause, which I do because I consider it the most useful way to use the word, then you cannot completely reduce causality to something else, but it is in some respect irreducible. This is because “reducing” a thing is finding a kind of cause.
It looks to me like you’re saying something along the lines of ‘wait, reverse reductionism is a core part of causation because the properties of the higher level model are caused by the properties of the lower level model.’ I think it makes sense to differentiate between reductionism (and doing it in reverse) and temporal causation, though they are linked.
I agree with the point that if someone is trying to figure out the word “because” you haven’t fully explained it until you’ve unpacked each of its meanings into something crisp, and that saying “because means temporal causation” is a mistake because it obscures those other meanings. But I also think it’s a mistake to not carve out temporal causation and discuss that independent of the other sorts of causation.
Vaniver was saying that causality is entirely high level.
Maybe. But Yudkowsky sometimes writes as though it is fundamental.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
It would mean causality is constituted by the low level. Nowadays, causation means efficient causation, not material causation.
This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
As before… efficient causation is narrower than “anything that can explain anything”.
I agree, it would not be a contradiction to think that you could explain efficient causality using material causality (although you still might be wrong.) But you could not explain material causality in the same way.
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
I’ve considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can’t be solved in such a way right now can’t be solved at all right now. Adding more “hashing out of big questions” doesn’t seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I’m building:
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports twitter-like discussions, reddit-like communities, and medium-like long-form articles.
This platform is in active development, and I’m very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!
This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though—if more than a few can be convinced to do it.
Speaking as a writer for different communities, I see two problems with this:
Duplicate content: unless one version is explicitly marked as canonical (via a rel="canonical" link header), Google is ambiguous about which version should rank for keywords. This hits small and upcoming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of domain authority), while their own copy gets flagged as duplicate content and spam, and their domain deranked as a result.
“An audience of your own”: if a reasonable reader can assume that all good content will also be cross-posted to LW anyway, that strongly undercuts the reason to keep the small blogger in one’s RSS reader or check their site daily in the first place.
The HN “link aggregator” model works because linking directly to a post bumps its ranking; if it reaches the front page, it drives an audience to the author’s own site, where readers can be captured (via RSS or newsletters); participation therefore has limited downside for the writer.
My willingness to cross post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used and the overall quality. Posting on LW was intimidating, but I didn’t necessarily consider that a bad thing, because it meant that almost every post was gold.
In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don’t have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn’t a good fit for G Wiley’s budding rationalist community blog, let alone old LW.
I guess what I’m saying is that there’s a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) → cross posting → links with centralized discussion → blogroll (loosest). Any point on the scale could work, but it’s important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they’re in or out.
I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we’re going to have to figure out how to address.
On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.
I think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn’t seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here to see what needs to be worked on.
I think you are underestimating this, and a better estimate is “$100k or more”. With an emphasis on the “or more” part.
Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
Having “trouble finding people willing to do the work” usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but at which you just can’t find anyone able and/or willing to accept the deal.
The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. You need Python and Ruby just to get a chance, and then you must study code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxes), and one month feels like too short a time to understand the mess of the Reddit code and implement everything that needs to be done. And the next time you need another upgrade, if the same person isn’t available, you need another person to spend the same time understanding the Reddit code.
I believe in long term it would be better to rewrite the code from scratch, but that’s definitely going to take more than one month.
At one point I was planning on making a contribution. It was difficult just getting the code set up, and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in development mode. For example, on Mac you have to run it from within a disk image, the VM didn’t work, and setting up new user accounts for testing purposes was a huge pain.
I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn’t there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren’t quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.
The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
Thanks for trying to work on that one!
setting up new user accounts for testing purposes was a huge pain.
This seems like the sort of thing we should be able to include with whatever makes the admin account that’s already there; I was watching someone run a test yesterday, and while I showed them the way to award accounts karma, I didn’t know of a way to force the karma cache to invalidate, so they had to wait ~15 minutes before their new test account could actually make a post.
These sorts of usability improvements—a pull request that just adds comments for a section of code you spent a few hours understanding, an improvement to the setup script that makes the dev environment better—are sorely needed and greatly appreciated. In particular, don’t feel at all bad about changing the goal from “I’m going to close out issue X” to “I’m going to make it not as painful to have test accounts,” since those sorts of improvements will probably lead to more than one issue getting closed out.
Maybe it would be easier to make contributions that rely on the existing code as little as possible—scripts running on separate pages that would (1) verify that the person running them is a moderator, and (2) connect to the LW database (these two parts would be common to all such scripts, so have them as two functions in a shared library) -- and then have a separate, simple user interface for doing whatever needs to be done.
For example, make a script called “expose_downvotes” that displays a text field where the moderator can copy the comment permalink, and after clicking “OK” a list of usernames who downvoted the specific comment is displayed (preferably with hyperlinks to their user profiles). For the user’s convenience, the comment id is automatically extracted from the permalink.
Then the moderator would simply open this script in a second browser tab, copy link location from the “Permalink” icon at the bottom of a comment, click “OK”, done.
Compared with a solution integrated into the LW web page, this solution is only slightly more complicated for the moderator, but probably much simpler for the developer to write. Most likely the moderator will have the page bookmarked, so it’s just “open bookmark in a new tab, switch to old tab, right-click on the comment icon, copy URL, switch to new tab, click on the text field, Ctrl+V, click OK”. Still a hundred times simpler (and a thousand times faster!) than calling tech support, even assuming their full cooperation.
Each such script could be on a separate page. And they could all be linked together by having another function in the shared library which adds a header containing hyperlinks to all such scripts.
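To make this concrete, here is a rough sketch of what the “expose_downvotes” script’s backend could look like. The permalink format, table names, and column names below are all guesses for illustration; the real LW schema is the database-in-a-database mess described above, so the actual query would differ.

```python
# expose_downvotes.py -- a sketch only: the permalink format, table names,
# and column names are invented for illustration.
import re

def extract_comment_id(permalink):
    """Pull a comment id out of a permalink (assumed format: .../comment/<id>)."""
    match = re.search(r"/comment/(\w+)", permalink)
    if match is None:
        raise ValueError("no comment id found in %r" % permalink)
    return match.group(1)

def downvoter_usernames(db, comment_id):
    """Return usernames that downvoted a comment; db is e.g. a sqlite3-style connection."""
    cursor = db.execute(
        "SELECT u.username FROM votes v JOIN users u ON u.id = v.user_id"
        " WHERE v.comment_id = ? AND v.direction = -1",
        (comment_id,),
    )
    return [row[0] for row in cursor.fetchall()]
```

The two shared-library pieces mentioned above (the moderator check and the database connection) would wrap around these functions.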
Having “trouble finding people willing to do the work” usually means you are not paying enough to solve the problem.
I had difficulties finding people without mentioning a price; I’m pretty sure the defect was in where and how I was looking for people.
I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW’s code.
I believe in long term it would be better to rewrite the code from scratch, but that’s definitely going to take more than one month.
I will point out there’s no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven’t been approaching it that way is that it’s harder to make small moves and test their results; either you switch over, or you don’t, and no potential replacement was obviously superior.
I’m new and came here from Sarah Constantin’s blog. I’d like to build a new infrastructure for LW, from scratch. I’m in a somewhat unique position to do so because I’m (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time and it will be robust enough to be used in an alpha or beta state, and attract devs to contribute to further development.
Here is how I envision the basic architecture of this project:
(1) A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
(2) A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
(3) A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc). Security is the first concern; all others are predicated on it.
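As a toy sketch of how layer (1) might expose business logic to layer (2) through a JSON API: the framework, routes, and fields here are placeholders, not commitments.

```python
# server.py -- toy sketch of layer (1); routes and fields are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)
POSTS = []  # layer (3) would be a real database; a list keeps the sketch runnable

@app.route("/api/posts", methods=["GET"])
def list_posts():
    # Moderation, scoring, and analytics hooks would live here, behind the API,
    # so any frontend (React or otherwise) can consume the same endpoints.
    return jsonify(POSTS)

@app.route("/api/posts", methods=["POST"])
def create_post():
    post = {"id": len(POSTS) + 1,
            "title": request.json["title"],
            "body": request.json["body"]}
    POSTS.append(post)
    return jsonify(post), 201

if __name__ == "__main__":
    app.run()
```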
I will kickstart all three parts and bring them to a good place. After this threshold, I will need help with the frontend—this is not my forte and will be better executed by someone passionate about it.
I’m not asking for any compensation for my work. My incentive is to create a project that is actually immediately useful to someone; open-sourcing it and extending that usability is also nice. I also sympathize with the LW community and the goals laid out in this post.
I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW’s unique needs. If this approach would be of greater utility to LW, I’d be happy to take it.
I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW’s unique needs
Currently HackerNews and LW both run on the Reddit code base. One of the problems is that Reddit didn’t design their software to be easily adapted to new projects. That means it’s not easily possible to update the code with new versions.
A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc).
I see various people volunteering for different roles. I’d be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a light-weight graphic design role as well, and that can be freelance.
Like ananda, I’m happy to do this as an open-contribution project rather than paid. I’ll reach out to Vaniver via email.
Well, if someone were willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not—there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let’s say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed.
I am not saying that paying me for this job is a rational thing to do; let’s just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)
Maybe it was a mistake that I didn’t mention this option sooner… but hearing all the talk about “some volunteers doing it for free in their free time” made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can’t change the past.)
I certainly couldn’t do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patches.
And there is also a risk that I am overestimating my abilities here. I never did a project of this scale alone. I mean, I feel quite confident that I could do it in a given time frame, but maybe there would be problems with performance, or some kind of black swan.
I will point out there’s no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links.
I would probably try to solve it as a separate step. First, make the new website, as good as possible. Second, import the old content, and redirect the links. Only worry about the import when the new site works as expected.
Or maybe don’t even import the old stuff, and keep the old website frozen. Just static pages, without the ability to edit anything. All we lose is the ability to vote or comment on years-old content. At the moment of transition, officially open the new website and block the ability to post new articles on the old one, but still allow people to post comments on the old one for the following three months. At the end, all old links will work, read-only.
How is the LW codebase so awful? What makes it so much more complicated than just a typical blog, + karma? I feel like I must be missing something.
From a UI perspective it is text boxes and buttons. The data structure that you need to track doesn’t SEEM too complicated (users have names, karma totals, passwords, and roles—what am I not taking into account?).
Age, mostly. My understanding is Reddit was one of the first of its kind, and so when building it they didn’t have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it’s using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.
It’s a modified copy of an early Reddit codebase. Besides, it has, um, founder effects X-/ -- for example, the backend SQL database is used just as an engine behind a handcrafted key-value store...
Historically, the answers have been things like a desire to keep it in the community (given the number of software devs floating around), the hope that volunteer effort would come through, and me not having much experience with sites like those and thus relatively low affordance for that option. But I think if we pay for another major wave of changes, we’ll hire a freelancer through one of those sites.
(Right now we’re discussing how much we’re willing to pay for various changes that could be made, and once I have that list I think it’ll be easy to contact freelancers, see if they’re cheap enough, and then get done the things that make sense to do.)
[edit] I missed one—until I started doing some coordination work, there wasn’t shared knowledge of what sort of changes should actually be bought. The people who felt like they had the authority to design changes didn’t feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn’t feel like they had the authority to design changes, and both of them had more important things to be working on.
The people who felt like they had the authority to design changes didn’t feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn’t feel like they had the authority to design changes, and both of them had more important things to be working on.
This sort of leadership vacuum seems to be a common problem in the LW community. Feels to me like people can err more on the side of assuming they have the authority to do things.
I can code in python, but I have no web dev experience—I could work out what algorithms are needed, but I’m not sure I would know how to implement them, at least not off the bat.
Still, I’d be willing to work on it for less than $100 per hour.
If you’re working for $x an hour, do you think you would take fewer than 100/x times as long as someone who is experienced at web dev?
Fair pay would be $x an hour given that it takes me 100/x times as long as someone who is experienced at web dev (e.g., at $25/hour I’d have to finish within 4 times the expert’s hours for the total cost to come out the same). However, in reality, estimates of how long the work will take seem to vary wildly—for instance, you and Viliam disagree by an order of magnitude.
The more efficient system might be for me to work with someone who does have some web dev experience, if there is someone else working on this.
Hi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I’m posting back now.
We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we’re told. A mindkiller! This cost us Yvain/Scott who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW.
Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous “consensus” in the “community” that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament, you got 4chan.
I have strong opinions about politics and also desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death.
I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism.
Don’t go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn’t moderate a tight ship, and allows open registration.
And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism—this is a mistake we made last time, and suffered for.
I didn’t see anything in eagain’s comment that demanded that he[1] get to establish the framework and set the rules.
(It is easy, and cheap, to portray any suggestion that there should be rules as an attempt to get to set them. Human nature being what it is, this will at least sometimes be at least partly right. I don’t see that that means that having rules isn’t sometimes a damn good idea.)
I said exposed to the bright, glaring sunlight of factual rigor.
These words do not appear anywhere in your comment. Instead you said:
I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses … Don’t go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet
“Don’t go beyond that” seems to mean not allowing those politics and the bad-idea raiders. “Not allowing” does not mean “expose to sunlight”, it means “exclude”.
Perhaps he does. It wouldn’t exactly be an uncommon trait. However, there is a gap between thinking that some particular ideas are very bad and we’d be better off without them, and insisting on setting the rules of debate oneself, and it is not honest to claim that someone is doing the latter merely because you are sure they must be doing the former.
This thread is about setting the rules for discussions, isn’t it? Eagain is talking in the context of specifying in which framework discussing politics can be made to work on LW.
Yup. That is (I repeat) not the same thing as insisting that he get to establish the framework and set the rules.
(It seems to me that with at least equal justice someone could complain that you are determined to establish the framework and set the rules; it’s just that you prefer no framework and no rules. I don’t know whether that actually is your preference, but it seems to me that there’s as much evidence for it as there is for some of what you are saying about eagain’s mental state.)
Aren’t you? I mean, you’re not making concrete proposals yourself, of course; I don’t think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people’s. But looking at the things you object to and the things you don’t, it seems to me that you’re taking a position on how LW’s discussions should be just as much as eagain is; you’re just expressing it by objecting to things that diverge from it, rather than by stating it explicitly.
Lumifer seems to object to things because he finds it enjoyable to object to things, and this is a good explanation for why he objects to things rather than making his own proposals. But this means that he is not necessarily taking a position on how discussion should be, since he would be likely to object to both a proposal and its opposite, just because it would still be fun to object.
I don’t think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people’s.
Hmm. That sounds like a nice rule: anyone who spends all their posting efforts on objecting to other people’s ideas without putting forth anything constructive of their own shall be banned, or at least downvoted into oblivion.
You end up with a homogeneous “consensus” in the “community” that reflects whoever is willing to spend more effort on spam and disinformation.
I remark that this is not a million miles from what Eugine_Nier tried to do, and unfortunately he was not entirely unsuccessful. (Though he didn’t get nearly as far as producing a homogeneous consensus in favour of his ideas.)
Re: #2, it seems like most of the politics discussion places online quickly become dominated by one view or another. If you wanted to solve this problem, one idea is:
1) Start an apolitical discussion board.
2) Gather lots of members. Try to make your members a representative cross-section of smart people.
3) Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board.
I explained here why I think reducing political polarization through this sort of project could be high-impact.
Re: #3, I explain why I think this is wrong in this post. “Strong writers enjoy their independence”—I’m not sure what you’re pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day).
Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and its readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose.
The comment section has, according to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of the discussion is substantially lower in general than it could have been.
Sure.
Codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, zero initial karma awarded for posting, and any other rational-discussion-fostering mechanics that have become apparent since then.
I agree, make this site read only, use it and the wiki as a knowledge base, and start over somewhere else.
disincentivizing flamewars by making responses to downvoted comments cost karma
I think Hacker News has a better solution to that problem (if you reply to someone who replied to you, your reply gets delayed—the deeper the thread, the longer the delay).
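A sketch of that mechanic: HN’s actual schedule isn’t public, so the doubling-per-level numbers and the cap below are invented for illustration.

```python
def reply_delay_minutes(depth, base=2.0, cap=60.0):
    """Minutes a reply is held before becoming visible.

    depth: how deep the reply sits in a back-and-forth thread.
    The doubling-per-level schedule and the cap are invented.
    """
    return min(base * (2 ** depth), cap)

# depth 0 -> 2 minutes, depth 3 -> 16 minutes, depth 5 and beyond -> capped at an hour
```

The appeal of this design is that heated back-and-forths cool off automatically, without karma penalties or bans.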
I wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn’t open source, but I’m thinking about a site that takes Hacker News’s decisions as the default, unless there seems to be a good reason for something different.)
Well, there’s a vanilla version of HN that comes with the Arc distribution. It doesn’t look like any of the files in the Arc distribution have been modified since Aug 4, 2009. I just got it running on my machine (only took a minute) and submitted a link. Unsure what features are missing. Relevant HN discussion.
If someone knows Paul Graham, we might be able to get a more recent version of the code, minus spam prevention features & such? BTW, I believe Y Combinator is hiring hackers. (Consider applying!)
Arc isn’t really used for anything besides Hacker News. But it’s designed to enable “exploratory programming”. That seems ideal if you wanted to do a lot of hands-on experimentation with features to facilitate quality online discussion. (My other comment explains why there might be low-hanging fruit here.)
Hacker News was rewritten in something other than Arc ~2-3 years ago IIRC, and it was only after that that they managed to add a lot of the interesting moderation features.
There are probably better technologies to build an HN clone in today (Clojure seems strictly better than Arc, for instance). The parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are comes mostly from having observed the site and seen comments here and there over the years.
Yes, I think Hacker News is plausibly the best general-purpose online discussion forum right now. It would not surprise me if it’s possible to do much better, though. As far as I can tell, most online discussion software is designed to maximize ad revenue (or some proxy like user growth/user engagement) rather than quality discussions. Hacker News is an exception because the entire site is essentially a giant advertisement to get people applying for Y Combinator, and higher-quality discussions make it a better-quality advertisement.
If I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin… :-D
LW has a BDFL already. He’s just not very interested and (many) people don’t believe he’s able to restore the website. We didn’t “come to believe” anything.
No, EY effectively doesn’t act as a BDFL. He doesn’t have the effective power to ban contributors. The last time I asked him to delete a post he said that he can’t for site political reasons.
The site is also owned by MIRI and not EY directly.
Lessee… He isn’t so much benevolent as he is absent. I don’t see him exercising any dictatorial powers and as to “for life”, we are clearly proposing that this ain’t so.
So it seems you’re just wrong. An “absentee owner/founder” is a better tag.
As a newbie, I have to say that I am finding it really hard to navigate around the place. I am really interested in rational thinking and the ways people can improve it, as well as persuasion techniques to try to get people to think rationally about issues, since most of them fall prey to cognitive biases and bad, illogical thinking.
I have found that writing about these concepts for myself really helps in clarifying things, but I sometimes miss having a discussion on these topics, so that’s why I came here.
For me, some things that could help improve this site:
1) better organization and making it clearer to navigate
2) a set of easy to read newbie texts
3) ability to share interesting posts from other places and discussing them
I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it’s something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.
I didn’t delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would a HN-style aggregator add over either what we have now or our Reddit fork plus Reddit’s ability to post links to external sites?); I deleted it because the things people posted here weren’t good.
I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We’re sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there’s a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn’t seem to have been much progress made.
I haven’t seen anyone link to a LW post written after I deleted my account. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here, and then they stopped.
Then again, better CSS wouldn’t hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.
Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence.
That’s true. LW isn’t bringing back yvain/Scott or other similar figures. However, it is a cool training ground/incubator for aspiring writers. As of now I’m a ‘no one.’ I’d like to try to see if I can become ‘some one.’ SSC comments don’t foster this. LW is a cool place to try, it’s not like anyone is currently reading my own site/blog.
Re: 1, I vote for Vaniver as LW’s BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site, conduct fundraisers on behalf of the site, etc. (He already has the technical admin powers, and has been playing some of this role in a low-key way; but I suspect he’s been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)
Anyone want to join me in this, or else make a counterproposal?
Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he’s up for it. His posts here also show a strong understanding of the problems affecting less wrong on multiple fronts.
Seconding Anna and Satvik
I also vote for Vaniver as BDFL.
Who is empowered to set Vaniver or anyone else as the BDFL of the site? It would be great to get into a discussion of “who”, but I wonder how much weight there will be behind this person. Where would the BDFL’s authority emanate from? Would he be granted, for instance, ownership of the lesswrong.com domain? That would be a sufficient gesture.
I’m empowered to hunt down the relevant people and start conversations about it that are themselves empowered to make the shift (e.g., to talk to Nate/Eliezer/MIRI, and Matt Fallshaw, who runs Trike Apps).
I like the idea of granting domain ownership if we in fact go down the BDFL route.
That’s awesome. I’m starting to hope something may come of this effort.
An additional point is that you can only grant the DFL part. The B part cannot be granted but can only be hoped for.
An additional additional point is that the dictator can indeed quit and is not forced to kill themselves to get out of it. So it’s actually not FL. And in fact, it’s arguably not even a dictatorship, as it depends on the consent of the governed. Yes, BDFL is intentionally outrageous to make a point. What’s yours?
The person who owns the website doesn’t need consent of the people who visit the website to make changes to the website.
Funny how I didn’t notice anyone become outraged.
And, of course, BDFL’s powers do NOT depend on the consent of the governed—it’s just that the governed have the ability to exit.
As to the point, it’s merely reminding of the standard trade-off with dictator-like rulers. They are like a little girl:
I’m concerned that we’re only voting for Vaniver because he’s well known, but I’ll throw in a tentative vote for him.
Who are our other options?
I’ll second the suggestion that we should consider other options. While I know Vaniver personally and believe he would do an excellent job, I think Vaniver would agree that considering other candidates too would be a wise choice. (Narrow framing is one of the “villains” of decision making in a book on decision making he suggested to me, Decisive.) Plus, I scanned this thread and I haven’t seen Vaniver say he is okay with such a role.
I do agree; one of the reasons why I haven’t accepted yet is to give other people time to see this, think about it, and come up with other options.
(I considered setting up a way for people to anonymously suggest others, but ended up thinking that it would be difficult to find a way to make it credibly anonymous if I were the person that set it up, and username2 already exists.)
Also because he already is a moderator (one of a few moderators), so he has already been trusted with some power, and here we are just saying that it seems okay to give him more powers. And because he has already done some useful things while moderating.
Do we know anyone who actually has experience doing product management? (Or has the sort of resume that the best companies like to see when they hire for product management roles. Which is not necessarily what you might expect.)
I do. I was a product manager for about a year, then founder for a while, and am now manager for a data science team, where part of my responsibilities are basically product management for the things related to the team.
That said, I don’t think I was great at it, and suspect most of the lessons I learned are easily transferred.
Edit: I actually suspect that I’ve learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.
OK, I vote for Satvik as the person to choose who the BDFL is :D
I’ve done my fair bit of product management, mostly on resin.io and related projects (etcher.io and resinos.io) and can offer some help in re-imagining the vision behind lw.
It would be good to know what he thinks the direction of LW should be, but I would really like to see a new BDFL.
I agree that Vaniver should be.
I concur with placing Vaniver in charge. Mainly, we need a leader and a decision maker empowered to execute on suggestions.
I agree, assuming that “technical admin powers” really include access to everything he might need for his work (database, code, logs, whatever).
Throwing in another vote for Vaniver.
Having a BDFL would be great. Vaniver seems to be a good candidate.
I have reservations about this, especially the weird ‘for life’ part.
On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like “discussions on any topic, but with extremely high intellectual standards”. Some ideas:
In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. “agree with the conclusion but disagree with the argument”, or “accurate points, but ad-hominem tone”.
A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
Site erring on the side of being over-opinionated. It doesn’t necessarily need to be the community hub
Votes from highly-voted users count for more (a rough sketch of one way to do this follows this list).
Integration with predictionbook or something similar, to show a user’s track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on “here is how I solved a problem I or other people were struggling with”
No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
Better notifications around new posts, or new comments in a thread. E.g., I usually want to see all replies to a comment I’ve made, not just the top level.
Built-in argument mapping tools for comments
Shadowbanning, a la Hacker News
Initially restricted growth, e.g. by invitation only
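For the vote-weighting bullet above, a minimal sketch of one possible scheme. The log scaling and the constants are arbitrary choices for illustration, not a concrete proposal.

```python
import math

def vote_weight(voter_karma):
    """Weight a vote by the voter's karma: 0 karma -> 1.0, 999 karma -> 4.0."""
    return 1.0 + math.log10(1 + max(0, voter_karma))

def comment_score(votes):
    """votes: iterable of (direction, voter_karma) pairs, direction in {-1, +1}."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)
```

The log keeps high-karma users influential without letting them dominate outright; any monotone, slowly-growing function would serve the same purpose.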
“Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. “agree with the conclusion but disagree with the argument”, or “accurate points, but ad-hominem tone”.”—this seems complex and better done via a comment
For the Russian LessWrong slack chat we agreed on the following emoji semantics:
:+1: means “I want to see more messages like this”
:-1: means “I want to see fewer messages like this”
:plus: means “I agree with a position expressed here”
:minus: means “I disagree”
:same: means “it’s the same for me” and is used for impressions, subjective experiences and preferences, but without approval connotations
:delta: means “I have changed my mind/updated”
We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.
It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
Shared here: What reacts do you want to be able to give to posts? (emoticons, cognicons, and more)
This Slack-specific emoji capability is akin to Facebook Reactions; namely a wider array of aggregated post/comment actions.
Some sort of emoticon could work, like what Facebook does.
Personally, I find the lack of feedback from an upvote or downvote to be discouraging. I understand that many people don’t want to take the time to provide a quick comment, but personally I think that’s silly, as a 10-second comment could help a lot in many cases. If there is a possibility for a 1-second feedback method to allow a little more information than up or down, I think it’s worth trying.
I’m reminded of Slashdot. Not that you necessarily want to copy that, but that’s some preexisting work in that direction.
This would be a top recommendation of mine as well. There are quite a few prediction tracking websites now: PredictionBook, Metaculus, and Good Judgement Open come to mind immediately, and that’s not considering the various prediction markets too.
I’ve started writing a command-line prediction tracker which will integrate with these sites and some others (eventually, at least). PredictionBook and Metaculus both seem to have APIs which would make the integration rather easy, so integration with LessWrong should not be particularly difficult. (The API for Metaculus is not documented, best I can tell, but by snooping around the code you can figure things out...)
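As a sketch of what such an integration could look like: the endpoint path and response shape below are entirely hypothetical (neither site’s real API is being described), so check the actual documentation before building on this.

```python
import json
import urllib.request

def fetch_track_record(base_url, username):
    """Fetch a user's prediction stats; the endpoint path below is hypothetical."""
    url = "%s/api/users/%s/track_record.json" % (base_url, username)
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Hypothetical usage and response shape:
#   fetch_track_record("https://example-tracker.org", "some_user")
#   -> {"predictions": 120, "brier_score": 0.18}
```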
On that topic, how do you upvote? I’ve never been able to figure it out. I can’t find any upvote button. Does anyone know where the button is?
It’s a thumbs-up that is in the lower left corner of a comment or post (next to a thumbs-down). [screenshot of the upvote icon in its unclicked and clicked states]
If you don’t see it, it may be that they’ve turned off voting for new or low-karma accounts.
Ya, that must be it. I’ve been on here for like 3 years (not with this account, though), but only after the diaspora. Really excited that things are getting posted again. One major issue with such a system is that I now feel pressure to post popular content. A major feature of this community is that nothing is dismissed out of hand. You can propose anything you want, so long as it’s supported by a sophisticated argument. The problem with only giving voting privileges to >x karma accounts is that people, like myself, will feel a pressure to post things that are generally accepted.
Now, to be clear, I’m not opposed to such a filter. I’ve personally noticed, for example, that slatestarcodex doesn’t have the same consistently high quality of comments as lesswrong. For example, people will leave comments like “what’s falsification?” etc. So I acknowledge that such a filter might be useful. At the same time, however, I’m pointing out one potential flaw with such a filter: that it lends itself to creating an echo chamber.
Could you say more about what you have in mind here?
Maybe something like this? https://debatemap.live (note: I’m the developer of it)
I think you’re right that wherever we go next needs to be a clear Schelling point. But I disagree on some details.
I do think it’s important to have someone clearly “running the place”. A BDFL, if you like.
Please no. The comments on SSC are for me a case study in exactly why we don’t want to discuss politics.
Something like reddit/hn involving humans posting links seems ok. Such a thing would still be subject to moderation. “Auto-aggregation” would be bad however.
Sure. But if you want to replace the karma system, be sure to replace it with something better, not worse. SatvikBeri’s suggestions below seem reasonable. The focus should be on maintaining high standards and certainly not encouraging growth in new users at any cost.
I don’t believe that the basilisk is the primary reason for LW’s brand rust. As I see it, we squandered our “capital outlay” of readers interested in actually learning rationality (which we obtained due to the site initially being nothing but the sequences) by doing essentially nothing about a large influx of new users interested only in “debating philosophy” who do not even read the sequences (Eternal November). I, personally, stopped commenting almost completely quite a while ago, because doing so is no longer rewarding.
This is important. One of the great things about LW is/was the “LW consensus”, so that we don’t constantly have to spend time rehashing the basics. (I dunno that I agree with everything in the “LW consensus”, but then, I don’t think anyone entirely did except Eliezer himself. When I say “the basics”, I mean, I guess, a more universally agreed-on stripped down core of it.) Someone shows up saying “But what if nothing is real?”, we don’t have to debate them. That’s the sort of thing it’s useful to just downvote (or otherwise discourage, if we’re making a new system), no matter how nicely it may be said, because no productive discussion can come of it. People complained about how people would say “read the sequences”, but seriously, it saved a lot of trouble.
There were occasional interesting and original objections to the basics. I can’t find it now but there was an interesting series of posts responding to this post of mine on Savage’s theorem; this response argued for the proposition that no, we shouldn’t use probability (something that others had often asserted, but with much less reason). It is indeed possible to come up with intelligent objections to what we consider the basics here. But most of the objections that came up were just unoriginal and uninformed, and could, in fact, correctly be answered with “read the sequences”.
When it’s useful, it’s useful; when it’s damaging, it’s damaging. It’s damaging when the sequences don’t actually solve the problem. The outside view is that all too often one is directed to the sequences only to find that the selfsame objection one has made has also been made in the comments and has not been answered. It’s just too easy to silently downvote, or to write “read the sequences”. In an alternative universe there is a LW where people don’t say RTFS unless they have carefully checked that the problem has really been resolved, rather than superficially pattern-matching. And the overuse of RTFS is precisely what feeds the impression that LW is a cult... that’s where the damage is coming from.
Unfortunately, although all of that is fixable, it cannot be fixed without “debating philosophy”.
ETA
Most of the suggestions here have been about changing the social organisation of LW, or changing the technology. There is a third option which is much bolder than either of those: redoing rationality. Treat the sequences as a version 0.0 in need of improvement. That’s a big project which will provide focus, and send a costly signal of anti-cultishness, because cults don’t revise doctrine.
Good point. I actually think this can be fixed with software. StackExchange features are part of the answer.
I’m not sure what you mean. Developing Sequences 0.1 can be done with the help of technology, but it can’t be done without community effort, and without a rethink of the status of the sequences.
I think the basilisk is at least a very significant contributor to LW’s brand rust. In fact, guilt by association with the basilisk via LW is the reason I don’t like to tell people I went to a CFAR workshop (because rationality → “those basilisk people, right?”)
Reputations seem to be very fragile on the Internet. I wonder if there’s anything we could do about that? The one crazy idea I had was (rot13'd so you’ll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.
Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh’yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.
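(For anyone who wants to decode these after coming up with their own ideas, rot13 is handled directly by the Python standard library:)

```python
import codecs

def unrot13(text):
    """Decode rot13 text; rot13 is its own inverse, so this also encodes."""
    return codecs.decode(text, "rot13")
```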
As opposed to what? Memorising the One true Philosophy?
The quotes signify that they’re using that specifically as a label; in context, it looks like they’re pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There’s a sort of philosophy that wants to endlessly hash out the big questions, and there’s another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.
How many problems has the second sort solved?
Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?
Too many for me to quickly count?
Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured.
I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn’t always useful to manufacture difficulty as an opportunity to display intelligence.
Name three, then. :)
What I have in mind there is basically ‘approaching philosophy like a scientist’, and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more ‘philosophical’ than not:
How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
Scientists don’t approach philosophy, though; they run screaming in the other direction.
The Scientific Method doesn’t work on untestable stuff.
Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:
What causality is, is the correct question.
Whether humans have the power of free will is the correct question.
Whether human moral intuitions are correct is the correct question.
Oh, if you count that one as a question, then let’s call that one solved too.
Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.
Disagree for roughly the same reason; the question of where the word “correct” comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.
Solved where?
How can philosophers be systematically wrong about the nature of their questions? And what makes you right?
Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that.
If moral intuitions come from God, that might underpin correctness, but things are much less straightforward in naturalistic explanations.
On one level, by the study of dynamical systems and the invention of differential equations.
On a level closer to what you meant when you asked the question, most of the confusing things about ‘causality’ are actually confusing things about the way our high-level models of the world interact with the world itself.
The problem of free will is a useful example of this. People draw this picture that looks like [universe] → [me] → [my future actions], and get confused, because it looks like either determinism (the idea that [universe] → [my future actions] ) isn’t correct or the intuitive sense that I can meaningfully choose my future actions (the idea that [me] → [my future actions] ) isn’t correct.
But the actual picture is something like [universe: [me] → [my future actions] ]. That is, I am a higher-level concept in the universe, and my future actions are a higher-level concept in the universe, and the relationship between the two of them is also a higher-level concept in the universe. Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn’t a real conflict between them. (The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism; if I had perfect self-knowledge, I wouldn’t have any uncertainty about my future actions, but I don’t have perfect self-knowledge. It also comes from the relative importance of decision-making as a ‘natural concept’ in the whole ‘being a human’ business.)
And so when philosophers ask questions like “When the cue ball knocks the nine ball into the corner pocket, what are the terms of this causal relation?” (from SEP), it seems to me like what they’re mostly doing is getting confused about the various levels of their models, and mistaking properties of their models for properties of the territory.
That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that’s that. It’s only by going to higher level models that things like ‘cause’ and ‘effect’ start to become meaningful, and different modeling choices lead to different forms of cause and effect.
Now, there’s an underlying question of how my map came to believe the statement about the territory that begins the previous paragraph, and that is indeed an interesting question with a long answer. There are also lots of subtle points, such as the interesting fact that we don’t really need an idea of counterfactuals to describe the universe and its dynamical equations, but we do need one to describe higher-level models of the universe that involve causality. But as far as I can tell, you don’t get the main point right by talking about causal relata, and you don’t get much out of talking about the subtle points until you get the main point right.
To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach to philosophy is better: typically I see a crisp and correct model that, if accepted, obsoletes other claims almost accidentally. If you accept the [universe: [me] → [my future actions] ] model of free will, for example, then nearly everything written about why determinism is correct / incorrect or free will exists / doesn’t exist is just missing the point and is implicitly addressed by getting the point right, and explicitly addressing it looks like repeating the point over and over again.
This is also where the sense that they’re wrong about questions is coming from; compare to Babbage being surprised when an MP asked if his calculator would give the right output if given the wrong inputs. If they’re asking X, then something else is going wrong upstream, and fixing that seems better than answering that question.
Nope. On most of the detailed questions a philosopher might want to ask about causality, physics comes down firmly on both sides. Physics is not monolithic.
Does causality imply determinism? (In)determinism is an open question in physics. Note that “differential equations” are used in both classical (deterministic by most accounts) and quantum (indeterministic by most accounts) physics.
Must causes precede effects? Perhaps not, if timeless physics, or the theory of closed timelike curves, is correct.
Is causality fundamental? It is in causal dynamical triangulation, and a few other things; otherwise not.
Which may be true or false depending on what “meaningfully” means. If “meaningful” means choosing between more than one possible future, as required by libertarian free will, then determinism definitely excludes meaningful choice, since it excludes the existence of more than one possible future.
The main problem here is vagueness: you didn’t define “free will” or “meaningful”. Philosophers have known for a long time that people who think free will is compatible with determinism are defining it one way, and people who think it is not are defining it another way. If you had shown that the libertarian version of free will is compatible with determinism, you would have shown something momentous, but you actually haven’t shown anything, because you haven’t defined “free will” or “meaningful”.
Incidentally, you have also smuggled in the idea that the universe actually is, categorically, deterministic. (Compatibilism is usually phrased hypothetically). As noted, that is actually an open question.
Explaining the feeling of having free will is a third definition, something different yet again. You don’t see much about it in mainstream philosophical literature because the compatibility between a false impression of X and the non-existence of X is too obvious to be worth pointing out—not because it is some great insight that philosophers have never had because they are too dumb.
Having a false impression of X is the least meaningful version of X, surely!
So is causality entirely high level or does it have a fundamental basis?
I find it aggravating to keep pointing out to people that they haven’t in any way noticed the real problem. It seems to you that you have solved the problem of free will just because you are using concepts in such a vague way that you can’t get a handle on the real problem.
For the human level, it is irrelevant whether quantum physics is lawfully deterministic or lawfully following a quantum random number generator. It is still atoms bouncing according to equations, except that in one case those equations include a computation of a random number. If every atom is secretly holding a coin that it flips whenever it bounces off another atom, from the human level it makes no difference.
People are often mesmerized by the word “indeterministic”, because they interpret it as “that means magic is possible, and my thoughts actually could be changing the physical events directly”. But that absolutely doesn’t follow. If an atom flips a coin whenever it bounces off another atom, that is still completely unrelated to the content of my thoughts.
Quantum experiments that show how particles follow some statistical patterns when moving through two slits still don’t show any connection between the movement of the particle and human thought. So this is all a huge red herring.
If you don’t understand why it is completely irrelevant to debating human “free will” whether the atom flips a truly random coin when bouncing off another atom, or merely follows a computation that doesn’t include a random coin, then you are simply confused about the topic.
Maybe this will help:
Imagine that a master has two slaves. The first slave receives a command “today, you will pick cotton the whole day”. The second slave receives a command “today in the morning, your foreman will flip a coin—if it lands head, you will pick cotton the whole day; if it lands tails, you will clean the stables the whole day”. Is the second slave any more “free” than the first one? (Just because until the foreman flips the coin he is unable to predict what he will be doing today? How is that relevant to freedom? If the foreman instead of a coin uses a quantum device and sends an electron through two slits, does that make the difference?)
Perhaps laypeople are that confused, but what we are talking about is Yudkowsky versus professional philosophy.
Philosophers have come up with a class of theories called “naturalistic libertarian free will”, which are based on appealing to physical indeterminism to provide a basis for free will, without appeals to magic (e.g., Robert Kane’s).
You speak as though your thoughts are distinct from the physical behaviour of your brain... but you don’t actually believe that. Plugging in your actual belief that thoughts are just a high-level description of fine-grained neural processing, the question of free will becomes the following:
“How can a physical information-processing system behave in a way that is, seen from the outside, indeterministic (unpredictable in principle) and also, within reasonable limits, rational, intelligent and agentive?”
(i.e., from the outside we might want to preserve the validity of “X did Y because they thought it was a good idea”, but only as a high-level description, and without thoughts appearing in the fundamental ontology).
That is the problem that naturalistic free will addresses.
Do the reading I’ve done before calling me confused. You guys would sound a lot more rational if you could get into the habit of saying “I know of no good argument for Y” instead of “Y is wrong and anyone who believes it is an idiot”.
The usual fallacy: you are assuming that the coin flip is in the driving seat, but actually no part of the brain has to act on any particular indeterministic impulse. If an algorithm contains indeterministic function calls embedded in deterministic code, you can’t strip out the deterministic code and still be able to predict what it does.
More like: my thoughts are implemented by the interaction of the atoms in my brain, but there is no meaningful relation between the content of my thoughts, and how the atoms in my brain flipped their coins.
Somewhat related to this part in “The Generalized Anti-Zombie Principle”:
My point is that technically there is an interaction between the content of my thoughts and how the individual atoms in my brain flip their coins (because the “content of my thoughts” is implemented by the positions and movements of various atoms in my brain), but there is still no meaningful correlation. It’s not like thinking “I want to eat the chocolate cake now” systematically shifts the related atoms in my brain to the left side, and thinking “I want to keep the chocolate cake for tomorrow” systematically shifts the related atoms in my brain to the right side.
If the atoms in my brain received different results from flipping their coins, could it change the content of my thoughts? Sure. Some thought impulses carried by those atoms could arrive a few nanoseconds sooner, some of them a few nanoseconds later; some of them could be microscopically stronger or microscopically weaker. According to chaos theory, at some later moment an imaginary butterfly in my mind could flap its wings differently, and that could make the difference between whether my desire to eat the cake wins over the plan to put it in the fridge, if the desires are sufficiently balanced. On the other hand, the greater the imbalance between these two desires (and the shorter the time interval for changes to chaotically propagate through the system), the smaller the chance that the imaginary butterfly changes the outcome.
But my point is, again, that there is no meaningful correlation between the coin flips and the resulting thoughts and actions. Suppose you have two magical buttons: if you press one of them, you can make all my cake-decision-related atoms receive a head on their coins, if you press the other, you can make them all receive tails. You wouldn’t even know which one to press. Maybe neither would produce the desired butterfly.
The conclusion is that while technically how the atoms flip their coins has some relation with the content of my thoughts, the relation is meaningless. Expecting it to somehow explain the “free will” means searching for the answer in the wrong place, simply because that’s where the magical quantum streetlight is.
The aspects that are “unpredictable in principle” are irrelevant to whether it seems rational and agentive.
A stone rolling down a hill is, technically speaking, “unpredictable in principle”, because there is Heisenberg uncertainty about the exact positions and momenta of its particles, and yet it doesn’t seem rational or agentive. If this argument does not give “free will” to stones, it shouldn’t be used as an explanation of “free will” in humans, because it is not valid in general.
There is a relationship between your brain state and your thoughts, which is that your thoughts are entirely constituted by, and predictable from, your brain state. Moreover, the temporal sequence of your thoughts is constituted by and predictable from the evolution of your brain state, whether that evolution is deterministic or indeterministic.
I see no grounds for saying that your thoughts lack a “meaningful” connection to your brain states in the indeterministic case only… but then I don’t know what you mean by “meaningful”. Care to taboo it for me?
No. It’s more like identity. You seem to be saying that your thoughts aren’t non-physical things causing physical brain states. That’s something. Specifically, it is a refutation of interactionist dualism... but, as such, it doesn’t have that much to do with free will, as usually defined. If all libertarian theories were a subset of interactionist theories, you would be on to something, but they are not.
Taboo “meaningless”, please.
Saying it is the wrong answer because it is the wrong answer is pointless. You need to find out what naturalistic libertarianism actually says, and then refute it.
So much the better for naturalistic libertarianism, then. One of the standard counterarguments to it is that the more free you are, the less rational you would be.
Which would refute the claim that indeterminism alone is a sufficient condition for rationality and agency. But that claim is not made by naturalistic libertarianism. Would it kill you to do some homework?
This is like saying that if physics does not result in consciousness in stones, we shouldn’t admit that it results in consciousness in humans.
I have no particular reason to think that we have libertarian free will. But we do make choices, and if those choices are indeterminate, then we have libertarian free will. If those choices are indeterminate, it will in fact be because of the indeterminacy of the underlying matter.
If your argument is correct, something more is needed for libertarian free will besides choices which are indeterminate. What is that extra component that you are positing as necessary for free will?
My point exactly. If physics does not result in consciousness in stones, then “physics” is not an explanation of consciousness in humans.
And neither is “quantum physics” an explanation of free will in humans (as long as we use any definition of “free will” which does not also apply to stones).
Well, the philosophers are supposed to have some superior insights, so I am waiting for someone to communicate them clearly. Preferably without invoking quantum physics in the explanation.
My guess is that “free will” belongs to the realm of psychology. We can talk about what we mean when we feel that other people (or animals, or hypothetical machines) have “free will”, and what we mean when we feel that we have “free will”. That’s all there is to “free will”. Start with the experiences that caused us to create the expression “free will” in the first place, and follow the chain of causality backwards (what in the world caused us to have these experiences? how exactly does that work?). Don’t have a bottom line of “X, in principle” first.
So… what would make me feel that someone or something has a free will? I guess “not completely predictable”, “not completely random”, “seems to follow some goals” and “can somewhat adapt to changes in its environment” are among the key components, but maybe I forgot something just as important.
But whether something seems predictable or unpredictable to me, that is a fact about my ability to predict, not about the observed thing. I mean, if something is “unpredictable in principle”, that would of course explain my inability to predict it. But there are also other reasonable explanations for my inability to predict—some of them so obvious that they are probably low-status to mention—such as me not having enough information, or not having enough computing power. I don’t see the atoms in other people’s brains, I couldn’t compute their movements fast enough anyway, so I can’t predict other people’s thoughts or actions precisely enough. Thus, other people are “not completely predictable” to me.
I see no need to posit that this unpredictability exists “in principle”, in the territory. That assumption is not necessary for explaining my inability to predict. If there is no reason why something should exist in the territory, we should avoid talking about it as if it necessarily exists there. Quantum physics is a red herring here. My inability to predict systems reaches far beyond what Heisenberg uncertainty would make me concede. The vast majority of my inability to predict complex systems such as human brains—and therefore the vast majority of my perception of “free will”—is completely unrelated to quantum physics. (Saying that quantum noise is the only thing that prevents me from reading the contents of your brain and simulating them in real time would be completely delusional. Probably no respected philosopher holds this position explicitly, but all that hand-waving about “quantum physics” is pointing suggestively in this direction. I am saying it’s the wrong direction.)
And why do I believe in my own “free will”? Similarly, I can’t sufficiently observe and predict the workings of my own brain either. (Again, quantum noise is the least of my problems here.)
Adding to my previous comment, to explain the point about stones more fully:
I understand libertarian free will to mean, “the ability to make choices, in such a way that those choices are not completely deterministic in advance.”
We know from experience that people have the ability to make choices. We do not know from experience if they are deterministic in advance or not. And personally I do not know or care.
Your objection about the second part seems to be, “if the second part of the definition is satisfied, but only by reason of something which also exists in stones, that says nothing special about people.”
I agree, it says nothing special about people. That does not prevent the definition from being satisfied. And it is not satisfied by stones, since stones do not have the first part, whether or not they have the second.
Generic physics doesn’t even account for toasters. You need to plug in structure.
An explanation all by itself? Or a potential part of an explanation, including other things, such as structure?
Tracing the feeling back might result in a mechanism that produces a false impression of freedom, or a mechanism that results in freedom. What you are suggesting leaves the question open.
Who do you think is doing that? The claim is hypothetical: that if indeterminism exists in the territory, then it could provide the basis for non-illusory free will. And if we investigate that, we can resolve the question you left open above.
This is all fine, for how you understand the idea of free will. And I personally agree that it does not matter whether the world is unpredictable in principle or not. I am just saying that people who talk about libertarian free will, define it as being able to make choices, without those choices being deterministic. And that definition would be satisfied in a situation where people make choices, as they actually do, and their choices are not deterministic because of quantum mechanics (which may or may not be the case—as I said, I do not care.) And notice that this definition of free will would not be satisfied by stones, even if they are not deterministic, because they do not have the choice part.
In the previous comment, you seemed to be denying that this would satisfy the definition, which would mean that you would have to define libertarian free will in an idiosyncratic sense.
Yes. Viliam is assuming that if your actions correspond to a non-deterministic physics, it is “randomness” rather than you that is responsible for your actions. But what would the world look like if you were responsible for your actions? Just because they are indeterminate (in this view) does not mean that there cannot be statistics about them. If you ask someone whether he wants chocolate or vanilla ice cream enough times, you will be able to say what percentage of the time he wants vanilla.
Which is just the way it is if the world results from non-deterministic physics as well. In other words, the world looks exactly the same. That is because it is the same thing. So there is no reason for Viliam’s conclusion that it is not really you doing it; unless you were already planning to draw that conclusion no matter how the facts turned out.
What process do you use to determine which problem is more ‘real’? That seems like our core disagreement, and we can probably discuss that more fruitfully.
The real problem is the problem as discussed in the literature.
So, implicitly, “the more professional philosophers care about a problem, the more real it is”?
The more you diverge from discussing the problem in the literature, the less you are really solving the age old problem of X, Y or Z, as opposed to a substitute of your own invention.
Of course there is also a sense in which some age old problem could be a pseudo problem—but the above reasoning still applies. To really show that a problem is a pseudo problem, you need to show that about the problem as stated and not, again, your own proxy.
I see, but it seems to me that people are interested in age old problems for three main reasons: 1) they have some conflicting beliefs, concepts, or intuitions, 2) they want to accomplish some goal that this problem is a part of, or 3) they want to contribute to the age old tradition of wrestling with problems.
My main claim is that I don’t care much about the third reason, but do care about the first two. And so if we have an answer for where an intuition comes from, this can often satisfy the first reason. If we have the ability to code up something that works, this can satisfy the second reason.
To give perhaps a cleaner example, consider Epistemology and the Psychology of Human Judgment, in which a philosopher and a psychologist say, basically, “for some weird reason epistemology as a field of philosophy is mostly ignoring modern developments in psychology, and so is focusing its attention on the definition of ‘justified’ and ‘true’ instead of trying to actually improve human decision-making or knowledge acquisition. This is what it would look like to focus on the latter.”
No, it does not. If you do not care about that age-old problem, you don’t have an obligation to show anything about it. You can just ignore the pseudo problem and deal with the actual problem you’re interested in.
All this is predicated on having made a claim to have solved an existing problem. Read back.
Vaniver was saying that causality is entirely high level.
That cannot be the case, though, because it means that causality itself is caused by the low level, which is a contradiction.
The true meaning of cause is just “what has something else coming from it, namely when it can help to explain the thing that comes from it.” This cannot be reduced to something else, because the thing it was supposedly reduced to would be what causality is from, and would help to explain it, leading to a contradiction.
Disagreed, because this looks like a type error to me. Molecular chemistry describes the interactions of atoms, but the interactions of atoms are not themselves made of atoms. (That is, a covalent bond is a different kind of thing than an atom is.)
Causality is what it looks like when you consider running a dynamical system forward from various starting points, and noting how the future behavior of the system is different from different points. This is deeply similar to the concept of ‘running a dynamical system’ in the first place, and so you might not want to draw a distinction between the two of them.
My point is that our human view of causality typically involves human-sized objects in it, whereas the update rules of the universe operate on a level much smaller than human-sized, and so the connection between the two is mostly opaque to us.
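To make that picture concrete, here is a toy sketch (a minimal illustration only, with the logistic map standing in arbitrarily for “the update rules of the universe”): causal questions become comparisons between rollouts from different starting points.

```python
# Toy sketch: 'causality' as comparing rollouts of a dynamical system
# from different starting points. The logistic map is an arbitrary example.
def step(state):
    return 3.7 * state * (1.0 - state)

def rollout(state, n=10):
    trajectory = [state]
    for _ in range(n):
        state = step(state)
        trajectory.append(state)
    return trajectory

# 'What does changing the start cause?' = how the two futures differ.
baseline = rollout(0.500)
perturbed = rollout(0.501)
differences = [abs(a - b) for a, b in zip(baseline, perturbed)]
```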
I’m not sure I understand what you are saying, and I am very sure that you either did not understand what I was saying, or else you misinterpreted it.
I was using “cause” in a very general sense, where it is almost, but not quite, equivalent to anything that can be helpful in explaining something. The one extra element that is needed is that, in some way, the effect comes “from” the cause. In the situation you are calling causality, it is true that you can say “the future behavior comes from the present situation and is somehow explained by it,” so there is a kind of causality there. But that is only one kind of causality, and there are plenty of other kinds. For example “is made out of” is a way of being an effect: if something is made out of something else, the thing that is made is “from” the stuff it is made out of, and the stuff helps to explain the existence of the thing.
My point is that if you use this general sense of cause, which I do because I consider it the most useful way to use the word, then you cannot completely reduce causality to something else, but it is in some respect irreducible. This is because “reducing” a thing is finding a kind of cause.
It looks to me like you’re saying something along the lines of ‘wait, reverse reductionism is a core part of causation because the properties of the higher level model are caused by the properties of the lower level model.’ I think it makes sense to differentiate between reductionism (and doing it in reverse) and temporal causation, though they are linked.
I agree with the point that if someone is trying to figure out the word “because” you haven’t fully explained it until you’ve unpacked each of its meanings into something crisp, and that saying “because means temporal causation” is a mistake because it obscures those other meanings. But I also think it’s a mistake to not carve out temporal causation and discuss that independent of the other sorts of causation.
Maybe. But Yudkowsky sometimes writes as though it is fundamental.
It would mean causality is constituted by the low level. Nowadays, causation means efficient causation, not material causation.
As before… efficient causation is narrower than “anything that can explain anything”.
I agree, it would not be a contradiction to think that you could explain efficient causality using material causality (although you still might be wrong.) But you could not explain material causality in the same way.
Off the top of my head: Fermat’s Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
Is that a joke?
The last time I counted I came up with two and a half.
I’ve considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can’t be solved in such a way right now can’t be solved at all right now. Adding more “hashing out of big questions” doesn’t seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.
Can you tell which problems can never be solved?
Only an ill-posed problem can never be solved, in principle.
Is there a clear, algorithmic way of determining which problems are ill posed?
Yeah, you just need a halting oracle and you’re sorted.
For the benefit of anyone else who’d need to Google: Benevolent Dictator For Life
I am working on a project with this purpose, and I think you will find it interesting:
http://metamind.pro
It is intended to be a community for intelligent discussion about rationality and related subjects. It is still a beta version, and has not launched yet, but after seeing this topic, I have decided to share it with you now.
It is based on the open source platform that I’m building:
https://github.com/raymestalez/nexus
This platform will address most of the issues discussed in this thread. It can be used both as a publishing/discussion platform and as a link aggregator, because it supports twitter-like discussions, reddit-like communities, and medium-like long-form articles.
This platform is in active development, and I’m very interested in your feedback. If the LessWrong community needs any specific functionality that is not implemented yet, I will be happy to add it. Let me know what you think!
This is, I think, the largest social obstacle to reconstitution. Crossposting blog posts from the diaspora is a decent workaround, though—if more than a few can be convinced to do it.
Speaking as a writer for different communities, there are 2 problems with this:
Duplicate content: unless the original is explicitly marked as canonical via headers (see the sketch below), Google is ambiguous about which version should rank for keywords. This hits small and upcoming authors like a ton of bricks, because by default the LW version is going to get ranked (on the basis of domain authority), their own content will be marked both as a duplicate and as spam, and their domain deranked as a result.
“An audience of your own”: if a reasonable reader can reasonably assume that “all good content will also be cross-posted to LW anyway”, that removes much of the reason to keep the small blogger in one’s RSS reader or to check their site once a day in the first place.
The HN “link aggregator” model works because, by directly linking to a thing, you bump its ranking; if it ranks up to the main page, it drives an audience there, who can be captured (via RSS or newsletters); participation therefore has limited downside.
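For the duplicate-content problem specifically, the standard remedy is a rel=canonical link in the page head, pointing at the author’s original. A minimal sketch of what a crossposting feature would need to emit (the function name and URL are illustrative placeholders, not existing LW code):

```python
# Minimal sketch: emit a rel=canonical tag for a crossposted article so that
# search engines credit the author's original copy. The URL is a placeholder.
def canonical_link_tag(original_url):
    return '<link rel="canonical" href="{}" />'.format(original_url)

# Rendered into the <head> of the LW copy of the post:
print(canonical_link_tag("https://example-blog.com/original-post"))
# -> <link rel="canonical" href="https://example-blog.com/original-post" />
```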
“Strong LW diaspora writers” is a small enough group that it should be straightforward to ask them what they think about all of this.
My willingness to cross post from Putanumonit will depend on the standards of quality and tone in LW 2.0. One of my favorite things about LW was the consistency of the writing: the subject matter, the way the posts were structured, the language used and the overall quality. Posting on LW was intimidating, but I didn’t necessarily consider it a bad thing because it meant that almost every post was gold.
In the diaspora, everyone sets their own standards. I consider myself very much a rationality blogger and get linked from r/LessWrong and r/slatestarcodex, but my posts are often about things like NBA stats or Pokemon, I use a lot of pictures and a lighter tone, and I don’t have a list of 50 academic citations at the bottom of each post. I feel that much of my writing isn’t a good fit for G Wiley’s budding rationalist community blog, let alone old LW.
I guess what I’m saying is that there’s a tradeoff between catching more of the diaspora and having consistent standards. The scale goes from old LW standards (strictest) → cross posting → links with centralized discussion → blogroll (loosest). Any point on the scale could work, but it’s important to recognize the tradeoff and also to make the standards extremely clear so that each writer can decide whether they’re in or out.
I have been doing exactly this. My short-term goal is to get something like 5-10 writers posting here. So far, some people are willing, and some have some objections which we’re going to have to figure out how to address.
The big downside of this is that it divides the discussion.
But what’s so bad about divided discussion? In some ways it helps by increasing the surface area to which the relevant ideas are exposed.
On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.
I think a good estimate is close to $10k. Expect to pay about $100/hr for developer time, and something like 100 hours of work to get from where we are to where we want to be doesn’t seem like a crazy estimate. Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.
If you can find volunteers who want to do this, we would love code contributions, and you can point them towards here to see what needs to be worked on.
I think you are underestimating this, and a better estimate is “$100k or more”. With an emphasis on the “or more” part.
Having “trouble to find people willing to do the work” usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can’t find anyone able and/or willing to accept the deal.
The problem with volunteers is that the LW codebase needs too much highly specialized knowledge. Python and Ruby just to get a chance, and then studying code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxes), and one month feels like too short a time to understand the mess of the Reddit code and implement everything that needs to be done. And the next time you need another upgrade, if the same person isn’t available, you need another person to spend the same time understanding the Reddit code.
I believe that in the long term it would be better to rewrite the code from scratch, but that’s definitely going to take more than one month.
At one point I was planning on making a contribution. It was difficult just getting the code setup and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in a development mode. For example, on Mac you have to run it from within a disk image, the VM didn’t work, and setting up new user accounts for testing purposes was a huge pain.
I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn’t there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.
The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren’t quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.
Thanks for trying to work on that one!
This seems like the sort of thing that we should be able to include with whatever makes the admin account that’s already there; I was watching someone running a test yesterday and while I showed them the way to award accounts karma, I didn’t know of a way to force the karma cache to invalidate, and so they had to wait ~15 minutes to be able to actually make a post with their new test account.
These sorts of usability improvements—a pull request that just adds comments for a section of code you spent a few hours understanding, or an improvement to the setup script that makes the dev environment better—are sorely needed and greatly appreciated. In particular, don’t feel at all bad about changing the goal from “I’m going to close out issue X” to “I’m going to make it not as painful to have test accounts,” since those sorts of improvements will lead to probably more than one issue getting closed out.
Maybe it would be easier to make contributions that rely on the existing code as little as possible—scripts running on separate pages that would (1) verify that the person running them is a moderator, and (2) connect to the LW database (these two parts would be common to all such scripts, so have them as two functions in a shared library)—and then have a separate simple user interface for doing whatever needs to be done.
For example, make a script called “expose_downvotes” that displays a text field where the moderator can paste the comment permalink; after clicking “OK”, a list of the usernames that downvoted the specific comment is displayed (preferably with hyperlinks to their user profiles). For the moderator’s convenience, the comment id is automatically extracted from the permalink.
Then the moderator would simply open this script in a second browser tab, copy link location from the “Permalink” icon at the bottom of a comment, click “OK”, done.
Compared with a solution integrated into the LW web page, this solution is only slightly more complicated for the moderator, but probably much simpler for the developer to write. Most likely the moderator will have the page bookmarked, so it’s just “open bookmark in a new tab, switch to old tab, right-click on the comment icon, copy URL, switch to new tab, click on the text field, Ctrl+V, click OK”. Still a hundred times simpler (and a thousand times faster!) than calling tech support, even assuming their full cooperation.
Each such script could be on a separate page. And they could all be linked together by having another function in the shared library which adds a header containing hyperlinks to all such scripts.
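A minimal sketch of the first such script, assuming the two shared-library helpers described above (the helper names and the vote-table schema here are hypothetical placeholders, not the real LW code):

```python
# Hypothetical sketch of the 'expose_downvotes' script described above.
# require_moderator() and lw_db_connect() are the assumed shared-library
# helpers; table and column names are guesses, not the actual LW schema.
import re

def extract_comment_id(permalink):
    # Pull the comment id off the end of a copied permalink URL.
    match = re.search(r'([a-z0-9]+)/?$', permalink)
    return match.group(1) if match else None

def expose_downvotes(permalink):
    require_moderator()                  # (1) caller must be a moderator
    db = lw_db_connect()                 # (2) connect to the LW database
    comment_id = extract_comment_id(permalink)
    rows = db.execute(
        "SELECT u.username FROM votes v JOIN users u ON u.id = v.user_id"
        " WHERE v.comment_id = ? AND v.direction = -1",
        (comment_id,))
    return [row[0] for row in rows]      # usernames, rendered as profile links
```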
I had difficulties finding people without mentioning a price; I’m pretty sure the defect was in where and how I was looking for people.
I also agree that it makes more sense to have a small number of programmers make extensive changes, rather than having a large number of people become familiar with how to deal with LW’s code.
I will point out there’s no strong opposition to replacing the current LW codebase with something different, so long as we can transfer over all the old posts without breaking any links. The main reason we haven’t been approaching it that way is that it’s harder to make small moves and test their results; either you switch over, or you don’t, and no potential replacement was obviously superior.
I’m new and came here from Sarah Constantin’s blog. I’d like to build a new infrastructure for LW, from scratch. I’m in a somewhat unique position to do so because I’m (1) currently searching for an open source project to do, and (2) taking a few months off before starting my next job, granting the bandwidth to contribute significantly to this project. As it stands right now, I can commit to working full time on this project for the next three months. At that point, I will continue to work on the project part time and it will be robust enough to be used in an alpha or beta state, and attract devs to contribute to further development.
Here is how I envision the basic architecture of this project:
1) A server that manages all business logic (i.e. posting, moderation, analytics) and interfaces with the frontend (2) and database (3).
2) A standalone, modular frontend (probably built with React, maybe reusing components provided by Telescope) that is modern, beautiful, and easily extensible/composable from a dev perspective.
3) A database, possibly NoSQL given the nature of the data that needs to be stored (posts, comments, etc). The first concern is security; all others are predicated on that.
I will kickstart all three parts and bring them to a good place. After this threshold, I will need help with the frontend—this is not my forte and will be better executed by someone passionate about it.
I’m not asking for any compensation for my work. My incentive is to create a project that is actually immediately useful to someone; open-sourcing it and extending that usability is also nice. I also sympathize with the LW community and the goals laid out in this post.
I considered another approach: reverse-engineer HackerNews and use that as the foundation to be adapted to LW’s unique needs. If this approach would be of greater utility to LW, I’d be happy to take it.
Thanks for the offer! Maybe we should talk by email? (this username @ gmail.com)
If you don’t get a proper response, it may be worthwhile to make this into its own post, if you have the karma. (Open thread is another option.)
Currently HackerNews and LW both run on the Reddit code base. One of the problems is that Reddit didn’t design their software to be easily adapted to new projects. That means it’s not easily possible to update the code with new versions.
A lot of the data will be votes.
Nitpick: Hacker News isn’t Reddit-derived. It’s something written in Arc. And not open source.
I see various people volunteering for different roles. I’d be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a light-weight graphic design role as well, and that can be freelance.
Like ananda, I’m happy to do this as an open-contribution project rather than paid. I’ll reach out to Vaniver via email.
I have some front-end experience and would love to help you (I’m a student). Email me at my username @gmail.com.
Well, if someone were willing to pay me for one year of full-time work, I would be happy to rewrite the LW code from scratch. Maybe one year is an overestimate, but maybe not—there is this thing known as the planning fallacy. That would cost somewhat less than $100k. Let’s say $100k, and that includes a reserve for occasionally paying someone else to help me with some specific thing, if needed.
I am not saying that paying me for this job is a rational thing to do; let’s just take this as an approximate estimate of the upper bound. (The lower bound is hoping that one day someone will appear and do it for free. Probably also not a rational thing to do.)
Maybe it was a mistake that I didn’t mention this option sooner… but hearing all the talk about “some volunteers doing it for free in their free time” made me believe that this offer would be seen as exaggerated. (Maybe I was wrong. Sorry, can’t change the past.)
I certainly couldn’t do this in my free time. And trying to fix the existing code would probably take just as much time, the difference being that at the end, instead of new easily maintainable and extensible code, we would have the same old code with a few patches.
And there is also a risk that I am overestimating my abilities here. I never did a project of this scale alone. I mean, I feel quite confident that I could do it in a given time frame, but maybe there would be problems with performance, or some kind of black swan.
I would probably try to solve it as a separate step. First, make the new website, as good as possible. Second, import the old content, and redirect the links. Only worry about the import when the new site works as expected.
Or maybe don’t even import the old stuff, and keep the old website frozen. Just static pages, without ability to edit anything. All we lose is the ability to vote or comment on a years-old content. At the moment of transition, open officially the new website, block the ability to post new articles on the old one, but still allow people to post comments on the old one for the following three months. At the end, all old links will work, read-only.
Not trolling here, genuine question.
How is the LW codebase so awful? What makes it so much more complicated than just a typical blog, + karma? I feel like I must be missing something.
From a UI perspective it is text boxes and buttons. The data structure that you need to track doesn’t SEEM too complicated. (Users have names, karma totals, passwords and roles? What am I not taking into account?)
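For concreteness, the naive model the question imagines might look something like this (an illustration only, not the actual LW/Reddit schema, which the replies below describe as organized very differently):

```python
# Sketch of the naive data model the question imagines -- not the actual
# LW/Reddit schema, which hides a handcrafted key-value store inside SQL.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    password_hash: str               # never store raw passwords
    karma: int = 0
    roles: list = field(default_factory=list)

@dataclass
class Comment:
    author: str                      # User.name
    body: str
    score: int = 0
    replies: list = field(default_factory=list)
```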
Age, mostly. My understanding is Reddit was one of the first of its kind, and so when building it they didn’t have a good sense of what they were actually making. One of the benefits of switching to something new is not just that it’s using technology people are more likely to be using in their day jobs, but also that the data arrangement is more aligned with how the data is actually used and thought about.
It’s also tied to some pretty old versions of Python and various libraries, and people often need help just getting the development server going.
It’s a modified copy of an early Reddit codebase. Besides it has, um, founder effects X-/ -- for example the backend SQL database is used just as an engine behind a handcrafted key-value store...
If the money is there, why not just pay a freelancer via Gigster or Toptal?
Historically, the answers have been things like a desire to keep it in the community (given the number of software devs floating around), the hope that volunteer effort would come through, and me not having much experience with sites like those and thus relatively low affordance for that option. But I think if we pay for another major wave of changes, we’ll hire a freelancer through one of those sites.
(Right now we’re discussing how much we’re willing to pay for various changes that could be made, and once I have that list I think it’ll be easy to contact freelancers, see if they’re cheap enough, and then get done the things that make sense to do.)
[edit] I missed one—until I started doing some coordination work, there wasn’t shared knowledge of what sort of changes should actually be bought. The people who felt like they had the authority to design changes didn’t feel like they had the authority to spend money, but the people who felt like they had the authority to spend money didn’t feel like they had the authority to design changes, and both of them had more important things to be working on.
This sort of leadership vacuum seems to be a common problem in the LW community. Feels to me like people can err more on the side of assuming they have the authority to do things.
Yeah, a good default is the UNODIR pattern (“I will do X at Y time unless otherwise directed”)
I can code in python, but I have no web dev experience—I could work out what algorithms are needed, but I’m not sure I would know how to implement them, at least not off the bat.
Still, I’d be willing to work on it for less than $100 per hour.
Thanks for the offer!
If you’re working for $x an hour, do you think you would take fewer than 100/x times as long as someone who is experienced at web dev?
Fair pay would be $x an hour given that it takes me 100/x times as long as someone who is experienced at web dev. However in reality estimates of how long the work will take seem to vary wildly—for instance you and Viliam disagree by an order of magnitude.
The more efficient system might be for me to work with someone who does have some web dev experience, if there is someone else working on this.
Hi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I’m posting back now.
Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous “consensus” in the “community” that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament, you got 4chan.
I have strong opinions about politics and also desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death.
I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism.
Don’t go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn’t moderate a tight ship, and allows open registration.
And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism—this is a mistake we made last time, and suffered for.
Ha-ha
You seem to have a desire to discuss the topic only in a tightly controlled environment where you get to establish the framework and set the rules.
I didn’t see anything in eagain’s comment that demanded that he[1] get to establish the framework and set the rules.
(It is easy, and cheap, to portray any suggestion that there should be rules as an attempt to get to set them. Human nature being what it is, this will at least sometimes be at least partly right. I don’t see that that means that having rules isn’t sometimes a damn good idea.)
[1] Apologies if I guessed wrong.
Eagain knows which ideas are “deeply bad” and he’s quite certain they need to be excluded from the conversation.
I didn’t say excluded from the conversation. I said exposed to the bright, glaring sunlight of factual rigor.
These words do not appear anywhere in your comment. Instead you said:
“Don’t go beyond that” seems to mean not allowing those politics and the bad-idea raiders. “Not allowing” does not mean “expose to sunlight”, it means “exclude”.
I’m not sure if this is what eagain was alluding to, but this does seem advisable: do not permit (continuous) debates of recognizably bad ideas.
I admit this is difficult to enforce, but stating that rule will, in my opinion, color the intended purpose of this website.
The word “bad” looks to be doing all the heavy lifting in here.
Which isn’t being done because of what...? Widespread stupidity?
Perhaps he does. It wouldn’t exactly be an uncommon trait. However, there is a gap between thinking that some particular ideas are very bad and we’d be better off without them, and insisting on setting the rules of debate oneself, and it is not honest to claim that someone is doing the latter merely because you are sure they must be doing the former.
This thread is about setting the rules for discussions, isn’t it? Eagain is talking in the context of specifying in which framework discussing politics can be made to work on LW.
Yup. That is (I repeat) not the same thing as insisting that he get to establish the framework and set the rules.
(It seems to me that with at least equal justice someone could complain that you are determined to establish the framework and set the rules; it’s just that you prefer no framework and no rules. I don’t know whether that actually is your preference, but it seems to me that there’s as much evidence for it as there is for some of what you are saying about eagain’s mental state.)
And yet I’m not telling LW how to set up discussions...
Aren’t you? I mean, you’re not making concrete proposals yourself, of course; I don’t think I have ever seen you make a concrete constructive proposal about anything, as opposed to objecting to other people’s. But looking at the things you object to and the things you don’t, it seems to me that you’re taking a position on how LW’s discussions should be just as much as eagain is; you’re just expressing it by objecting to things that diverge from it, rather than by stating it explicitly.
Lumifer seems to object to things because he finds it enjoyable to object to things, and this is a good explanation for why he objects to things rather than making his own proposals. But this means that he is not necessarily taking a position on how discussion should be, since he would be likely to object to both a proposal and its opposite, just because it would still be fun to object.
It seems to me that there are definite regularities in which proposals he objects to and which he doesn’t.
Hmm. That sounds like a nice rule: anyone who spends all their posting efforts on objecting to other people’s ideas without putting forth anything constructive of their own shall be banned, or at least downvoted into oblivion.
I think that would be excessive. Pointing out others’ mistakes is a useful activity. (Think of Socrates.) Also, downvoting is disabled right now.
The thing is, I understand the difference between argument points and policy proposals. These are very very different creatures.
I remark that this is not a million miles from what Eugine_Nier tried to do, and unfortunately he was not entirely unsuccessful. (Though he didn’t get nearly as far as producing a homogeneous consensus in favour of his ideas.)
I would rather politics happen in all those other places you mentioned.
Re: #2, it seems like most of the online places for discussing politics quickly become dominated by one view or another. If you wanted to solve this problem, one idea is:
1) Start an apolitical discussion board.
2) Gather lots of members. Try to make your members a representative cross-section of smart people.
3) Start discussing politics, but with strong norms in place to guard against the failure mode where people whose view is in the minority leave the board.
I explained here why I think reducing political polarization through this sort of project could be high-impact.
Re: #3, I explain why I think this is wrong in this post. “Strong writers enjoy their independence”—I’m not sure what you’re pointing at with this. I see lots of people who seem like strong writers writing for Medium.com or doing newspaper columns or even contributing to Less Wrong (back in the day).
(I largely agree otherwise.)
What explosions from EY are you referring to? Could you please clarify? Just curious.
I agree completely.
Politics has most certainly damaged the potential of SSC. Notably, far fewer useful insights have resulted from the site and readership than was the case with LessWrong at its peak, but that is how Yvain wanted it, I suppose. The comment section has, to my understanding, become a haven for NRx and other types considered unsavoury by much of the rationalist community, and the quality of the discussion is substantially lower in general than it could have been.
Sure.
Codebase: just start over, but carry over the useful ideas already implemented, such as disincentivizing flamewars by making responses to downvoted comments cost karma, zero initial karma awarded for posting, and any other rational-discussion-fostering mechanics which have become apparent since then.
I agree, make this site read only, use it and the wiki as a knowledge base, and start over somewhere else.
I think Hacker News has a better solution to that problem (if you reply to someone who replied to you, your reply gets delayed—the deeper the thread, the longer the delay).
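HN’s actual formula isn’t public, so the numbers here are an assumption, but the mechanic itself is simple. A sketch:

```python
# Hypothetical sketch of an HN-style reply delay: the deeper the
# back-and-forth, the longer a reply stays invisible. The linear formula
# and base delay are assumptions; HN's real numbers are not published.
BASE_DELAY_MINUTES = 2

def reply_delay_minutes(thread_depth, replying_to_your_replier):
    if not replying_to_your_replier:
        return 0
    return BASE_DELAY_MINUTES * thread_depth
```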
I wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn’t open source, but I’m thinking about a site that takes Hacker News’s decisions as the default, unless there seems to be a good reason for something different.)
Well, there’s a vanilla version of HN that comes with the Arc distribution. It doesn’t look like any of the files in the Arc distribution have been modified since Aug 4, 2009. I just got it running on my machine (only took a minute) and submitted a link. Unsure what features are missing. Relevant HN discussion.
If someone knows Paul Graham, we might be able to get a more recent version of the code, minus spam prevention features & such? BTW, I believe Y Combinator is hiring hackers. (Consider applying!)
Arc isn’t really used for anything besides Hacker News. But it’s designed to enable “exploratory programming”. That seems ideal if you wanted to do a lot of hands-on experimentation with features to facilitate quality online discussion. (My other comment explains why there might be low-hanging fruit here.)
Hacker News was rewritten in something other than Arc ~2-3 years ago IIRC, and it was only after that that they managed to add a lot of the interesting moderation features.
There are probably better technologies to build an HN clone in today (Clojure seems strictly better than Arc, for instance). The parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are mostly comes from having observed the site and seen comments here and there over the years.
Here is some alternative code for building an HN clone: https://github.com/jcs/lobsters (see https://lobste.rs/about for differences to HN).
Yes, I think Hacker News is plausibly the best general-purpose online discussion forum right now. It would not surprise me if it’s possible to do much better, though. As far as I can tell, most online discussion software is designed to maximize ad revenue (or some proxy like user growth/user engagement) rather than quality discussions. Hacker News is an exception because the entire site is essentially a giant advertisement to get people applying for Y Combinator, and higher-quality discussions make it a better-quality advertisement.
Relevant: http://danluu.com/hn-comments/
This is the platform Alexandros is talking about: http://www.telescopeapp.org/
If I were NRx, I would feel very amused at the idea of LW people coming to believe that they need to invite an all-powerful dictator to save them from decay and ruin… :-D
What’s hilariously ironic is that our problem immigrants are Eugine’s sockpuppets, when Eugine is NRx and anti-immigrant.
That Eugine is so much of a problem is actually evidence in favour of some of his politics.
And when the dictator stops Eugine, it will also prove that Cthulhu always swims left.
(Meanwhile, in a different tribe: “So, they have a dictator now, and of course it’s a white male. That validates our beliefs!”)
Don’t forget that Cthulhu is a white male :-P
(race/sex are social constructs, aren’t they? Cthulhu is definitely not one of oppressed minorities so there you go)
You’re talking about someone using the easiest method of disruption available to individuals, combined with individual voter fraud.
This is difficult to stop because of the site’s code, which I think the single owner of the site chose.
LW has a BDFL already. He’s just not very interested and (many) people don’t believe he’s able to restore the website. We didn’t “come to believe” anything.
No, EY doesn't effectively act as a BDFL. He doesn't have the effective power to ban contributors. The last time I asked him to delete a post, he said he couldn't, for site-political reasons.
The site is also owned by MIRI and not EY directly.
Lessee… He isn’t so much benevolent as he is absent. I don’t see him exercising any dictatorial powers and as to “for life”, we are clearly proposing that this ain’t so.
So it seems you’re just wrong. An “absentee owner/founder” is a better tag.
As a newbie, I have to say that I am finding it really hard to navigate around the place. I am really interested in rational thinking and the ways people can improve it, as well as in persuasion techniques to get people to think rationally about issues, since most of them fall prey to cognitive biases and illogical thinking.
I have found that writing about these concepts for myself really helps in clarifying things, but I sometimes miss having a discussion on these topics, so that's why I came here.
For me, some things that could help improve this site:
1) better organization and making it clearer to navigate
2) a set of easy to read newbie texts
3) ability to share interesting posts from other places and discussing them
I didn’t delete my account a year ago because the site runs on a fork of Reddit rather than HN (and I recall that people posted links to outside articles all the time; what benefit would an HN-style aggregator add over either what we have now, or our Reddit fork plus Reddit’s ability to post links to external sites?); I deleted it because the things people posted here weren’t good.
I think if you want to unify the community, what needs to be done is the creation of more good content and less bad content. We’re sitting around and talking about the best way to nominate people for a committee to design a strategy to create an algorithm to tell us where we should go for lunch today when there’s a Five Guys across the street. These discussions were going on the last time I checked in on LW, IIRC, and there doesn’t seem to have been much progress made.
Since deleting my account, I haven’t seen anyone link to an LW post written after that point. I suspect this has less to do with aggregators or BDFL nomination committees and more to do with the fact that a long time ago people used to post good things here, and then they stopped.
Then again, better CSS wouldn’t hurt. This place looks like Reddit. Nobody wants to link to a place that looks like Reddit.
That’s true. LW isn’t bringing back Yvain/Scott or other similar figures. However, it is a cool training ground/incubator for aspiring writers. As of now I’m a ‘no one.’ I’d like to try to see if I can become ‘someone.’ SSC comments don’t foster this. LW is a cool place to try; it’s not like anyone is currently reading my own site/blog.