Welcome to Less Wrong! (11th thread, January 2017) (Thread B)
(Thread A for January 2017 is here, this was created as a duplicate but it’s too late to fix it now.)
Hi, do you read the LessWrong website, but haven’t commented yet (or not very much)? Are you a bit scared of the harsh community, or do you feel that questions which are new and interesting for you could be old and boring for the older members?
This is the place for the new members to become courageous and ask what they wanted to ask. Or just to say hi.
The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can’t).
Newbies, welcome!
The long version:
If you’ve recently joined the Less Wrong community, please leave a comment here and introduce yourself. We’d love to know who you are, what you’re doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.
A few notes about the site mechanics
To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the “Reply” link at the bottom of that comment’s box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the “Help” link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have “karma” scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it’s part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn’t know why. It happens to all of us sometimes, and it’s perfectly acceptable to ask for an explanation. (Sometimes it’s the unwritten LW etiquette; we have different norms than other forums.) Take note when you’re downvoted a lot on one topic, as it often means that several members of the community think you’re missing an important point or making a mistake in reasoning— not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user’s name to view all of their comments and posts.
All recent posts (from both Main and Discussion) are available here. At the same time, it’s definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there’s a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way. There’s also a succession of open comment threads for discussion of anything remotely related to rationality.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they’ve better identified their deeper disagreements, or simply “tap out” of a discussion that’s stopped being productive. (Seriously, you can just write “I’m tapping out of this thread.”) This is absolutely OK, and it’s one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There’s actually more than meets the eye here: look near the top of the page for the “WIKI”, “DISCUSSION” and “SEQUENCES” links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It’s a good place to look if someone’s speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there’s a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they’re pretty engrossing in my opinion. They are also available in a book form.
A few notes about the community
If you’ve come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you’ll probably get a good read on what, if anything, has already been said here on that topic, what’s widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click “Create new article” in the upper right corner next to your username, then write the article, then at the bottom take the menu “Post to” and change it from “Drafts” to “Less Wrong Discussion”. Then click “Submit”. When you edit a published post, clicking “Save and continue” does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don’t worry, you can later promote it from there to the main page if it’s well-received. (It’s much better to get some feedback before every vote counts for 10 karma—honestly, you don’t know what you don’t know about the community norms here.)
Alternatively, if you’re still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything ‘worth saying, but not worth its own post’, so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they’re a great way to get involved with the community!
If you’d like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There’s also a Facebook group. If you have your own blog or other online presence, please feel free to link it.
If English is not your first language, don’t let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the “send message” link on the upper right of their user page). Either put the text of the post in the PM, or just say that you’d like English help and you’ll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It’s worth saying that we might think religion is off-topic in some places where you think it’s on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren’t interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it’s absolutely OK to mention that you’re religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don’t require any previous reading:
The Allais Paradox (with two followups)
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
The person you are talking to is a university professor who teaches game theory, so it is definitely on you to prove that assertion.
Incidentally, your tone and posting behavior suggest that you are a troll and are not participating in the community in good faith.
This person is new, I know we deal with troll behaviour a bit around here, and I’d prefer if you were more delicate with throwing out such opinions.
This is an unusual case. If we still had downvoting enabled, the early argumentative comments by Flinter would have been banished to oblivion, and this would essentially have been a non-event.
After reading this comment I decided to take a break from interacting with Flinter, and then resumed communicating with them with a firm resolution in mind to treat them in good faith even if I harbored doubts. I suppose I view it as a kind of challenge, like a Rubik’s cube, to try to crack through to actually communicate with such a person. I still think there’s about a 33% chance that I was right in my original assessment, that they’re just here for the lulz. I never liked the downvote, in fact I was vocal about wanting to get rid of it, but I do wish there was some mechanism for constraining the impact someone like this can have on the forum.
Do we have the same definition of a troll? Just wondering, because the term seems to have drifted and I wonder where I stand. I would call this one-sided flaming: the person is hostile and insulting, which comes out of an emotional discussion. IMO, trolling requires the deliberate intent to provoke, as if that were his whole reason to post here. It’s more likely that this person is dead serious, but socially inept (too strong?).
This person has written volumes of stuff in various places for years; it seems unlikely that he’s just messing with people for amusement. More likely he is a true believer, just really bad at communication. I’d say Lumifer is lightly trolling (somewhat acceptably), because he is egging this person on, knowing full well that this person will make a spectacle of themselves.
I would have just gone with the term “crackpot”, which I think has sufficiently clear meaning and points to exactly the right thing. They don’t seem to be at all interested in actually convincing or communicating; they were much more interested in establishing how persecuted they were.
Flinter has now deleted a large number of posts, but if he hadn’t, you would be able to see that he gleefully continued all discussions that were combative but stopped responding on threads where his points were being directly and dispassionately challenged. I see that as evidence that he was some flavor of ne’er-do-well, if not a typical “for the lulz” troll.
Oh, not a vanilla troll, this was a prophet, bringing glorious and eternal truth to the unwashed masses. As befits a true prophet, he was laughed at and cast out by hoi polloi. Surely this proves the great significance of his message.
How could you possibly know what a random person knows of? Why are you so rude?
Hi, my horizons are set towards hardcore Effective Altruism, and to be a successful effective altruist you have to figure out how your brain works, emotional intelligence, QM, and how to condition yourself. I’m very concerned that rational people who have apparently mastered the Way spend their time arguing about irrelevant matters with users here rather than acting in line with their utility function and purpose. So a part of my future research is figuring out how to communicate with the high-IQ individuals here to unlock their potential and improve their reasoning.
For now I have to read the Sequences, do some math, read Jaynes and other rationalist material. http://rationality.org/resources/reading-list
I have around 7017 ± 500 pages left to read and understand, which will take around a year. If you have any other suggestions for material to read, based on my post history among other things, I would highly appreciate it. Thanks.
Good luck! I’m looking forward to reading your ebook on 5 easy tips on how to unlock my inner high-IQ potential.
So… you are going to spend your time arguing with users here, but you’ve come with a reason for it, so it doesn’t seem irrational? ;)
Hello, this is the user formerly known as Romashka. I work in a bookstore, read botany articles for pleasure, have a family. I do not expect to post to Discussion, and will probably comment only occasionally. Good luck to everyone.
Welcome back! Is there a particular reason why you don’t expect to post to Discussion? You used to, from time to time, under your old ID. What’s changed?
(Please don’t feel obliged to answer this question. But I’d be interested in the answer if you’d like to give it.)
I’m now more active on Facebook, where 1) I don’t have to use English, and 2) the (few) commenters are mostly botanists and zoologists with much more experience than I have—this is like refining your search terms:) and I live near enough that I can go talk to them in person, if need be, so it’s kind of mixed online and real-life discussion.
I haven’t had the time to go talk with them lately, but I do hope to do it soon enough.
As to non-biology-related stuff, I don’t expect to post on it because I only come across it randomly, and so...don’t expect to:)
Not that you need my approval, but those sound like excellent reasons.
welcome back!
Goodbye Less Wrong!
Love y’all. Hope to see you soon.
Okay, I hear you, but the site has its own rules about presentation and dialogue. Given that you’ve said it’s something that only you can explain correctly to people, maybe you’d actually be better off starting a blog and putting it in there? Then you could do it your own way and present your information as you see fit. Because if you explain it here people might not listen the way you want them to, and that might be very frustrating for you.
Hello, I’m a math-cs undergrad and aspiring effective altruist, but I haven’t chosen a cause yet. Since that decision is probably one of the most important ones, I should probably wait until I’ve become stronger.
To that end, I’ve read the Sequences (as well as HPMOR), and I would like to attend a CFAR workshop or similar at some point in the future. I think one of my problems is that I don’t actually think that much about what I read. Do you have any advice on that?
Also, there are a couple of LWers in my college with whom I have met twice, and we would like to start organising meetups regularly. Would you please give me some karma so that I can add new meetups? (I promise I will make up for it with good contributions)
Thanks!
try rewriting what you have read or teaching it to other people. This will help you feel like you understand it better and go back and re-learn what you might have missed. See also: Feynman notebook method.
Do you mean that you don’t put much thought into deciding what to read, or that when you read something you don’t reflect on it?
I don’t reflect on it. This happens in two ways:
I find reflecting much more cognitively demanding than reading, so if there is a ‘next post’ button or similar, I tend to keep reading.
Also, sometimes when I try to actually think about the subject, it’s difficult to come up with original ideas. I often find myself explaining or convincing an imaginary person, instead of trying to see it with fresh eyes. This is something I noticed after reading the corresponding Sequence.
I guess establishing a habit of commenting would help me solve these problems.
karma awarded.
Thank you!
I think the problem here is that this is a place for people who accept the possibility that they could be wrong and look to others to check and maybe improve their ideas—so that we can all help each other be “Less Wrong”.
You have an idea that you’re certain is right, and you don’t think anybody here can possibly improve it or contribute anything to it. That’s why people are questioning whether this is the right place for your material.
You also haven’t had time to build up credibility—not John Nash’s credibility, your credibility. That’s why I suggested participating in the community a bit before insisting that people listen to you.
Hello, would this be the current introduction thread?
Not sure I belong here. I haven’t read through much of the site but it seems like a useful resource. I’m looking for people I can communicate with and relate to. I found the site via a search for “a sense for logic”. I think of it as feeling ideas connect, and it’s my current best guess as to why I’m apparently incomprehensible to most people. An example of it: I can usually tell whether I’ll remember something by feel.
Does that seem relatable? And is this the wrong place to try and make friends?
This is a good place to make friends. Or at least that’s what I think.
Sounds like you might solve that communication problem by having better models of how other people work and trying out different ways of sending information. Different ways to say the same thing.
Anyway, welcome!
Hello! I’m Ryan; some of you may know me from the Boston or NYC meetups, or from my excursions to the Bay. I’m finally getting around to really using this account; anything that I posted more than a year or so ago can be safely ignored, or laughed at if you’re in the mood for a chuckle. I’m hoping to primarily focus on longevity research and how people can work together well on things in general; currently collecting info to try and make a general post about the current state of the field. I’m thoroughly a layperson in most regards—I have a BA in psych and a bit of a knack for cold reading and general Hufflepuffing, but that’s about my whole skillset. Well: also meaning well and being quick to learn/update. I’m kinda proud of that. But still, I’m by no means the sharpest or most expert; I just tend to stick to things until I figure them out.
So: pleasure to (cyber) meet you, and hi again to people I already know!
I am applesauce.
Found this place through another user, and quite a lot of the concepts/topics/thoughts/content was interesting. Currently have a year left till I become licensed to start diagnosing people with the DSM-5, and I’m on my way to becoming an RN as well… I am a crappy counselor so I meet all types of people...but the members of this site have peculiar thoughts and processes, which is pretty fascinating.
Bottomline: I just like to listen to people.
First thing I want to say is that I do not have a mathematics or philosophy degree. I come from an engineering background. I consider myself a hobbyist rationalist. English is not my first language, so please forgive me when I make grammar mistakes.
The reason I’ve come to LW is that I believe I have something of value to contribute to the discussion of the Sleeping Beauty Problem. I tried to get some feedback by posting on reddit; however, maybe due to its length, I got few responses. I found LW through Google, and the discussion here is much more in-depth and rigorous. So I’m hoping to get some critiques of my idea.
My main argument is that in the case of the Sleeping Beauty Problem, agents who are free to communicate, and thus have identical information, can still rightfully assign different credences to the same proposition. This disagreement is caused purely by the difference in their perspectives. And due to this perspective disagreement, SIA and SSA are both wrong, because they answer the question from an outside “Selector” perspective, which differs from Beauty’s answer. I conclude that the correct answer should be double-halving.
Because I’m new and cannot start a new discussion thread, I’m posting the first part of my argument here to see if anyone is interested. My complete argument can also be found at www.sleepingbeautyproblem.com
Consider the following experiment:
Duplicating Beauty (DB)
Beauty falls asleep as usual. The experimenter tosses a fair coin before she wakes up. If the coin landed on T, a perfect copy of Beauty is produced; the copy is precise enough that she cannot tell whether she is the original or the copy. If the coin landed on H, no copy is made. The Beauty (or Beauties) will then be randomly put into two identical rooms. At this point another person, let’s call him the Selector, randomly chooses one of the two rooms and enters. Suppose he sees a Beauty in the chosen room. What should the credence for H be for each of them?
For the Selector this is easy to calculate. Because he is twice as likely to see a Beauty in the room if the coin landed on T, simple Bayesian updating gives his probability for H as 1⁄3.
For Beauty, her room has the same chance of being chosen (1/2) regardless of whether the coin landed on H or T. Therefore seeing the Selector gives her no new information about the coin toss. So her answer should be the same as in the original SBP: 1⁄2 if she is a halfer, 1⁄3 if she is a thirder.
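To spell out the arithmetic behind those two updates (just a quick sketch of how I am computing them, using nothing beyond the setup above):

$$P(H \mid \text{Selector sees a Beauty}) = \frac{P(\text{sees}\mid H)\,P(H)}{P(\text{sees}\mid H)\,P(H) + P(\text{sees}\mid T)\,P(T)} = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}\cdot\frac{1}{2} + 1\cdot\frac{1}{2}} = \frac{1}{3}$$

whereas for Beauty, $P(\text{her room is chosen}\mid H) = P(\text{her room is chosen}\mid T) = \frac{1}{2}$, so the likelihood ratio is 1 and her credence in H stays at whatever she assigned in the original problem.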
This means the two of them would give different answers according to halfers and would give the same answer according to thirders. Notice that here the Selector and Beauty can freely communicate however they want; they have the same information regarding the coin toss. So halving would give rise to a perspective disagreement even when both parties share the same information.
This perspective disagreement is something unusual (and against Aumann’s Agreement Theorem), so it could be used as evidence against halving, thus supporting Thirdism and SIA. I will show the problems of SIA in another thought experiment. For now I want to argue that this disagreement has a logical reason.
Let’s take a frequentist’s approach and see what happens if the experiment is repeated, say, 1000 times. For the Selector, this simply means someone goes through the potential cloning 1000 times, and each time he chooses a random room. On average there would be 500 H and 500 T. He would see a Beauty all 500 times after T and see a Beauty 250 times after H. So out of the 750 times he sees a Beauty, 1⁄3 would be after H. Therefore he is correct in giving 1⁄3 as his answer.
For Beauty, a repetition simply means she goes through the experiment and wakes up in a random room awaiting the Selector’s choice again. So by her count, taking part in 1000 repetitions means she would recall 1000 coin tosses after waking up. In those 1000 coin tosses there should be about 500 each of H and T. She would see the Selector about 500 times, in equal numbers after T and after H. Therefore her answer of 1⁄2 is also correct from her perspective.
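For anyone who would rather check the counting by simulation, here is a rough sketch (my own illustration, not essential to the argument; the trial count and the bookkeeping of following one fixed Beauty are just my way of encoding the “her own branch” counting):

```python
import random

def run_trials(n=100_000):
    """Monte Carlo sketch of the Duplicating Beauty counting argument:
    tally how often the coin was H among (a) the trials in which the
    Selector sees a Beauty, and (b) the trials in which one fixed
    Beauty's room is the chosen one (her 'own branch' count)."""
    selector_sees = selector_sees_h = 0
    beauty_sees = beauty_sees_h = 0
    for _ in range(n):
        heads = random.random() < 0.5        # fair coin toss
        beauty_room = random.randrange(2)    # room of the Beauty we follow
        occupied = [False, False]
        occupied[beauty_room] = True
        if not heads:                        # tails: a copy fills the other room
            occupied[1 - beauty_room] = True
        chosen = random.randrange(2)         # Selector picks a room at random
        if occupied[chosen]:                 # Selector's count: he sees a Beauty
            selector_sees += 1
            selector_sees_h += heads
        if chosen == beauty_room:            # Beauty's count: she sees the Selector
            beauty_sees += 1
            beauty_sees_h += heads
    print("Selector: fraction of H among his sightings ≈", selector_sees_h / selector_sees)
    print("Beauty:   fraction of H among her sightings ≈", beauty_sees_h / beauty_sees)

run_trials()
# Typically prints roughly 0.33 for the Selector and roughly 0.50 for the Beauty.
```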
If we call the creation of a new Beauty a “branch off”, here we see that from the Selector’s perspective experiments from all branches count as repetitions, whereas from Beauty’s perspective only experiments from her own branch count as repetitions. This difference leads to the disagreement.
This disagreement can also be demonstrated by betting odds. In the case of T, choosing either of the two rooms leads to the same observation for the Selector: he always sees a Beauty and enters another bet. However, for the two Beauties the Selector’s choice leads to different observations: whether or not she sees him and enters another bet. So the Selector is twice as likely to enter a bet as any given Beauty in the case of T, giving them different betting odds.
The above reasoning can easily be applied to the original SBP. Conceptually it is just an experiment whose duration is divided into two parts by a memory wipe in the case of T. The exact duration of the experiment, whether it is two days or a week or five years, is irrelevant. Therefore, from Beauty’s perspective, repeating the experiment means her subsequent awakenings need to be shorter, so as to fit into her current awakening. For example, if in the first experiment the two possible awakenings happen on different days, then in the next repetition the two possible awakenings can happen in the morning and afternoon of the current day. Further repetitions keep dividing the available time. Theoretically it can be repeated indefinitely in the form of a supertask. By her count half of those repetitions would be H. Compare this with an outsider who never experiences a memory wipe: all repetitions from those two days are equally valid repetitions. The disagreement pattern remains the same as in the DB case.
PS: Due to its length I’m breaking this thing into several parts. The next part will be a thought experiment countering SIA and Thirdism, which I will post in a few days if anyone’s interested.
You should have karma to post now.
What do you do when you have a thousand questions to ask, and a thousand things to say, in a place where you do not normally do either? How do you say the first thing?
As a rationalist, what do you want to see more of in literature? I enjoyed HPMOR, and that’s how I got here, a few months ago. It reminds me of textbooks, but I wasn’t bored. It’s one of my favorite books, and I’ve been recommending it to Ender’s Game fans. I want to write a book or tell a story like that.
Origin story? I think of myself as an irrationalist, but I’m busy debugging. It’s more difficult than debugging code (C++ at least), but more important, and...hopefully more rewarding.
A few questions:
Can we comment here multiple times?
Is this the best place to talk about ourselves on Less Wrong, or is that our User page?
Is there a place to talk about our personal experiences and efforts ‘becoming more rational’, and encourage each other, or is this just a place for general scientific discussion and posts?
Is there a timeline page for this website? If not, what’s important about this site’s history? Any interesting simultaneous sets of events? If not, is there anyone keeping records?
Is there a max comment length?
I know politics aren’t talked about on Less Wrong, but religion is. If you view irrationality, or ‘that which the truth can destroy’, as something which ‘should be destroyed with the truth’, then why not talk about a vortex of bias and irrationality and poor design? Or, treating it as a problem, talk about solutions. If solutions are never discussed, how will the problem ever be solved? By everyone joining a group that could solve the problem but doesn’t talk about it, and instead believes it will magically be solved once everyone joins? Meanwhile, everywhere else, whenever someone thinks they have magically solved all problems and uncovered the secret to world peace, they shout it to the heavens and don’t stop ranting about it where everyone can hear, including on the internet. This seems exactly like one of the few places I’d actually want to talk, and listen to people talk, about politics. It reminds me of Be Secretly Wrong.
How can I get up to date on the latest parts of Less Wrong? If The Sequences are the introduction, where are things now?
(I assume “here” = welcome thread.) Yes, of course. But no need to introduce yourself more than once.
The most recent “Group Rationality Diary” thread might be the best place for that.
Once upon a time, an economist called Robin Hanson started a blog called “Overcoming Bias”. He invited one Eliezer Yudkowsky, an amateur artificial intelligence theorist and philosopher (note: he might disagree with that characterization), to post on his blog, and for some time OB was a joint Hanson/Yudkowsky blog, with Yudkowsky’s contributions constituting a sort of informal course in rationality-as-Eliezer-sees-it. After a few years of this, Robin Hanson wanted his blog back, and quite a community had built up that was mostly following and commenting on Eliezer’s posts, so a new site was created for that community: lesswrong.com. It was seeded with all of Eliezer’s old OB posts. It was a thriving would-be-rationalist community for some time, but in the last few years a lot of what used to be its regulars have gone elsewhere, and it’s generally reckoned that both the quality and quantity of content here are much lower than they used to be. There are various plausible conjectures about why. There are occasional attempts to fix this by various means.
Probably, but it’s pretty long. I don’t recall ever hitting it, and (some of) my comments tend to be longer than most.
Unfortunately, political discussions here have often turned out quite unhelpful—more heat than light. So political discussion (especially if more specific) is generally discouraged here. There is fairly frequent political discussion, in a somewhat-rationalist community, in the open threads at Slate Star Codex (whose author was a very highly valued participant here on LW until he went his own way).
I don’t think there’s anything cleverer than reading the recent archives. You could look for particularly highly-voted posts, but note that until quite recently there was one user with a multitude of sockpuppets mass-downvoting everything posted by people whose politics he didn’t like (and, for all I know, mass-upvoting things posted by people whose politics he did, but that hasn’t been noticed if so) so the scores on things are less useful than you might hope.
Yes.
This. No one looks at User pages on a regular basis (as far as I know)
Yes and yes. Both.
No, but I’m sure one of the old-timers will be willing to summarise :-)
Yes, enforced by software. It’s quite reasonable.
Well, kinda. Generally speaking, political philosophy is OK, the current outrage of the day isn’t. Even-handed analysis of the situation is OK, partisan rants aren’t.
Read the forum.
Thanks a lot. I was nervous about posting here.
Howdy, I’ve always enjoyed a good argument/debate. In 2012 I got in the middle of two friends’ argument about the healthiness of being a vegetarian and decided to do my own research and settle the issue. I was disappointed that there was not an easy way to prove a point. That sent me down a rabbit hole of decision-making systems and theory.
I have listened to every episode of “Rationality: From AI to Zombies”. I’ve also researched several of the decision tools on http://lesswrong.com/lw/1qq/debate_tools_an_experience_report/
I am a software engineer and my latest experiment to build tools to help people agree is Reason Score and I would appreciate any time people have to provide feedback and debate with me on the tool.
I also work on the Pro-Truth Pledge and I am on the board of Intentional Insights which is an educational nonpartisan 501(c)(3) nonprofit devoted to promoting science-based truth-seeking, rational thinking, and wise decision-making.
I look forward to learning with you, Bentley
Hi, I’m Alexander. I’m going to university for computer science and an interdisciplinary honors program in the fall that includes formal study into logic as well as literature, physics, and philosophy. My main interests include AI’s present and future states, ethics, science, and improving my rational capabilities as a method of further pursuing truth. I’d like to also find effective outlets for altruism. I look forward to dialectics to be had here and to hopefully have some beliefs changed.
Probably not banned, but I predict that your ideas will play out without a lot of impact over a few weeks. There’s a core of an interesting idea—money as an indicator of values (in the CEV sense of “value”), but you don’t seem to be listening to discussion, don’t seem to see the gaping holes, and are mostly preaching.
Hey!
I am a Dutch Liberal Arts & Sciences student (political philosophy, law and economics). Last semester I started studying Game Theory and only very recently I discovered the world of rationalists and this site. I am an absolute newbie when it comes to the themes discussed at LW, but I am completely fascinated.
I am now reading the sequences and will probably not post too much, because I will mostly be learning.
What I am very interested in is how LW users actually apply rationality to their own lives: habit formation, work/life/sleep schedules, nourishment, etc. What do you guys do and why do you do it? What (online) tools do you use? What life rules do you live by?
Welcome!
I manage my goals and my time using some systems I built myself. I manage my mental health using methods I built myself. I fix bugs as I go. I have lots of little things that are hard to detail but if you ask something more specific I probably have ideas.
Hello! I’ve been a longtime lurker, but going off to my first year of college has given me the space to really understand what this site is trying to accomplish and decide that yeah I want to participate.
I have somehow gotten myself into an ongoing debate with a theist “rationality” group on campus (they do at least have the stated goal of seeking truth, and they want people with multiple perspectives, even if it is my considered opinion that the leader/only other person who contributed to our most recent discussion abandons the truth-seeking claim by saying he’s “not afraid of any question”). My main problem is that it’s a small group: I was the only atheist at the last meeting, and I debated the leader while the two theist students just kind of sat there and said we “both had good points,” which I don’t think was especially helpful input (especially as neither of them could tell me which points in particular were good). So I guess mostly I’m here because I’d like to win our future conversations a bit more effectively, and to do that I’m going to need to get better. I’m not sure if I don’t really understand what I think or if I’m just bad at thinking on my feet, but somehow there’s a disconnect here and we seem to be misunderstanding each other’s arguments.
Is there a particular thread where one can ask for help with either debating in general or religious debates in particular? Because that’s what I need, I think, and quick uses of the search feature provide a lot of semi-relevant articles with comments as new as 3 years old… Alternatively, any advice here would be great.
Hi everybody!
I’m new here, so I’d like to share my rationalist origin story. (Please somebody tell me if I’m doing this in the wrong place.) I only became aware that rationality was a thing very recently. I’m getting started with the sequences and rationalist blogs, but there is a ton to read and it will take me a while. I’m familiar with many of the concepts and I have strong opinions about them, though I realize there’s a lot to learn. I am going to try to express my opinions but hold onto them loosely, so PCK can work.
I was introduced to rationality by attending a CFAR workshop in early May. I’m not sure exactly why I signed up. A few people at work had raved about it, but I didn’t really understand what it would help me accomplish. For the last year I’ve been feeling a lot of anxiety about the future of humanity and the possible collapse of society, etc. I’ve been coping with this anxiety by writing short stories about a moral revolution. I think one of our root problems is that people mostly talk about how things aren’t working. I wanted to write about how things might be working perfectly. If there was a specific goal, it was to get help making more progress at becoming an author.
I found the workshop to be transformative in many ways I won’t go into here. It helped me with my writing project as well, but not in the way I expected. My writing is concept-heavy, but I am bad at creating characters. One concept that is important to me is that humanity needs a new kind of philosophy. Something that isn’t quite a religion or a scientific theory or an economic model but is something that combines all of those domains. This philosophy would strengthen individuals and give groups in different domains shared values/goals. Rationalism strikes me as this kind of philosophy. The rationalists I observed in the workshop struck me as being stronger because of what they know. Rationalism had changed them in ways I don’t yet fully understand. In short, you all make me believe that a moral revolution is possible. You help me imagine the kinds of people who will address humanity’s biggest challenges.
This is a really long intro; I feel weird about that, but I’m going to post it anyway. Rationalism is great. You’re all great.
Welcome. My thing is problem solving. Since there is a lot of reading worth doing, it might be better to make a bugs list or a curiosity list and then post it; others can suggest where to go to understand the things you are seeking.
I think at this point I’m in the learning phase where I’m just staring at things in wonder. For my bugs and technique discussions, I have followups with my workshop buddies and that seems to be working pretty well. I think the reason I came to lesswrong was to understand more about the community itself. Who is part of it? How big is it? What is everybody talking about? Those kinds of things. Reading posts every couple of days seems to be working for now.
It might help if there are recent posts where the community is focused inward and talking about itself. I’ve seen a couple of these, but if there are any good ones that come to mind for you I would appreciate it.
Thank you for being welcoming.
Hey! My name’s Jared and I’m a senior in high school. I guess I started being a “rationalist” a couple months ago (or a bit more) when I started looking at the list of cognitive biases on Wikipedia. I’ve tried very hard to mitigate almost all of them as much as I can and I plan on furthering myself down this path. I’ve read a lot of the sequences on here and I like to read a lot of rationalwiki and I also try to get information from many different sources.
As for my views, I am first a rationalist and make sure I am open to changing my mind about ANYTHING, because reality doesn’t change based on your ability to stomach it.
As for labels, I’m vegan (or at least strict vegetarian), anarcho-communist (something around the range of left libertarian), agnostic (not in the sense that I’m on the fence but that I’m sure that we don’t know—so militant agnostic lol).
My main first question is: since you guys are rationalists, why aren’t you vegetarian or vegan? The percentage that is vegetarian on sites like LessWrong and RationalWiki is hardly higher than in the general public (or so it seems). I would think that, being rationalists, you would understand vegetarianism or veganism and go for it for sure. Am I missing something? Because this actually blows my mind. If you oppose it, I really wanna hear some arguments, because I’ve never heard a single even somewhat convincing argument and I’ve argued with oh so many people about it. Obviously the goal of veganism is to lessen suffering, not end it, etc.
But yeah hey!
Why do you think that rationalism would lead people to becoming veg(etari)an?
And a counter question: since you are a rationalist, how come you’re an ancom?
Because it is the rational choice. There are barely any benefits to eating meat and a ton for vegetarianism. Animals are conscious of pleasure and pain and can suffer (ask for sources—it’s a documented fact). If you gave any consideration at all to animals you would abhor factory farming, as 50+ billion die each year. Factory farming contributes to 50% of greenhouse emissions. On a macro-economic scale, plant foods are much more sustainable and many more people could be fed if we grew plants. Factory farming is inefficient. On a micro-economic scale, vegetarian foods are cheaper: rice, pasta, beans, etc. Vegetarians and vegans are healthier in general, with lower mortality rates, lower BMI, and lower risk of heart disease. There are no deficiencies. You do have to take a B12 pill if you are vegan, but lots of livestock are fed B12 pills anyway and they are extremely cheap. Like $10 for hundreds of them, and this money can come out of the money saved from not buying meat.
The only real benefit to eating meat is convenience, and that’s because of society.
As for counterarguments:
“Meat is delicious”—Just because something is pleasurable doesn’t mean it’s right to infringe on others’ rights. We don’t allow a lot of things because of this: e.g., rape. Also, if you cared about taste, you would spend more money and effort on meals.
“Plant rights”—Usually this is a joke. My rebuttal is slippery slope, etc. Also, even if plants should have rights, vegetarianism uses fewer plants, because 70% of plant goods are used to feed livestock.
There are many arguments but I don’t want to counter them all unless you bring them up because that would take too much time.
As for ancom, well that’s what I’ve come up with that’s rational? If I hear a new thing my opinion may change, but I believe in equality and liberty.
This is a dangerous statement to make. Would you change your mind about Veg*ism? What would it take?
We already grow plants; growing more does not automatically mean more people get fed.
Factory farming exists because it is efficient.
There was a recent meta-study confirming that meat has no link to any of those. I would add the caveat that processed meats are less healthy, but that’s a factor of the preservatives, not the meat itself. If there is a healthy aspect to veg*, it would be about extra effort applied to food maintenance as a lifestyle, not about the benefits of vegetables instead of meat. (No link because I don’t have it on hand, but I have asked around to see if I can find it.)
That depends on your world view.
Not all plant matter is viable for human consumption. Humans can’t eat grass. By feeding it to cows we can harvest nutrients from parts of the earth that are not always viable for human crops.
You would make more friends around here describing yourself as an “aspiring rationalist”, as we do, and being careful about the label “rational” and about using it as an identity (see: Keep Your Identity Small).
Sure, very easily. You would have to prove to me that 1) Animals aren’t conscious or for some reason aren’t worth moral consideration 2) Global warming doesn’t exist or factory farming doesn’t affect it 3) Meat is healthy (I understand paleo can be healthy so this point may not matter) 4) Meat is cheaper, more efficient, and more sustainable compared to plants
True, but I think they should be ;)
No it doesn’t. It exists because it WAS convenient and efficient. It is now not the best possible solution. It is cheaper and more efficient to produce plants calorie and protein-wise.
Nah I know correlation =/= causation.
Most cows don’t eat grass in factory farming conditions. I don’t really get what you’re saying with the “not viable” thing. We could always switch those for viable crops and it would be more efficient.
I didn’t know this was a thing. My bad. This was more of a semantics thing. I thought of the word “rationalist” as meaning the same as what you mean by “aspiring rationalist”.
That would be called politics (the politics of why some are fed and not others), and it has very little to do with how much meat we eat, and a lot more to do with the state of geopolitical events.
This is where we disagree on this point. I would say it’s not always possible to grow human-edible crops in all land areas where we currently grow crops for animals or generally have animal herds. I can’t prove that over the internet, but consider climates not ideal for human food: dry climates, wet climates, rocky mountainous regions…
By what mechanism would you propose that veg* is healthier?
Certainly! Not a problem; we tend to have a way of talking around here. Kind of a “jargon”: not hard to get used to, but it tends to make it possible to tell who is on the same page as you in terms of reasonableness and who is still learning. Definitely look at the wiki for some of the terms, and the Sequences are a great read.
You are confusing rationality and values.
Rationality concerns itself with empirical reality and with causality in this empirical reality. Rationality does not tell you which things you must like, which rights you must respect, or which goals you must pursue. For example, “animal rights” is not a rationality argument, it’s a values argument.
Equality of rights, equality of opportunities, or equality of outcomes?
If concern over greenhouse gas emissions is a part of your argument for veg(etari)anism, you may wish to remove rice from your recommended vegetarian food list. Rice cultivation is a major source of anthropogenic atmospheric methane.
My main reason is animal suffering but thanks for the new information. I’ll look that up and keep that in mind!
Hi, everybody, I am Yuri. I am willing to continue figuring out what is going on in my life, with me and people around me, why this all seems so wrong and how to fix it.
Neither of the things you are complaining about has anything to do with your character. One is attacking your prose style and the other your willingness to be explicit about your points and why we should be interested in them.
If you treat all criticism as personal attack and accordingly take it personally, you make it impossible to learn from criticism. This is an appropriate course of action only if you believe yourself immune to error. I do not know of anyone who is immune to error.
I’m not trying to welcome you, I’m trying to explain why your posts were moved to drafts against your will.
I’m not arguing with or talking about Nash’s theory. I’m telling you that your posts are low quality and you need to fix that if you want a good response.
My point in the last paragraph is that you are treating everyone like dirt and coming across as repulsive and egotistical.
“You are incorrect” was referring to “No, you can’t give me feedback.”. Yes, we can. If you’re not receptive to feedback, you should probably leave this site. You’re also going to struggle to socialize with any human beings anywhere with that attitude. Everyone will dislike you.
Keep in mind that it’s irrelevant how smart or right you are if no one wants to talk to you.
Why do you think there is nothing wrong with your delivery? Multiple people have told you that there was. Is that not evidence that there was? Especially because it’s the community’s opinions that count, not yours?
Why do you think it’s above me?
Hello, I’m just a guy who found this site by chance. I have a “system” I base my decision-making on, and while I wasn’t able to find “problems” in my way of thinking, I am sure there must be some, so I wanted to write it down for you to dissect. Probably a lot of stuff you’ve heard of already, but oh well :D
welcome! You might like to hang out on the soon-to-be-merged new site—http://www.lesserwrong.com
This site is inactive.
Hey there,
Just joined. My only exposure to LW has been reading about it on other websites, and reading a short story by Yudkowsky (I think) about baby eating aliens, which was a fun read. (Though I prefer the original ending to the “real” one.)
I have no idea what I plan to get out of joining the site, other than looking around. I know I do have an itch to write out my thoughts about a few topics on some public forum, but no idea if they’re at all novel or interesting.
So, I do have questions about what the “prevalent view (assuming there is one)” is on LW about a couple topics, and where I can find how people have arrived at that view.
Qualia. I don’t believe they exist. Or, equivalently, qualia being something “special” is an illusion, just like free will. Is there a consensus here about that? Or has the topic been beaten to death? Also, would the perception of having free will itself count as qualia?
The possibility that we’re in a simulation. I believe it’s basically currently not calculable, given what we know. It’s a consequence of me finding no compelling reason to believe that the capabilities of technology either end shortly beyond our current capabilities, or are unimaginably limitless. It’s simply not predictable where they end, but obvious that they do end somewhere. Any of that interest anyone?
LW is kinda dead (not entirely, there is still some shambling around happening, but the brains are in short supply) and is supposed to be replaced by a shinier reincarnated version which has been referred to as LW 2.0 and which is now in open beta at www.lesserwrong.com
LW 1.0 is still here, but if you’re looking for active discussion, LW 2.0 might be a better bet.
Re qualia, I suggest that you start with trying to set up hard definitions for terms “qualia” and “exists”. Once you do, you may find the problem disappears—see e.g. this.
Re simulation, let me point out that the simulation hypothesis is conventionally known as “creationism”. As to the probability not being calculable, I agree.
I need some way to examine the core beliefs of my life and evaluate whether they are actually sensible or just what I have grown up thinking. Just thinking about these things and trying to evaluate them is not working, since what I already know seems correct (confirmation bias), and my mind is just going around in circles.
Here are some of the things I believe strongly, that I want to examine, and proof of one should not come from an unproven point.
Being a religious agnostic
Believing that things should be proven using the method of “scientific enquiry and experimentation”.
Individual freedom is paramount. Society should give people freedom to practice their own religion, sexual preference, occupation, way of living etc, as long as it doesn’t harm others. ⇒ This is typical liberal stuff, but what is the criteria for evaluation? What function are we trying to maximise, and how do we calculate what the return value is?
Reading a lot of books to expose myself to new ways of thinking.
Travel as a means of personal development. ⇒ Learning new languages, meeting new people etc to gain a varied perspective on life.
What kind of things could potentially disprove your core beliefs?
I don’t know. Need help figuring that out too. The thing that is making me uncomfortable is that these are mostly things I have taken for granted, and they just sound right to me. I have not examined them, or considered any of the alternatives seriously.
To quote Karl Marx, “practice is the criterion of truth”. Observe reality (as opposed to what’s inside your skull), see if it matches what you believe it is, experiment and check if the outcomes agree with your predictions.
However for certain things, for example those called “values” or “preferences”, the categories of true/untrue are not applicable. Here, the issue is more of considering the consequences of your actions and deciding on the acceptable trade-offs.
“Considering the consequences of your actions and deciding on the acceptable trade-offs”. This should work I think.
Read “Theory and Reality” by Peter Godfrey-Smith.
I will try this out.
Hello to all rationalistas. (?)
I am new here, and I intend to lurk, doing the reading regularly, and catching up from a position of being far behind, until I feel more confident about contributing. I only discovered this group a few days ago.
I have unfortunately come to the conclusion that socioeconomic revolt, by any means necessary, is a moral and ethical imperative for all people, to maximise the chances of the survival of the human species.
I hope to be proven wrong, and have my bias revealed and dissected. I am perhaps rather desperate to be proven wrong, because I do not like my own conclusions.
Thanks in advance for any help I receive, and am able to reciprocate.
Is there some sort of new member “kiddie pool” where people aspiring to improve their own rational processes can feel free to speak as they/we wish without knowing the correct terminology, and without an academic background regarding logic itself?
I guess, to learn, and express, in aid of learning, there needs to be some sort of safe “bumper bowling” alley available.
I have little access to formal education, and so, in the interests of self improvement, would like discourse which is both forgiving and conducive to improving discursive quality.
I feel I am just as likely to say something which is misinterpreted, due to (what amounts to) sub-cultural norms here, from this community, as I am to say something accurately insightful. This is intimidating, despite my intention to improve my expressive accuracy. Maybe I am intimidated by elitism and expertise, to the point of rejecting the service itself? This is probably biased and irrational, but worth describing, because the act of changing cultural attitudes (in service to the goal of increasing societal rationality), requires us all to be aware of the limitations of a macro-cultural audience.
Maybe I just mean to ask: Is there a way to throw ideas around and see what sticks, without becoming a forum pariah?
Thanks for the feedback, Elo, Lumifer, gjm.
“You are on an internet forum. How much safer do you want to be?”—Lumifer. Some forums are “more equal than others.” I suspect anyone who has had unpleasant experiences online develops a modicum of caution, if not healthy apprehension.
One of the reasons I wish to participate here, is because of social isolation in a regional area. I don’t have access to face to face discourse with people who share a curiosity or desire to analyse topics much further away than the end of their nose, so to speak (bar a few much-loved exceptions).
Thank you each for your assistance and time. I feel I have a great many stupid questions to ask, and look forward to discovering which of those are not stupid.
Community is great! You’ll fit right in :)
I hope you won’t become a pariah regardless, but if you are extra-worried then there are occasional “stupid question” threads which might be usable as “stupid idea” threads too. (Not to imply that either the questions or the ideas are necessarily stupid, but the point is that even if they are the norm in such threads is meant to be to (1) be nice and (2) not inflict reputational damage on the person saying stupid things.)
This is the kiddie pool.
You are on an internet forum. How much safer do you want to be?
It is perfectly fine to try, fail, and try again. In fact, that’s how most of learning works.
Sure, some people will misunderstand you. Take it as an opportunity to practice expressing yourself very very clearly.
There are some chat groups you can join, you can post in the open thread. You can try and fail. If you want to write a post and are not sure about the quality—make sure to have spent 2 hours writing it (if not more like 20 hours) as a fail-safe.
Yes we come across as elitist. As long as you are willing to learn, willing to be curious about why others think differently from you and willing to change your mind—that’s what matters.
If you want to teach yourself and you are willing to read and do your research you will fit right in. That means books, papers, theories. We are always ferocious about knowledge. And if you can teach us—that would be great too
Welcome! You may find the topic of politics is generally frowned upon around here because of the tendency for people to go a little bit tribal in the process of talking about it. “us or them” and all that.
Aside from that, glad to have you on board and willing to question your beliefs. Feel free to ask any questions you have :)
Regarding politics, and the frowning, is it acceptable to focus on measurable results, rather than ideologies (or political “teams”—re: cerulean vs blue vs green)? Whilst I understand the tribalism you refer to, it is a bias this group and website seems to be inherently about combating; as such falsely dichotomous thinking is irrational.
For example: No matter which party is in power, across most of the world’s countries, economic systems have remained largely unaltered over recent decades. The social and psychological effects on cultural norms, born of the structural economic framework, ought not be discussed, despite their effect on trends of perceived rationality (the bias of culturally normal rational thought), because this topic bleeds into “politics”. I don’t see how economic debate can be considered separate from political or cultural debate. I don’t see how rationality can be separated from politics.
Is that too political for the scope of this forum? Interdependent causation?
If so, that’s okay, it just negates about half of my reasons for engaging here.
I don’t know how it is possible to separate rational discourse and political discourse. I don’t see how there can be a firewall between them. The social is the political, which defines what is considered rational, which is in turn influenced by cultural normalcy in the form of bias. Art, culture, community, education, social and even civilisation outcomes seem inextricable from the organisational structure we call the political sphere.
I could be wrong about all of the above.
It may be better to let me know now if political discourse about theory and measurable socio-cultural results is beyond the scope of this forum, because then I won’t waste anyone’s time.
I opened by saying: “I have unfortunately come to the conclusion that socioeconomic revolt, by any means necessary, is a moral and ethical imperative for all people, to maximise the chances of the survival of the human species.”
This is my present, primary concern. If I am not allowed to discuss this, I am in the wrong place. Thanks.
It sounds like you want some second opinions and rational evaluation regarding your political conclusion—necessity of revolt. OK.
I can think of reasons for and reasons against such a conclusion, but probably you should spell out more of your reasoning first. For example, why will revolt help humanity survive?
Generally speaking, it’s fine to discuss political philosophy and political theory. What LW tries to avoid is dumb tribal-emotional fights along the lines of “Trump is a moron! No, he will MAGA!” which just make everyone stupider.
Of course you should be prepared for disagreement—this is a diverse forum, so it’s guaranteed that there will be someone who doesn’t like your ideas. Note that this is normal—ideas that everyone agrees with are too milquetoast to be interesting.
I understand. I guess I am a little surprised that this forum has had problems with such views, given its intent.
I don’t know which of my own views are unusual or not, though. I am sure reading more of the content here will help me assess this. I don’t know how commonly known notions of determinism, social engineering, or involuntary cultural identity are. I may also be quite unskilled at describing such things in a way which can change a mind. I don’t know.
We do not know what assumptions we already hold, which we have not questioned, until something happens to highlight them and bring them to conscious analysis. So it seems safer to assume we all have such false assumptions than not. I hope to share some I have discovered within myself. Perhaps that may help others, or perhaps they will just say “D’uh”.
Perhaps I could find a more effective way to say: belief in objective free will is of much the same rational coherency as a belief in ghosts. Not provably impossible, but there is not much reason to fixate on either sans empiricism.
I don’t know if any of the above is interesting, or just mundane to you. Would it be better to say: “Belief in objective free will is just as irrational as belief in ghosts?”, and then make a case to be tested?
It’s not that the forum had problems, it’s just that this forum is quite vigilant about preventing such problems from appearing and spreading.
Is there a particular reason for you to care? LW is quite insistent that whether your views are correct is much, much more important than whether they are popular.
Eh. The issue of free will is discussed here on a regular basis. Eliezer’s take is here. Your assertion probably needs some clarification (e.g. what’s “objective” and how do you measure coherency?)
The reason I care is that if I feel enthusiastic about writing a piece on a topic, I don’t want to bore others with what they have already considered. I don’t know what others have already considered. I quite agree that being correct is far more important than being popular or common.
“Objective” is what is still real, when you do not believe in it.
Bored people can click over to another post very easily. If you feel enthusiastic, do it.
Not a terribly useful definition when applied to free will.
(The canonical form of the above is “Reality is that which, when you stop believing in it, doesn’t go away” -- Philip K. Dick)
Thanks for the PK Dick origin. I’m grateful.
I’ve recently been examining the Sapolsky lectures on behavioural biology (a 25-part YouTube playlist from Stanford), and have had my view that objective free will is unlikely to exist in any practical way thoroughly reinforced.
Feeling free is rather different from being a free, self-determining agent.
It is remarkably useful to note that what “is”, is not necessarily what we think/believe “is” real.
Subjective reality vs objective reality. Never the twain shall meet… but our subjective position on what is real and true, can come closer to the objective foundation on which our minds are built. The journey to attain greater subjective accuracy of our understanding of the [objective] universe… is the pursuit of … being more correct… and less wrong… and is of great value.
Is there any kind of empirical test which can answer whether free will is objective or not?
In Popperian terms, is there a falsifiable statement somewhere in here?
This is more about “prima facie”, as a legal and rational term. On face value, we weigh evidence, intending to look anew. We must consciously discard existing assumptions in order to consciously re-assess the topic of free will.
We accept that:
- In utero nutrients and stress for the mother can affect behaviour later.
- A horridly abusive childhood influences behaviour later.
- A hot day can alter a person’s cognitive ability.
- Low blood sugar can affect emotional intensity and cognitive ability.
...There are so many circumstantial factors which influence our thinking, involuntarily, that they must be considered overwhelming.
Competing with all of that evidence for why people behave the way they do (sociology, psychology, neurology, etc) is the experience of “being” oneself. An agent of one’s own story.
The narrative we create for ourselves, about why we do what we do, presently seems to come after the biological and circumstantial reaction to influences on us. From this position (so far an empirical one) we can surmise that our own personal narrative is more of a post-reaction rationalisation, and not actually something which could be called “free” or “agency” or “independence”.
However, because we cannot be certain that free will is not some metaphysical, sans-causation “force” (sorry for lack of a better term) we cannot presently explain, we must accept that free will is not disprovable. Much like God.
We have a weight of empirical evidence which explains influences upon people, and it is opposed by “feelings”, culture, religion, and subjective experience. Anecdotal stories promote free will. These are the same as thinking a dream is real at the time, or thinking the room is warm when really you have a fever. This is subjective experience, not empiricism.
We can doubt (Descartes) pretty much anything from an epistemological point of view, but after that, we still have to accept that there is a weight of evidence one way or another. This is our (limited) guide for our rational positions.
The weight of evidence leads us to see that advertising exists because it works. An influence designed to corrupt rational choice, still exists because it is effective.
We are all unaware of two major influences on our actions. One is bias. Irrational bias exists as an influence on us largely because we are unaware of it. We cannot compensate for a bias of which we are unaware. The other big influence is the cultural indoctrination of ideas we have ceased to question. We do not question foundational cognitive items if it does not occur to us to do so. We don’t know what our assumptions are until something happens to reveal them.
The more likely, evidence based scenario, is that we are far more reactionary, involuntary actors, than not. On top of that, we are more likely to rationalise our own agency post-neurology, post-influence, than to be “free” agents. Then we arrive back at the idea that our own subjective experience of agency and “self” is involuntary.
I hope that helps further the discourse.
I’m sorry, I am unfamiliar with Karl Popper’s work; I will examine it soon. In the sense that a statement ought to be capable of being shown false if it is false… I’m not sure how to apply that to personal subjectivity of self and the involuntary narrative we observe ourselves observing. hehe.
I don’t see any reason for that must.
Consider driving. There are so many factors which influence where the car is going—from gravity to roads—and yet, you are driving.
I don’t think we can. It is possible, of course, for you to take the position that it’s turtles all the way down, that is, that the next moment in time is fully and mechanically determined by the state of the universe at the previous moment, including your brain and your consciousness, but this approach is also not provable or disprovable and doesn’t look to be too useful for anything.
How do you gain any information about the outside world other than through subjective experiences?
Not so. “I don’t know” is a perfectly good answer. Honest, too.
That’s a different claim. It’s one thing for you to say that free will does not exist at all—as you do in the beginning of the comment—and quite another thing to start talking about the degree to which our (free-will) decision-making is influenced by factors we’re not conscious of.
By the time you have named a political figure of recent history you are already in the territory of what might be people’s identities.
Sometimes by naming an ideology you challenge someone’s identity. Then, without realising it, you are having a debate about how a person’s own character must be wrong because this ideology is wrong. From there it is a short step to full flame wars.
Part of the problem is that people are not good at talking about their ideologies while separating those ideologies from themselves.
There is theoretical discussion here. Some people will choose not to participate; if there is too much talk, there will be complaints.
We work with “not too much” being a common resource as you might find in the tragedy of the commons. It’s very hard to agree on how much is not too much but still worth it.
There is a series called “Politics is the Mind-Killer” which fuelled a lot of the norm of avoiding talking about politics. There are definitely other places to talk about politics on the internet. Having said that, if you can explain (when you do) by way of moving up and down the ladder of abstraction, while not naming ideologies or politicians, you are welcome to start a discussion.
Rationality has lots of parts. It has the parts that have you working out how to conclude that a coin flip is or is not biased (epistemics) and it has the parts that have you deciding how to bet on the coin in real life (instrumental). Yes some of that is socio-political. But some of it is also working out how to stop procrastinating or how to lose weight.
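To make the coin-flip half concrete, here is a minimal sketch, assuming a uniform Beta prior and an invented tally of flips; the specific numbers are illustrative only, not anything claimed in the comment above.

```python
from scipy.stats import beta

# Hypothetical data: 100 flips, 62 heads.
heads, tails = 62, 38

# Beta(1, 1) is a uniform prior over the coin's heads probability;
# updating on the observed flips gives a Beta(1 + heads, 1 + tails) posterior.
posterior = beta(1 + heads, 1 + tails)

# Posterior probability that the coin favours heads (p > 0.5).
print(posterior.sf(0.5))        # roughly 0.99 for this invented data
# A 95% credible interval for the heads probability.
print(posterior.interval(0.95)) # roughly (0.52, 0.71)
```

The same numbers then feed the instrumental half: whether and how to bet on the coin given that posterior.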
You are claiming the inherent bias of identity (ontology?) is involuntary. I’m not disagreeing, but pointing it out because it seems unavoidable. In service to being “Less Wrong”, I suppose we’d all like to have such identity-based bias highlighted for us in a way which was not a cause of conflict and defensiveness. I visualise this as a sort of communicative code, in which I pretend to be a robot, and try to avoid habits of subcultural expression.
Instead of saying “tankie” or “Stalinist” I should say “centralised authoritarian left”. Is that adequate?
Is it better to use terms like “individual profit motive”, or “private ownership rights”, as opposed to “capitalist ideology”?
Is that what you are concerned about? Labels, particularly political labels, are useful as a linguistic tool of thought, but also neatly disposed of by someone else’s preconceptions. It’s better to speak in concepts rather than labels, because labels mean different things to different people, and entire conversations can occur, where each participant thinks the others understand the same thing by a term, but don’t, leading to… horror.
I ask a lot of questions, apparently, at first. Thanks for your assistance.
The BBC managed to do the show “Yes, Minister”, which contains plenty of political content without saying which ideology the minister happens to have or which party he belongs to.
Quite a lot of what political ideology is about isn’t actual politics but the spectator sport of politics.
“Yes Minister” showed us all that the notion of an ideologue in politics is a fallacy. Whatever values a person has, those values are constantly compromised and neutered, because the way politics “really” works, is more about compromise based on career goals, not some sort of ideological purity.
Self interest kills idealistic goals.
Bureaucracy and the status quo render idealism untenable.
So, relying on politicians to create significant socioeconomic change in society and the world means relying on a person to do an impossible job. There is no point electing a different person to do the same job, if the job is actually impossible.
Economic power is political power. Wealth equates to political power. Democracy and Capitalism are incompatible concepts.
Princeton proved this in 2014. There is no democracy in the US, and there is no particular reason to think any other Western country is particularly different. For your consideration: https://scholar.princeton.edu/sites/default/files/mgilens/files/gilens_and_page_2014_-testing_theories_of_american_politics.doc.pdf
The Princeton study didn’t say that the rich have all the power. Both the rich and the poor want performance-based pay for teachers, but it doesn’t happen because the teachers’ unions and various unelected bureaucrats in the educational system don’t want it.
Both the Kochs and Soros want to end the war on drugs but the DEA is politically powerful enough that it doesn’t simply get shut off.
No democracy, really? Or would it be more accurate to say that US democracy falls short of some sort of theoretical ideal?
So can you cut to the chase and tell us your solution to all this?
Short version: the lower 90% of citizens on the socioeconomic scale have had absolutely no influence over the actual policies enacted by the US government, no matter which party has been in power, for the last 40+ years.
So, Democracy does not exist. It isn’t real. It is fake. It is a culturally accepted reality, but not an objective reality.
I wouldn’t recommend it. Language should be clear and to the point—the purpose of an expression is to communicate meaning and given that the recipient understands the word, “tankie” would usually be a better term to use. People with severe identity-based bias problems should deal with them and not force everyone else to tiptoe around.
Besides, precision matters. “Individual profit motive” is not the same as “private ownership rights” which is not the same as “capitalist ideology”.
True, but you are forced to use words in any case and unnecessarily roundabout expressions do not help.
Just explicitly define the terms you use and don’t worry too much whether to put a “label” sticker on these terms, or the “concept” sticker :-)
The “or” clauses in my question define sub-categories of the concept. Both ownership and profit motive are inherent to capitalist ideology, but neither define the whole.
Hmmm. If there is no human left to ask questions, moral philosophy becomes extinct, and all questions are moot. In order to continue questioning, there must still be humans alive. Ergo: the basis of all moral philosophy must be constrained by, and its quality measured by, the resulting probability of the continuation of humanity (including whatever evolutionary processes ensue). If any ideology places the survival of the human species at risk, it is fundamentally unacceptable, and ought to be rejected.
Any ideology which accepts “mutually assured destruction” as a reasonable geopolitical tool, is inherently irrational, meaning, rationally deficient, or, genuinely insane. These ideologies ought to be opposed by any means necessary, excepting those means which endanger human continuation. The continuation of a system of hierarchy and privilege for existing rulers, ought never supersede the necessity of continuation of the species (and of course humans need many other species, to continue).
I would like to make a case for the necessity of revolt when I have time and energy appropriate to the task.
Thanks.
Not ergo. Preventing moral philosophy from becoming extinct is not the absolute good dominating all others.
Future is uncertain, the risk always exists. Ideologies typically make trade-offs (different for different ideologies, of course) between competing goals.
What other option (in the context of, say, 1950s) would you propose and how would you implement/enforce it?
LOL.
To quote from an old flash cartoon
-- Fire ze missiles!
-- But I am le tired
-- Fine, take a nap AND THEN FIRE ZE MISSILES!
All questioning (one reasonable definition of human progress and value) would end if humans end. The process is what we are, more than anything else. We ask, we find answers, we evolve, we continue (hopefully) with better information than before. We make better decisions, and forge better priorities.
Some people I know are unperturbed by the idea of human extinction, as if the result would be “deserved” because “we” failed to survive.
I have a problem with that, relating to the notion that a few humans decide the fate of the many. A few of us have massive influence over culture, beliefs, and the actions of the many, and so only a few of us can decide to extinguish all of us. The “blame” for disaster is not shared equally. It is disproportionately allocated to those with socio-economic power.
Most of us are not to blame for the perspectives we have been taught to accept. Most of us are victims of ideological premises which are held involuntarily. One example is nationalism. Why is one country, which one never chose to be born into, better than another, whose populace never chose to be born there? We are one species, and we need to continue, to keep asking questions, and thereby fix our mistakes.
The first goal of any moral human is to reduce the likelihood of human extinction. I hope that clears up the issue.
I would rather obey some different ideological socio-economic-political construct/model than accept that all of humanity ought to die to avoid such a scenario. After all, we are speaking of a very few humans in positions of power making these decisions for everyone, and they seem biased towards maintaining their own privilege as if it is objectively necessary. It is not. Involuntary bias is inherent to hierarchy. It is a product of social apartheid. The alternative would be inclusion of “leadership” within the same social circumstances as the many, i.e. inclusion in the communities they rule, rather than separation. Social norms in a given sub-culture, like that of the so-called “elite”, change circumstantially. The resulting values and attitudes diverge from what the majority would consider appropriate.
I would propose that the inherent problem with hierarchy is isolation from the macrocultural values of a population, which leads to a psychosocial bias, including derision of those lower on the hierarchy than the rulers, and so the rulers become disconnected from collective rationality. Disconnected via involuntary bias.
This means the ruling “class” make decisions which suit themselves, rather than decisions which are of objective benefit to the continuation of the species.
You have made me feel bad with your “LOL”, and I’m unsure if you have said this to make me feel poorly, or to make yourself feel better. Perhaps some of each?
I don’t see it as self-evident.
If you assert that reducing the likelihood of human extinction overrides all and any other goals, you become vulnerable to what’s locally known as Pascal’s Mugging (basically, for an extremely high-value event you are forced to react to extremely low probabilities of it happening).
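For readers who have not met the term, here is a minimal numeric sketch of the structure of Pascal’s Mugging; the figures are invented purely for illustration and appear nowhere in the thread. Under naive expected-value reasoning, an astronomically large claimed payoff can swamp a vanishingly small probability.

```python
# Hypothetical numbers, chosen only to show the structure of the problem.
p_mugger_is_honest = 1e-12   # vanishingly small probability the threat/promise is real
value_if_honest    = 1e20    # astronomically large claimed stake (e.g. lives at risk)
cost_of_complying  = 10      # what the mugger asks you to hand over

# Naive expected value of complying: tiny probability times huge stake still dominates.
expected_gain = p_mugger_is_honest * value_if_honest - cost_of_complying
print(expected_gain)  # 99999990.0 -- naive EV says "pay up", which is exactly the worry
```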
Is that a choice someone is offering you? By the way, how do you think such scenarios work in game theory?
It was a chuckle. Laughter is good. Don’t take everything as social jousting.
Addition: if Gandhi were given the choice to reduce his empathy slightly in exchange for a reward, and he did so, every new exchange like that is more likely to be agreed to. This idea was mentioned on this site somewhere.
It is the same with cultural indoctrination into hierarchical social structures. The more we become used to concentrated power, the less we are able to notice and assess other options. Cultural norms inform and restrain rational thought. Bias is involuntary. Now that we see existential threat from the “normal” operation of our structure, we have trouble doing anything about it, because all alternatives have been caused to be widely believed to be wrong. Breaking out of that cognitive trap involves assessing some uncomfortable ideas...
-If it is likely that continuing this socioeconomic structure makes human extinction probable, this century, what actions are acceptable as “resistance”?
Utilitarianism would indicate that massive casualties in pursuit of revolutionary change are preferable to total casualties from inaction. Both positions are only hypotheses. Empiricism marks each as more, or less, probable. Extinction does seem increasingly likely as our system unfolds over time, so hardship from revolt is, increasingly, the rational option.
...not that this fictional revolt is likely to occur, just pointing out it may well be entirely moral to wage violent revolt in pursuit of a new and more rational system conducive to continuing human survival. Just a thought experiment. Perhaps well-used guillotines in town squares are preferable to apathetic acquiescence to existing power systems. I don’t know. I hate the idea. It is worth pointing out the moral efficacy of such an idea now that democracy has been absolutely neutered.
You’re tiptoeing all around this without explicitly saying anything definite. So what, uncomfortable or not, do you want your revolution to do?
Why do you expect that a revolt will save humanity from extinction? To quote you yourself once again, “we don’t know what we are wrong about”.
That’s a popular position. But, historically speaking, the outcomes of taking it are not great.
re: Pascal’s Mugging. There are thresholds. Would a guy hand over his wallet if he was about to die from starvation, and the wallet contained his only means to prevent this? A quadrillion doesn’t matter if he is not alive to see it.
The difference is that the best information we have indicates that no available “officially sanctioned” structural change is better than radical change, if the goal is the survival of the human species. Inequality (capitalism) killed democracy, because wealth is power. We cannot vote, using a democracy we do not have, to get democracy back. We cannot vote to prevent an oligarchic class continuing to promote consumption and the poisoning of our world. Strong cultural bias, plus power, is genocidally dangerous. What ought people who see the systemic, structural, existential threat do, if all legal avenues for change are shut off?
re: game theory choice. Yes. We are all in a situation where we must decide if one socio-economic paradigm is worth fighting for over another. Historically, wars are fought by the poor for the rich. The dominant preserve their hierarchical privilege through various means of convincing the subjugated that it is they who are under threat.
This would not matter nearly so much if we did not have evidence that our species’ projected timeline is shrinking. There is a large body of evidence that humanity may wipe ourselves out in several different ways before the end of this century. This circumstance is systemically unacceptable. If we could all continue indefinitely, being brutal and torturous, over-consuming, wasting, propagandising the lessers, and toxifying this blue marble… that would be less bad than doing so knowing the likely result is near-term extinction.
We know there is an existential threat from inaction. This means inaction is morally deficient.
There are high odds that the economic incentives and stratification (including sub-cultural influences on values—Lord Acton’s letters from 1880s: “Power corrupts” etc), will override the ability of the powerful to rationally guide humanity out of the trap we have built for ourselves.
The wealthy are now sociologically obsolete, and the ideologies they use to rationalise their positions, are also the ones which prevent conservation and environmental preservation, peace, egalitarianism, positive health outcomes, and rational planning for our collective future. Self interest often opposes any notion of global planning to shield against shared threats.
Sorry for the ramble. I’m doing my best, and hopefully learning to do better.
Why are there thresholds (=discontinuities) and where do they come from?
So tell us.
Not true. Ancient times’ wars were fought for survival. The side which lost decisively was often just erased. The males were killed, the women were taken and sold off, the settlements were razed. See Carthage, for example.
Medieval times’ wars were fought for power and wealth—the poor (that is, the peasants) were often the victims, but if their side lost, little changed in their lives. They continued to be serfs, just to another lord, and it didn’t matter that much.
Would you like to estimate the probabilities for these different ways?
Equivalent: We know there is an existential threat from action. This means action is morally deficient.
What does that mean?
There are societies without wealthy people. They… don’t do well. Notable examples are Soviet Russia and Communist China.
re: self-evident. If no one is left alive to question, then there are no more questions from us. The tree falling in the woods. Does it fall if no one notices? Yes. Do we care? On what foundation do we judge this new lack of tree?
We “know” so little, or at least know our knowledge is imperfect, so we also know that we would form more coherent/accurate/cogent value judgements if we had more information which was accurate. Our present judgements on moral value are likely to change with a greater understanding of reality.
If, right now, we don’t value human existence as much as we ought to, we can only discover how correct that judgement is, with more data/information.
We don’t know what we are wrong about, and what we are wrong about informs our value judgement.
If a person is a misanthropist, the pursuit of accurate knowledge is the pursuit of proving one’s own bias irrational.
That process is valuable. In order to validate the “choices” we make now, someone needs to be able to learn from them, and validate them, or not. Continued human existence, is a core of moral philosophy. Morality cannot exist in a void. Morality exists because we do.
Questioning is intrinsically definitive of human value, because without it, our existence is without experience. The difference between experience and reality, is the unknown.
So what?
Equivalent: If, right now, we value human existence more than we ought to, we can only discover how correct that judgement is, with more data/information.
As you yourself point out, “we don’t know what we are wrong about”.
Sure, but again, so what? You treat the existence of morality (or of “questioning”) as an absolute good, but offer no reasons why this should be so.
I started reading Ozy and SSC’s blogs about a month ago, thought they were quite good, so I figured I may as well see what LW is like.
I’ll be starting school quite soon, taking some classes from what used to be Shimer College (now the Shimer Great Books Program at North Central College, if I remember correctly). Are there any Shimerians on here?
Also, has anyone here read Robert Pirsig’s Lila: An Inquiry into Morals? Zen and the Art of Motorcycle Maintenance was good, so I figured I may as well read the sequel, and Lila has a lot of neat tricks in it that help me in trying to figure out what certain concepts/institutions might have in common with other ones, what they are opposed to, etc.
There’s a good division in it between static and Dynamic (I normally don’t like arbitrary capitalizations, but I figure I may as well follow Pirsig’s style guide, it has a good feel to it) divisions of Quality, where static is the easily recognizable forms in the manner of “This feels bad, I think the hot stove is hurting me” or “It feels good to belong to a community”, whereas Dynamic Quality is mostly unanalyzable due to its nature (it’s a rational kind of irrationality—we see Dynamic Quality, so we go for it. Only when backed up by static patterns of Quality does it succeed).
Pirsig hypothesizes that the universe is trying to move towards greater and greater Dynamic Quality, but can only do so with the help of static patterns of Quality, otherwise the mystical euphoria of Dynamic Quality will come upon us and we will die, unable to bring about more Dynamic Quality. There’s a long history lesson to go along with this, but I’m getting quite tired and would like to just post this so that I don’t forget to later.
Hi LW community. If anything, I hope my experiences from here on are humbling. I’m not particularly well read, though extremely opinionated; my only real interaction with philosophy has been a collection of works by Plato. Not to say they didn’t teach me anything, but I feel as if they merely solidified my previous beliefs, and whether that’s a good or bad thing, I’m not sure which I’m more afraid of. I’ve barely even scratched “Rationality: From AI to Zombies”, though I hope to change that within the month, as I understand there is a wealth of suggested readings. I only stumbled across this community some 2 hours ago; I had heard the outline of what is probably the most controversial topic to have existed and followed the white rabbit down this rabbit hole. I’m not sure what the community’s attitude towards that topic is, whether it’s like Voldemort and should not be named, or whether conversation on it is as open as on any other.
I am a theist, though my beliefs differ from my religious surroundings immensely, to the point of being either the more accurate interpretations or complete heresy. My beliefs, and my path to them, mostly stemmed from the idea “What if there is a God?” and reverse-engineering such an entity from the perspective of human reality. Not to say my upbringing didn’t influence me, but I agree with it on very little. Regardless, I like to think that if presented with an argument that should convince me, I would change my mind on the subject. I’ve historically mostly been your typical Hollywood Rationalist: having been introduced to the anime Death Note at a young, impressionable age, I idealized the character “L”. I wouldn’t exactly know how to label myself. I don’t believe in labels per se; I don’t believe two individuals can believe the same thing, given fundamental nature/nurture influences. I do believe labels serve their purpose of generalizing, and that our brains use this as a way of organisation, though such organisation leads to bias. I believe that there must exist one inherent truth, though perspective is the protagonist, existing in its own reality of such truth, and the human perspective isn’t the definitive view; i.e. meaning is non-existent, merely a construct of the brain; murder is not evil, it just is a thing, nor is that which is good, good. I do not believe truth and reality are the same thing. I’m concerned about free will and the implications of its existence or non-existence, and about experience: where is colour, and what is it, and the experience of it; what is it about science that allows for the creation of experience, and where does it reside? My apologies for what is an abstract mess of not-necessarily-related views, simply the ones at the top of my head in five minutes of writing about my stance and the topics and places where I’ll be lurking.
If there is a label for me based on those, I’d be interested to hear it. Other than that, after I get some reading done you may find me lurking about.
Thank you, and hi. I wonder if there are any vices we share that we can socialize through, such as games; though my social life has deteriorated to non-existence since I stopped playing, I’d love to meet new people.
If it’s the basilisk—no one cares.
If it’s the cult thing—no one cares.
If it’s eugenics—no one cares.
If it’s politics—no one cares, but don’t post it here.
If it’s MRA stuff—it’s generally not appreciated here.
If it’s other stuff—sounds like fun! Welcome! Let’s talk!
I very much appreciate the clarification.
Feel free to ask via pm.
Hey, I’ve been an anonymous reader off and on over the years.
Seeing that there was some interest in Bostrom’s simulation argument before (http://lesswrong.com/lw/hgx/paper_on_the_simulation_argument_and_selective/), I wanted to post a link to a paper I wrote on the subject, together with the following text, but I was only able to post into my (private?) Drafts section. I’m sorry I don’t know better about where the appropriate place is for this kind of thing (if it’s welcome here at all). The paper: http://www.cs.toronto.edu/~wehr/rd/simulation_args_crit_extended_with_proofs.pdf
This is a very technical paper, which requires some (or a lot) of familiarity with Bostrom/Kulczycki’s “patched” Simulation Argument (www.simulation-argument.com/patch.pdf). I’m choosing to publish it here after experiencing Analysis’s depressing version of peer review (they rejected a shorter, more-professional version of the paper based on one very positive review, and one negative review that was almost certainly written by Kulczycki or Bostrom themself).
The positive review (of the earlier shorter, more-professional version of the paper) does a better job of summarizing the contribution than I did, so with the permission of the reviewer I’m including an excerpt here:
For example, the statement of the argument in https://wiki.lesswrong.com/wiki/Simulation_argument definitely needs to be revised.
I’m not 100% clear as to where the non-ambitious posts should go, so I will write my question here.
Do you know of a practical way of finding intellectual friends, so as to have challenging/interesting conversations more often? Not only is the social aspect of friendship in general invaluable (of course I wouldn’t be asking here if that was the sole reason), but I assume talking about the topics I care and think about will force me to flesh them out and keep me closer to Truth, and is a great source of novelty. So, from a purely practical standpoint (although I don’t deny other motives), I want to improve this part of my life.
Sporadic discourse with my normal friends often pops up in unsuitable conditions and with underequipped participants. Meeting the right type of person in real life takes a huge sample and social skills. Focused forums, like this one, contain the right type of people and are very useful, but lacking in one-to-one personal and casual conversation (neither method is superior, I’d prefer a mix of both to the current imbalance).
Fun fact about me (or a thinly veiled plea for a diagnosis): often when I’m bothered by a problem or simply bored, my mind will conjure vivid conversations with one of my friends and have us argue over the problem. I never actually aim for it to happen; it’s as spontaneous as normal thinking. I have no proof, but I’d say those imaginary conversations are more productive, because my imaginary listeners will disagree or misunderstand me, raising important points or faults in my reasoning. Whereas with normal thinking, I agree with myself the vast majority of the time.
Depending on where you are in your life and education, you could consider enrolling in graduate school. I found that I tended to have intellectual conversations with my fellow students and professors in graduate school. Plus you will have at least one common interest with your fellow students—whatever subject you are studying in school.
Grad school is too big of a commitment just to find intellectual friends. But, if you have an interest in grad school to advance your education or career, then meeting intellectual friends is an added benefit.
Finally, even if you are working and do not wish to go back to school full time, many universities offer a master’s program that you can enroll in on a part-time basis. As a part-time student you will have less contact with your fellow students and therefore fewer chances to make friends, etc., but this can be overcome with a little effort to socialize, attend events, host small dinner parties, etc.
I do this too. I don’t think that it is abnormal—I agree with you that it can be a useful way to think through issues. I once worked with a more senior engineer who was also a personal friend and mentor. But, his job was demanding and he was always quite busy. So, when I needed his help to solve some problem, I would think about what sorts of questions he would ask, so that I could be prepared to answer them—basically, I would play out the (probable) conversation in my head ahead of time to avoid wasting his time. More often than not, this process would yield the answer to the problem, and I would end up not having to bother him at all.
If I’ve managed to translate “graduate school” to our educational system correctly, then I am currently in undergraduate school. Our mileages vary by quite a bit; most people I meet aren’t of that caliber. Also, it’s hard to find out whether they are. Social etiquette prevents me from bringing up the heavy-hitting topics except on rare occasions.
I guess I should work on my social skills then cast a bigger net. The larger the sample, the better odds I have of finding someone worthwhile. Needless to say I’m introverted and socialization doesn’t come easily, but I’ll find a way.
Oh, thank the proverbial God.
In that case, you could look for clubs and organizations to join at your university. If you are in engineering or natural sciences, there will probably be a professional/academic organization for your sub discipline you could join (e.g. IEEE for electrical engineers, ACS for chemistry majors, ACM for computer science, etc.) I would imagine that mathematics and liberal arts have similar organizations as well. And, attend the meetings and functions. You could also look for other organizations on campus such as political organizations, cultural organizations, a cinema society (if you are a film enthusiast), etc.
No guarantees that these will lead to intellectual conversations, but the people who join and participate in these type of organizations tend to be (on average) more intellectual than those who do not.
And, as Grothor suggested, look for nearby LessWrong meetups (if any).
Same here. I find that simulating other people’s reaction to my arguments, mistakes, or work that I’ve done is helpful. When I want to find logical errors in my arguments, I imagine explaining them to someone with a strong background in philosophy. When something isn’t working well in the lab, I imagine explaining the situation to someone with experience, and if I feel embarrassed or like they’re about to offer a super obvious solution, it usually means I’ve made some silly mistake. Also, getting back to Sandi’s question, some of the most helpful people for me to simulate are people that I met through the LessWrong meetup in Austin.
My classmates in grad school are often, but not always, a good source of more productive intellectual conversations. There is still sometimes an issue of differences in the style of thinking that people appreciate, or the kinds of topics they’re interested in. And, of course, just because someone has had enough success in graduate school to stick around and be a friend for a few years doesn’t mean they don’t succumb to a variety of biases that can make it harder to have the kinds of conversations you’re seeking.
(Also, the place to ask this sort of question might be the current Open Thread: http://lesswrong.com/r/discussion/lw/ol5/open_thread_feb_06_feb_12_2017/)
Where on earth do you get that from?
I have done, and intend to do, neither of those things.
Probably not in a disorganized, random way, and certainly not filtered through an already-decided lens. Some people have had success (on other topics) by having a discussion topic for a specific paper or book, and a thread that’s effectively a reading/study group for that paper.
Unsure if Nash’s monetary ideas will fit that profile or not.
I’d like to bet with you on one or both of those predictions if you are open to it.
Rude refers to your method of communicating, not the content of what you said. “I mean that you do not know of the subject, and I do. I can explain it, and you might understand” is very rude, and pointlessly so.
Why do you think you know how much game theory I know?
edit: I edited out the “Is English your first language” bit. That was unnecessarily rude.
Yes, exactly. And I am also asking you why you never considered 20 years of his most defining work before this. He proposes that the introduction of an international e-currency with a stable supply will incite our current fiat systems to asymptotically approach a limit he calls “ideal money”.
But I don’t want to introduce it like this. I want us to understand the other two threads I created that someone else linked to, because it is an incredibly difficult subject and read and I can save us the time.
Then write a clear and cogent post about Ideal Money—in one single thread, not three—that meets the posting standards.
That’s your only option right now. You can stamp your foot about it all you like but that is the case. Show people that you can behave sensibly and they will listen to you. If you keep allowing your frustration to override your common sense you’ll just keep having discussions like this.
I’ve tried to be helpful because you’re new, but I don’t think there’s any more I can say. I’m tapping out.
You don’t understand what I am saying. But others reading our dialogue will. I’ll say it like this: Nash doesn’t need me to jump through YOUR hoops to make his argument correct and valuable for this community and the world. You said you were going to read his 8-page essay; you could have read it by now... why are you still attacking my character, the delivery, and the messenger?
tapping out can be explained here
Perfect, thanks. I teach Ju Jitsu (which people consider an integral part of MMA) with no tapouts: https://steemit.com/mma/@jokerpravis/extending-bjj-with-no-tap-outs-the-end-of-conflict-and-competition
It is similar to what you linked, in that it’s not that I won and Tiff conceded defeat, but that there is a risk of exhaustion or injury or some certain discomfort they wish to avoid.
I think, though, that they weren’t at all speaking to me or my argument or my situation; they tired themselves out and also were too scared to actually address the topic of Nash’s Ideal Money.
This happens in Ju Jitsu too, even with “no tap outs”, when the person is not conserving energy properly (and therefore not utilizing it efficiently).
I am confident that “tapping out” in the LessWrong sense originated in martial arts. LessWrong is made up of many memes like this one; it’s a culture more than it is just a website. It’s hard to jump right in and be engaged on the level of people who juggle lots of these heuristics of behaviour all the time and have been doing so for years. Welcome! I am sure you are more than capable of catching up on our culture! You will fit right in! Lots of our culture is on the wiki and lots is in the sequences; please look around and see what you can see!
Thank you. Yes, I expected it did. But these days tapping out in martial arts implies win/loss. I am happy to see the definition here doesn’t imply that, but at the same time there seems to be a need to imply “I’m not saying you won, but I am tapping”, which is still different from what I teach.
My students look for equilibrium positions, and so there is never a purposeful end. They don’t seek ends through conflict; they seek re-solution through inquiry.
I have read many posts from here and look forward to reading more. But I will be sad if I can’t engage because we all lose a lot of value, and on my first real post I was warned I would be banned if I continue and I’m not sure what I am to avoid (or why we can’t have a thread/discussion about 20 years of Nash’s works that no one is talking about).
If you would like to make a draft in a google doc and PM me a link I would be happy to help you turn it into a post that won’t get deleted so quickly.
As an estimate; consider how long it might take you to write the post and aim for more than two hours of thinking, sitting and writing. Preferably up to 5 hours.
It’s funny some of my great posts took 20mins to write, and some took more than 100 hours.
Nash was a brilliant gift to humanity. Some of the ways you presented it have made it hard for people to be willing to engage. Happy to help with that.
You are underestimating the time and effort I have put into all this. Years. And I really appreciate it, but you see it’s such a difficult concept and so deep to traverse that it needs to be presented in a certain way. And it’s not rational for you to assume that your help wouldn’t taint the presentation (especially because it would).
You have no idea what he did, no clue. You don’t know anything about him. No one does. He spent his whole life on this problem of Ideal Money and 20 years explaining the solution.
Why will no one read and address his works?
Cheers!
Part of the problem you are currently encountering is in your presentation of the idea. I am willing to accept the premise that it is important, but I am as yet unconvinced that it’s more important than the 1200 hours of other work on my todo list.
Yes I have no idea what you are talking about; I can’t know right now. Do tell!
That may be true. My offer still stands. If you think I am managing to communicate with you now; then maybe it’s worth me managing to help you communicate with other people around here. You probably already know this but humans are not automatically strategic
My offer still stands. Willing to try. Not willing to guarantee success.
You will find it difficult making friends with your exciting ideas if you present them with that attitude.
A friend recently offered logicnation to us, and asked the same question. “Please read these hundreds of pages before talking to me” is a lot to ask. Try asking for something smaller (or breaking it down into smaller chunks) if you want people to engage.
I am new and a moderator already made a clearly irrational action against me and I am dumbfounded. I mean to present a very difficult subject that no one else can present, and I did so perfectly and in the only way possible and the moderator moderated the attempt out of existence.
Doesn’t irrationality run counter to this site’s stated mission?
To be clear, I am presenting the most important topic in the world, with the assumption that it is probably significant and correct because it’s John Nash’s (most significant) work.
Why is Less Wrong censoring out Nash’s work and implying that it is irrational?
I’m the person that moved Flinter’s post to drafts, suggesting that he resubmit it as a linkpost to Nash’s talk and put his commentary in a comment, instead of the primary post.
It’s not Nash’s most significant work, and it is not the most important topic in the world. Those sorts of statements are a major contributor to why I thought the post was bad.
(In case people are wondering if I’m politically motivated, Hayek, a person who Nash describes as thinking parallel thoughts, is my favorite political thinker. This is policing post quality, not content.)
Is it possible to use moderation tools to hide the parent comment or move it. It doesn’t even belong here and others have been nice enough to offer good feedback regardless. This is a welcome thread, and it’s being derailed with bizarre behavior.
Sadly, the only direct tool I have is comment deletion, which, rather than pruning or hiding the tree below a comment, replaces it with a box that says “Comment Deleted” and leaves its children in place. I could ask Grothor to make a new intro thread, and then delete or draft this thread.
Re this post: http://lesswrong.com/lw/ogp/a_proposal_for_a_simpler_solution_to_all_these/
You wrote something provocative but provided no arguments or explanations or examples or anything. That’s why it’s low-quality. It doesn’t matter how good your idea is if you don’t bother to do any legwork to show anyone else. I for one have no idea why your idea would work, and I don’t care to do the work to figure it out, because the only reason I have to do that work is that you said so.
Also, you might want to tackle something more concrete than “all these difficult observations and problems”. First, it’s definitely true that your ‘solution’ doesn’t solve all the problems. Maybe it helps with some. So which ones? Talk about those.
Also, your writing is exhaustingly vague (“I also value compression and time in this sense, and so I think I can propose a subject that might serve as an “ideal introduction” (I have an accurate meaning for this phrase I won’t introduce atm).”). This is really hard not to lose interest in while reading, and it’s only two random sample sentences.
Re http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/, you’re going to have to do more work to make an interesting discussion. It’s not like “Oh, Flinter, good point, you and (all of us) might have different meanings for ‘ideal’!” is going to happen. It’s on you to show why this is interesting. What made you think the meanings are different? What different results come from that? What’s your definition? What do you think other peoples’ are, and why are they worse?
I agree with Vaniver that those two posts in their current form should have been at least heavily downvoted. Though that doesn’t happen much in practice here since traffic is low. I’m not sure what the removal policy is but I guess it probably applied.
Also, if you keep writing things like “No, you can’t give me feedback. It’s above you. I have come here to explain it to you. I made 3 threads, and they are equally important.” you’re going to be banned for being an ass, no question. You’re also wildly incorrect, but that’s another matter.
And, more directly, since downvoting is currently disabled.
Hi Flinter, welcome to Less Wrong.
Don’t be too upset about a mod moving your post. You just need to get a bit more familiar with the site rules before you dive in. I’m sure it’s nothing to do with their views on John Nash. If I made a post about how much I love Terry Pratchett, a mod would take it down for being irrelevant, but that wouldn’t mean they necessarily disapproved of Terry Pratchett, would it?
Maybe take a day or two to read some threads, make a few comments and settle in here. You’ve got plenty of time to make your arguments once you’ve found your feet a bit.
Epistemic status: I do not speak for that moderator or the rest of LW. I rarely post here but have been a long time lurker. I believe that the following is correct, but I haven’t thought about it for a significant length of time.
I believe the issue is that you are asserting a specific issue as being the most important ever, with little proof other than that John Nash worked on it, which could be an appeal to authority. You provided little proof about why it is important. You gave no actual suggestions, merely comments.
You also posted three individual posts in a short time span, when all three could have been combined into a single one. It is considered polite to limit the number of posts started.
If I were you I would have presented the three separate posts in a single one, with more explanation about why you think the topic is significant, relying solely on the merits of the topic, not on an appeal to authority. I would also have given a suggestion, since you clearly seem to think that there should be something done about the issue, rather than relying on the community to give a suggestion.
Also, this might be just me, but I still have no clear picture on what the topic actually is after skimming the beginning of Nash’s lecture.
Thank you! You cannot argue it is an appeal to authority as a way of refuting it. I say it’s probably significant and correct because it’s Nash’s, and it is quite easy to traverse an 8-page paper as a community and decide whether I am making a substantial claim.
I am presenting a very difficult topic that not even Nash could get you to understand. It makes little sense for you to suggest that I am doing it wrong.
“Also, this might be just me, but I still have no clear picture on what the topic actually is after skimming the beginning of Nash’s lecture.”
Exactly. Please allow me to explain 20 years of lectures, in a very short time, so we can all understand the significance...especially before I am banned by this mod.
What did you post? I study game theory and might be able to give you more feedback.
(Not OP)
It was http://sites.stat.psu.edu/~babu/nash/money.pdf, http://lesswrong.com/r/discussion/lw/ogp/a_proposal_for_a_simpler_solution_to_all_these/, and http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/.
I am going to be the first person to use this “welcome thread” to suggest that a new member is not welcome at Less Wrong. In the case of Flinter, this conclusion should be immediately obvious from the low-quality posts and the abusive style of communication.
Around here, we have a saying that “a well-kept garden dies by pacifism”. A moderator needs to uproot this weed.
Welcome! So glad that you invited yourself to join us!
John Nash won a nobel prize for game theory. No one ignored him. He’s a great mathematician and economist, they made a movie about his life. The whole community mourned when he died in a car accident. No one is ignoring him.
Battles over definitions are interesting, and I would encourage you to become familiar with 37 ways that words can be wrong before challenging definitions.
This is a very bold claim, and would require very confident evidence to back it up. I am certainly not saying no, but the burden of proof is on you to explain why it matters so greatly to be world changing. Please feel free to put together a thesis which describes that.
Again, a bold claim. No one is censoring any body of work, and even if we did, it would still be on Wikipedia, and you would be free to talk about it elsewhere (as with the general avoidance of politics).
You seem very excited about the idea, please explain more!
In future, if you want people to be more interested in listening to you, you might want to avoid saying the following phrases:
“a moderator already made a clearly irrational action”,
“action against me”,
“I mean to present a very difficult subject”,
“no one else can present”,
“I did so perfectly”,
“the only way possible”,
“Doesn’t irrationality run counter to this site’s stated mission?”(rhetorical question),
“To be clear,”
“the most important topic in the world”
“with the assumption that it is probably significant and correct”
“Why is Less Wrong censoring …”
He spoke for 20 years and wrote for that time on the subject of Ideal Money, which he had been developing his whole life. He toured from country to country proposing his idea. Have you heard of it? Because you just stated you aren’t ignoring him and neither is the community. Do you understand his argument/proposal, and what are you doing about the significance of it?
edit: (also, btw, what he was given prizes for was just components and sub-solutions contained within his bigger proposal, Ideal Money)
There is nothing to battle over. I will be using all commonly accepted definitions. But I am particularly interested in whether or not we share the same definition for “ideal”, which is not a challenge or battle.
I have such a thesis, but why would you ask for mine and not attend to Nash’s in order to judge the truth of it? That is irrational.
Yes, my thread on it was removed, and the mod explained they favor Hayek over Nash, which is a clear indication of such bias. If they thought Nash’s proposal had merit and was rational, then we would be having dialogue in the main forum, where it belongs, or AT LEAST in the discussion section.
Thanks, cheers!
I haven’t had a chance yet, but it’s now on my list. I am digging into Keynesian Economics and Revealed preference theory at the moment.
I hope so, but just to be clear it’s best to state your premises. Especially when presenting your information.
Nash was a mathematician, I would love to see the easiest explanation to understand that you have.
Main is currently closed; it has effectively been retired and is mostly inactive. It is reserved for posts that both excel in ideas and clearly present those ideas to a wide audience. If I were to drop a link to the homepage of Wikipedia and suggest all folks need to read it, that would be of little help to anyone, and would not make it to Main.
Nash explains how Keynesianism is just another form of failed communism. He explains that even post-Keynesians are just Keynesians (alluding to the fact that he is thinking FAR beyond anyone, even beyond just-emerging cryptocurrencies). He explains how a revolution will end the Keynesian era of central banking.
More importantly, you said we aren’t being ignorant of Nash, and I showed that you are, and I still assert we all are. He did something significant with his whole life and stored it in 8 pages. I read more than 8 pages of links from you alone, I think ;)
My premise is well stated. The introduction of an objective stable unit of value. But the mod moderated my presentation out of existence.
He was much more than that, and the mod removed the explanation! It involves two threads that still exist, which I made today: one on how to solve every problem on this forum, and another that discusses the shared meaning of “ideal”. Read those; that is the best explanation ever.
Nash’s work Ideal Money obviously belongs there. Don’t be irrational.