Open & Welcome Thread September 2021
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Hey everyone! I’m Birdy, and I’m currently in my second-to-last year of Germany’s equivalent of high school.
I discovered LessWrong only about two months ago, after I saw someone mention HPMOR in their “top ten life-changing books ever” list in a reddit thread. Needless to say, I was really confused and curious, because what crazy kind of fanfiction permanently affects people’s lives? So I looked it up online and started reading. I stumbled upon LessWrong shortly after, while going a bit further down the rationality rabbit hole. And so, here I am, and I genuinely believe that discovering this place is one of the greatest things to happen to me so far.
Arriving here felt like seeing sense in the world for the first time; my parents and brother aren’t involved in science or academia at all (unless you count the “alternative medicine” and pseudo-science my mum regularly gets from Facebook). I genuinely wasn’t aware there even was a place like LessWrong, or that discussions could be so civil, reasonable and informative.
I know I still have a lot to learn, and even more to un-learn, but I’m looking forward to the journey. In just two months I’ve already noticed countless small, positive changes in the way I think and see myself and the world. (The only troublesome side effect: school has become much less tolerable as a whole. I’m truly trying to get through it with top grades, but now that I see how much time I waste there, it’s much harder to stay interested in the actual material...)
When I was fourteen, I decided to become a politician, mostly out of frustration with where the world is headed and how little I could do to prevent it. I’m still very much interested in trying to help save the world from going to hell in the next few decades, but I’m very uncertain as to whether or not my current job aspirations are really the best way to reach that goal.
Regardless; I’m very glad to be here, and excited to contribute in whichever way I can.
Hello and welcome!
I felt much warmth reading your intro. I remember how magical LessWrong was for me when I first discovered it. (Now, almost a decade in, I have a different feeling towards it, but I remain deeply proud to participate in this community.)
All of which is to say that I feel vicarious excitement for the experiences you have ahead of you. I look forward to meeting you in person one day. : )
I think this would not have helped me very much, so YMMV, but one frame you might want to consider is that of half-assing [school] with everything you’ve got.
Thanks a lot for the kind words!
I looked into the half-assing thing, and found that it might actually be somewhat helpful for me (in the sense that I’ll stop putting so much effort into subjects that aren’t as relevant/rewarding when it’s not necessary). This is something I’ve struggled with for quite a while, so thank you for the resources as well; I appreciate the effort :)
I’m interested: what are your feelings about LessWrong now?
Welcome from a fellow German here! IIRC I also stumbled on Less Wrong via HPMoR, though back then the story wasn’t even finished yet.
I must say, I’m impressed with the quality of your English writing at that age!
If you’re ambitious and driven to choose a career to make the world a better place, check out the resources at 80,000 Hours from the Less-Wrong-adjacent Effective Altruism community. They’ve done lots of research and thinking into various career paths and their expected impacts, requirements, etc. They’re not perfect, in that they e.g. expect a lot from their readers, and below a certain level of ambition and conscientiousness much of their advice might not be particularly applicable. But now might be a good time to check whether their resources could be useful to you.
If you think you could benefit from chatting with someone to get a rough overview of the landscapes of Less Wrong or effective altruism, I’m available to chat. I’m mostly a longtime lurker in the community, but I do have enough familiarity with it that I can at least point towards further resources on most topics.
Thanks for the offer; if I end up having any questions, I’ll take you up on it.
I also looked into 80,000 Hours, and although I haven’t gotten very far yet, it seems quite promising. It’s definitely a lot to take in, but I think you’re right; it would be useful for me to at least dive into it for a few hours now and then decide whether or not to continue.
I appreciate the compliment, as well—I’ve been working on developing sufficient writing skills for a while now, and am very happy to hear it pays off.
Politicians still have a lot of power in our society, so it’s one way to create change.
Given what you wrote about your background, I think there’s a good chance that you currently don’t have a good picture of how people become politicians in Germany.
German politics differs from US politics in that money isn’t central to getting a job as a politician. What’s central is how you are seen by the people who go to the meetings of the party for which you want to be elected.
If you want to become a politician it’s good to join one of the parties that has representatives in your state (Bundesland) early and participate in discussions.
There’s a lot of tension between moving towards the views that the other people in your party hold (which is partly necessary to be accepted and seen as trustworthy) and contributing your own views. If you have detailed ideas, write them up in a motion, and the other people support that motion, that’s one of the ways to earn a reputation as someone valuable to have around. Depending on the local environment, it can also be very important whom you build relationships with, in addition to your general reputation for being thoughtful.
You’re right—I don’t have even half as much of a clue about the whole process as I’d like to have, yet. I very much appreciate that you took the time to explain the basics to me.
Looking for reasonably reliable sources, joining a party, and building a certain reputation there should be extremely high on my list of priorities right now. I’ll be looking to check them off as soon as possible.
Thanks a lot!
Maybe this overview over some career paths in German politics is helpful: https://forum.effectivealtruism.org/posts/7FqszSxJ6NHBcZ7nW/report-on-careers-in-politics-and-policy-in-germany
It is! Thank you!
Welcome! That’s very similar to how I arrived here (I also discovered HPMOR in German high school, also ran into LessWrong afterwards and started reading everything else Eliezer had written), so I hope you end up having a good time. I hope I get to see you around more! :)
Hello there! I’m Kaloyan (“Kalo”) and I recently joined LessWrong. I was reminded of the platform’s existence in an episode of the Your Undivided Attention podcast. I actually first found the site a couple of years ago (can’t even remember how—searching for Zettelkasten content perhaps?), but did not get involved because I found it very overwhelming. In fact, I still do—there is so much content, on so many topics I believe to be important, that it feels impossible to become a part of the community. I realize that’s just my little voice of worry talking, so now I’m on a mission to prove myself wrong, starting with this post.
I was born and raised in Sofia, Bulgaria and recently graduated from the University of Southampton (UK) with a BSc in Computer Science. After working on my dissertation in my final year, I was inspired to further my research into complex networks and evolutionary game theory, which is what I am doing right now. I am also applying for PhDs and Masters in Europe, hoping to move to a new country soon.
Other than that, I spend a lot of time working on my personal development and the quality of my work. I enjoy experimenting with my productivity, I feel in my element when working to understand and explain complex topics, and I’m just starting to dip my toes into some popular philosophy. I enjoy writing and want to become a better communicator (I’ve started off by writing on Medium).
Now before I go, here’s a flurry of random facts: I did Kung Fu for two years, I am ~~addicted to~~ in love with green tea, the best shows I’ve seen in the past few years are Dark and Lupin, I have started my own company that failed silently, and if I wasn’t doing research I’d become a data artist.

Looking forward to taking part in the conversations on LessWrong. See you in the comments!
To get through the historical content faster, I would suggest reading the original “Sequences” in the book form, and then the 2018 community essays. (That’s still a lot of text, but ultimately less than trying to drink from the firehose of LessWrong front page and wondering how much you still missed.)
The Sequences written by Eliezer Yudkowsky are available here. Note that you can also “buy” the e-books for $0.
The 2018 community essays are here as a paper book, but you can find the list of contents here, and then find the links to the individual essays here.
Thank you for the advice!
Greetings, LWers!
I’ve finally ~~found the time~~ made up my mind to write this, so here I am.

I’ve noticed that many new members have stumbled upon the rationalist community because of HPMOR. As I never read fanfiction sites (or sites talking about fanfiction sites), my case was quite different. For some reason I distinctly remember the ridiculously long chain of links that brought me here, so I’ll post the whole list just to give an idea of how long it can take to realize the existence of a site like LessWrong:
Search for insights about the P=NP conjecture during my PhD.
Find the P-versus-NP page, a very good summary that also links to this excellent post by Scott Aaronson.
Start reading Scott Aaronson’s blog.
Scott Aaronson mentions Unsong (in this post).
Start reading Unsong.
Return to reading Scott Aaronson’s blog.
Scott Aaronson dedicates this post to the infamous NYT article about Scott Alexander.
Fail to realize that Scott Alexander is the author of Unsong.
Scott Aaronson directly quotes I Can Tolerate Anything Except The Outgroup (in this post).
Follow the link and read my first SSC post.
Start reading SSC from some top posts.
Still fail to realize that Scott Alexander is the author of Unsong.
Finally notice the “Scott also writes Unsong” note in the about page.
Continue reading SSC.
SSC mentions LessWrong.
Finally land on LW frontpage.
Start reading the Sequences.
Start reading the Codex.
Start reading HPMOR (directly from LW).
Finally sign up (after several months of lurking).
I’m not sure which conclusion to draw from this. Maybe that wondering about P=NP has a small chance of making you a better rationalist. Maybe that you can spend more than a year following a computer science professor who declares himself on the fringes of the “rationalist movement” without realizing that a rationalist movement even exists (in my defense, I started reading Shtetl-Optimized in mid-2019, and I didn’t exactly dig through the older posts… still, it took me more than a year to finally land on LW). In hindsight, many posts from Scott Aaronson are quite obviously related to rationalist concepts. For example, I first learned about the classical paperclip maximizer from Shtetl-Optimized (here), but even googling “paperclip maximizer” I didn’t land on the rationalist blogosphere; I just learned the classical description of the paperclip maximizer. It may be worth mentioning that after reading the relevant Wikipedia entry, my first thought was “an amoral paperclip maximizer can fit perfectly into my Planescape campaign”, which indicates that maybe I’m a bit too addicted to D&D.
Welcome! That chain of links was fun to read :)
What that guy said!
Coming across Scott Aaronson by way of searching for info about P=NP: that happened to me a long time ago. At the time, my reaction to ‘I think we should add “physics doesn’t enable P=NP” as a law’ was something like ‘What? Don’t you need some reason to assert that it’s impossible?’ (Though I did wonder if that’s where thermodynamics came from.)
Welcome! I always enjoy reading people’s journey to here, and am looking forward to seeing you around on here and other rationalist places on the internet! (or in person, if that ever occurs) :)
Hi! I’m Helaman Wilson, I’m living in New Zealand with my physicist father, almost-graduated-molecular-biologist mother, and six of my seven siblings.
I’ve been homeschooled as in “given support, guidance, and library access” for essentially my entire life, which currently clocks in at nearly twenty-two years from birth. I’ve also been raised in the Church of Jesus Christ of Latter-Day Saints, and, having done my best to honestly weigh the evidence for its doctrine-as-I-understand-it, find myself a firm believer.
I found the Rational meta-community via the TvTropes>HPMOR chain, but mostly stayed peripheral due to Reddit’s TOS, the lack of fiction community on LessWrong, and somewhat-borne-out concerns that I would not actually be accepted here. I was an active participant in Marked for Death, but left over GMing disagreements about two years in.
My biggest present concern with LessWrong as a community is the Karma system, which is not only one-dimensional, but not even a specific axis. I don’t mind one-dimensional praise, but I hate inarticulate criticism. Deeply awful feeling. I always try to give my best effort, you know?
If you want to place me elsewhere, it’s almost always a variant of Horatio Von Becker, or LordVonBecker on Giant in the Playground, due to the shorter character limit.
Karma for most things is just pretend points (a perk of our small size), so don’t feel too stressed. For new-ish posts, though, votes should be primarily interpreted as voting on what you want to appear highly when people look at the front page.
I share this concern, but am also at a loss for what might be better. I thought, briefly, of Slashdot’s system where there are various reasons for upvotes (funny, insightful, etc), but that always turned out to be a bit messy.
I’ve suggested before that when someone downvotes, it might prompt them to enter a reason; that’s what I’m more curious about.
I’ve also wondered before if I could get admin feedback on why something wasn’t (or was) Frontpaged. But, as if they were reading my mind, a feature like that launched this week. :)
I would like it if there were a well-researched LessWrong post on the pros and cons of different contraceptives. - Same deal with a good post on how to treat or prevent urinary tract infections, although I’m less excited about that.
I’d be willing to pay some of my own money for this to get done. Maybe up to £1000? Open to considering higher amounts.
It would mostly be a public service as I’m kind of fine with my current contraception. So, I’m also looking for people to chip in (either to offer more money or just to take some of the monetary burden off me!)
Examples of content that I would like to see included:
Clarity on the contraception and depression question. e.g. apparently theory says that hormonal IUDs should give you less depression risk than pills, but in empirical studies it looks like it’s the other way around? Can I trust the studies?
Some perspective on the trade-offs involved. E.g. maybe I can choose between a 5% increased chance of depression vs. a 100% increased chance of blood clots. But maybe basically no one gets blood clots anyway, and then I’d rather take the increased blood clot risk! But because the medical system cares more about avoiding death than I do, my doctor will never recommend me the blood clot one, or something like that.
If there wasn’t already a post on this (but I think there is), info on the fact that it’s totally fine to *not* take 7-day pill breaks every month, but that you can just take the pill all the time. (Although I think it might be recommended to take a short break every X months)
A realistic outlook on how much pain and what effects on menstruation I should expect
Various potential benefits from contraceptives aside from contraception
On the UTI side: Is the cranberry stuff a myth or is it a myth that it’s a myth or is it a myth that it’s a myth that it’s a myth?
Alternatively: If there actually already are really good resources on this topic out there, please let me know!
I think this would be really valuable and would be happy to pay $500 for a good post on this.
This is a public service. I think you could write this up as a post/question for more visibility.
Thanks! I felt kind of sheepish about making a top-level post/question out of this but will do so now. Feel free to delete my comment here if you think that makes sense.
Hello! I’ve been lurking for a little while but finally decided to create an account, mostly because I had questions. But before I ask them: my name is Max, I’m 18 years old and I want to do science for a living. I haven’t decided yet what exact area is the most appealing to me, but one that I really like is theoretical astronomy (not sure if I spelled it right, since English isn’t my native language). I came here from the HPMOR podcast and I’m really glad that I have discovered this community of like-minded people, thanks to you. So, to my questions. One of them is: what are the posts here? Are they just random users’ thoughts, or scientific articles, or both? I’ve read “humans are not automatically strategic” or something like that, and the post it was referring to; from that I got the idea that people here are exchanging their thoughts on certain subjects, trying to learn more about them. But I still don’t exactly understand how posts work, like why some of them are pinned and recommended and some aren’t. Anyway, if you could explain how things work around here, I’d really appreciate it. Thank you all once again.
Hey, welcome. You might want to check out the About Page and FAQ.
www.lesswrong.com/about
www.lesswrong.com/faq
Thanks!
Hi!
I’m Daniel. I’m living in Japan and currently working on a SaaS product as the CTO of a startup. I have a blockchain background as well; specifically, I used to develop smart contracts on Ethereum, which is kind of what led me to this community. I found this community through the podcast Rationally Speaking, which I discovered when Vitalik Buterin (co-founder of Ethereum) was on it.
I’m a self-taught programmer so I don’t have experience in academia but I would like to be involved in this community and academia in general.
I’m interested in a lot of the topics that are talked about in this community, but I would especially like to learn more about how academia works, and what the dynamics are like in relation to startups, scientific/technological evolution, and the evolution of society in general.
I was born in Canada and moved to Japan when I was one year old, so I’m looking forward to being involved with the LW community in Japan as well!
Welcome!
Do you have thoughts on Solidity as opposed to Vyper? I’ve been learning Chialisp, and after which I want to focus on Solidity.
Hi!
The last time I worked on smart contracts was almost 2 years ago, so I’m definitely not qualified to give you advice now, but I hope this is useful in some way.
I think Solidity has the most mature ecosystem of libraries/development tools, but newer languages like Vyper have additional security/modern features that were adopted by learning from (the mistakes of) older languages. (I might be wrong)
Solidity shouldn’t be a hard language to learn, so just giving it a try and seeing how you feel about it could be a good option!
Hi Samuel! Nice to meet you too!
Yes, it would be nice if we can connect.
I do have a similar experience with turning down a job. I got a job offer in the DeFi space, but I turned it down since it wasn’t well aligned with what I want to do long term.
You can DM me anytime!
Are we going to be doing Petrov Day this year? I don’t see anything currently about it here.
My guess is we are going to do some Petrov Day thing again, but it’s not confirmed. We usually plan it a week or two before it goes live.
FWIW I like this idea, and it would be cool if there were some fanfare on the site for it.
I’m trying to find an article on LessWrong that I swear I read but can’t find via Google.
It was a different analogy around Chesterton’s fence, where the town comes together to discuss a recently erected lamp post. Everyone is unhappy for different reasons: some people want it to be taller and brighter, some people want it to be shorter and dimmer, and some people want it removed so they can do evil things in the dark. Then a monk appears and tells everyone that what they need to do is think about what it means to have light.
Then a mob forms and tears the lamp post down, and maybe someone gets stabbed or robbed. And then everyone has to sit there and think about what happened, and what it means to have light, but now they have to do it in the dark.
Did I dream this up? I can’t find it anywhere.
This is a long shot, and a completely different metaphor, but are you perhaps thinking about the Parable of the Dammed?
It wasn’t, but you helped!
All I needed to fix my googling was the word Parable :). Turns out it was from Chesterton’s own writings:
”Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their un-mediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.”
Hi,
I discovered LessWrong last week, after coming across a link to this post by @johnswentworth: Core Pathways of Aging. It was sent to me by a kind stranger on the lifespan Discord, where I was looking for science-based methods of increasing healthy lifespan.
So far I have found the field of longevity extremely difficult to navigate. Research papers, commercial interests and anecdotal evidence are mixed together in one big bowl. Everyone is selling a book, YouTube channel, podcast or supplement.
It reminds me of walking down a street of restaurants with barkers trying to entice passersby (usually tourists) to come in and dine. As one could expect, the food is overpriced and of poor quality.
The excellent text by johnswentworth led me to read a lot more of the articles posted on LessWrong, and I truly enjoy the calm, rational and intelligent texts: the search for truth, admitting when one is in doubt, and striving to be objective.
Truly a breath of fresh air in my badly polluted environment.
I intend to use this site for self-improvement, especially my English, and to try to learn methods for solving complex problems and optimizing processes in the factory where I work.
Thank you all.
Welcome! :)
Some readers might have noticed that “Rough notes on the Sam Altman Q&A: GPT and AGI” is not currently on the site. The LW team has taken it down as a default while we and the author decide whether it should be posted or not, given that Sam may have requested that things like this not be shared, and we generally ought to respect such requests.
There are a few important principles in conflict around the publishing of this post. I’m trying to figure out where the balance lies. Ideally I’d write up the current state of my thinking, but doing so is proving to be equivalent to reaching the final state of my thinking, so it’ll have to wait a bit longer.
Did Sam ask at the Q&A that it not be shared, or did he contact LW and ask that it be removed? If that’s top secret, I’m okay without an answer. More just curious.
Question on LW norms: When do you strongly upvote your own comments? Never? Always? If you’re very confident in the comment? If you think the comment is particularly valuable? If the comment was time-consuming to write?
Posts are strong-upvoted by default and comments are not. I usually stick with the defaults. I have strong-upvoted my own comments, because this is allowed, but I do so pretty rarely, much less often than I strong-upvote comments from others. You don’t get any extra Karma for it, and may get downvoted even more if people think the score is too high. I feel like I need a higher threshold for mine. Strong upvotes as a feature are valuable (in part) because they are optional and rare. I don’t strong upvote because a comment was time consuming, for myself or others. I might if I think the comment is particularly valuable and wouldn’t be noticed otherwise, or if I feel it was downvoted unfairly, to give others a chance to notice it and vote.
I personally have never upvoted my own comment, though not because of some principled objection to doing it. I think as long as you don’t do it all the time, it can be useful when you think a comment is particularly important/relevant/whatever and you think people should read it. Being confident in the comment or the comment being time-consuming don’t seem like good reasons to upvote your own comment. Also, my guess is you might get more downvotes if people think you shouldn’t have strongly upvoted your own comment—I’m not sure to what extent, though.
Of course, the norm would be very different if comments were automatically strongly upvoted like posts, so even if this is the current norm it doesn’t mean it’s the one that “should” be.
Something like this seems right. It’s not the worst thing ever to do, but it’s a bit of a faux pas in my books that I’d only commit if it really did seem important.
(mod note: I edited this post to have the standard Open Thread text)
I remember someone (Paul Christiano, I think?) commenting somewhere on LessWrong, saying that Ian Goodfellow got the first GAN working on the same day that he had the idea, with a link to an article.
Does anyone happen to remember that comment, or have a link to that article?
This comment.
Not an article, but I have a link to an interview where Ian tells that story (timestamp around 3:40 if you only want that part, 2:44 if you want it as part of the complete story).
Hello, people!
My name is Arturo and I work at a public health institution in Mexico. I actually discovered LessWrong through a post that no longer exists on the site: yes, the one about the AI Basilisk. I watched a YouTube video about that post yesterday; they said it had been originally posted here, so I came to take a look. I read the first couple of articles from Rationality from A to Z and I immediately got hooked. It is true that I had never seen a website like this one, and I have only read a couple of posts, but the potential to expand and perfect my understanding of rationality here seems endless.
I am pretty excited to begin a journey of learning alongside this community!
See you around,
Arturo Morales
Hey all, my name is Rishi, a rising freshman studying computer engineering in college. I am also the co-founder of a lyric assistant platform called Poetic, and I’m spending the summer with my two other co-founders working on our web app! Keep in mind though, I have very little experience with coding, so I’ve been looking everything up as I go—as a result, it’s been an incredible learning experience for me. It is mind-boggling to think that it is only just the beginning of my journey...
I noticed that one of my co-founders is very inquisitive, curious, and vocal with his many solid pieces of reasoning. It is impressive to me that he can create and describe such solid opinions seemingly out of nowhere! He stands his ground, and he oftentimes makes a lot of sense. He is one of the greatest thinkers I know because of that, and I recently learned that he loves to read! I am getting into reading more and more now myself too, so I got excited when I learned this about him. I asked him what he reads, and he mentioned lesswrong. And now I am here as a newbie!
I’ve done a little bit of poking around, and I can tell that this is the right place for me. From the lesswrong community, I hope to learn how to become a more rational thinker so that I can give more constructive input for my team, as well as produce great content myself!
My first language is Russian, and my English is not yet fluent; translating by hand would remove all the good emotions from reading, so I use Google Translate for websites. I try to write in English to practice it, but only when it’s not too difficult. This story will be long, but I don’t think it needs its own post. I’ll tell it in chronological order.

I had just started reading (listening to) fanfiction about my favourite book, Harry Potter, when I saw a comment under an average fic saying that it wasn’t a really great fanfic; the really great one was Harry Potter and the Methods of Rationality. My favourite universe plus rationality, which I also like… I wanted to listen to it, but it had no audiobook. Before that moment I had never read anything of my own free will, but the combination was too interesting, so I tried the first lines. I liked it from the first chapter and loved it from the next, read chapter after chapter, and did nothing else for the next ten days. It was amazing; it is, still, The Best Book, no, The Best Fiction In My Life Ever. Now I regret that I was too interested to stop reading and start thinking: I had never solved riddles in books before (because I had never really read), and I missed the warning.

After finishing it I couldn’t imagine what LessWrong must be like if HPMOR is just a shadow of it, but I tried the first chapters. I didn’t begin from the start, thought they were just funny stories (good, but not greater than “really great”), and didn’t read further. Three months later I read HPMOR a second time, and then didn’t touch it for two years, because I was afraid that reading it too often would make me sick of it, and it was too good a book to let that happen. Five years after the first read I read it again, understood that it has many good/rational ideas, but I needed more, just anything. So I decided to read LessWrong, this time from the start. “And the magnitude of his own folly was at last laid bare.” I understood that I could have had all my mental achievements, and more, five years earlier if I had just read a bit more of LessWrong. The other sequences weren’t boring or not-very-great-looking. They weren’t greater than HPMOR, but they were more: they contained more ideas. I read all the Russian-translated sequences and tried to write up the results of my thinking about them.

Next I had to read the English sequences. I tried translating a little at the beginning, but it went slowly. Actually, it was excruciatingly slow. I began learning the language in an app and decided to return to the English sequences after 2.5-5 years, when I would know it fully and freely. After five (now not years, only months) I made a stupid mistake and realized I had simply forgotten that part of the sequences. I decided to start every morning with LessWrong and live in the rhythm of rationality. But I didn’t have that many forgotten sequences left, and I thought: I can use a translator. Last time I read the English wiki that way, and the machine translation was just stunning. I decided to first read one badly translated sequence in English, and I didn’t have an immediate idea of what to write, so I saved the Welcome Message in another tab, then lost track of it, and am only writing now.

The next day I watched a video debate about whether the coronavirus is natural or from a laboratory, and got a notification about a publication on LessWrong. I was amazed/shocked that debates can be so different. Since then I can’t watch ordinary debates, because now I have seen how awful the usual culture of discussion is. I also saw that Eliezer Yudkowsky thinks self-driving cars may not even become normal on the market before the world ends. Before that, I thought everything was under control.

But now I see that if I had decided to spend five years learning English, I might never have learned what killed me. And I don’t know what I alone can do if Yudkowsky, LessWrong and MIRI donations run out of time. I tried to start earning money for donations NOW, the very next day, but after a month I still don’t have proper access.

I use the lesswrong.com site, and I was surprised to find that it looks as if it were created by Intelligent Design, unlike other sites. It is so surprisingly smart: if I seriously designed my own site, it would have exactly these functions, the ones I want on other sites, simple but usually unimplemented, and the surprisingly difficult ones that are really logically consistent if you just think about them. Seriously, LessWrong is amazing overall, except for the voting. I have never seen a site or an app that looks like it was created by Intelligent Design; it’s always new graphical updates and extremely rarely something useful. Why does it take a rationalist to create a normal site with all the functions users want? It is difficult for me to switch my commenting habits from YouTube, where you write something without needing to think about its meaning, to LessWrong, where there are also rules of productive, no-politics discussion.

I can also say that from childhood I have liked science and rationality, but it was Spock’s “rationality” (I know about Spock from LessWrong; that’s just a metaphor). Now I understand that this was a mistake and that I need to train other principles of thinking: about emotions, empathy, sport, tsuyoku naritai, and others. I got a lot of interesting and new information from the English (not translated into Russian) sequence about the multiverse interpretation of QM, but I have problems understanding Bayes (I read Yudkowsky’s explanation).
Any chance we could get a “book review” icon to decorate post titles in lists so that people don’t feel they need to flag them with “[book review]...”? This could be based on the presence of the “book review” tag.
That’s an interesting idea! I’ll think about it.
Hello, I would like to ask whether there is any summary/discussion of necessary/sufficient criteria for when a reason for something (a belief, action, goal, …) counts as sufficient. If not, I would like to discuss it.
I’m sure there are people here who could give a better answer. My take, from the rationalist/Bayesian perspective, is that you have a probability assigned to each belief based on some rationale, which may be subjective and involve a lot of estimation.
The important part is that when new relevant evidence about that belief is brought to your attention, you “update,” in the Bayesian sense: “given the new evidence B, and the probability of my old belief A, what is the probability of A given B?”
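Written out, that update is just Bayes’ theorem (the concrete numbers below are invented purely for illustration):

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}$$

For example, with a prior $P(A) = 0.3$ and evidence that is twice as likely if A is true ($P(B \mid A) = 0.8$ versus $P(B \mid \neg A) = 0.4$), the posterior is $\frac{0.8 \times 0.3}{0.8 \times 0.3 + 0.4 \times 0.7} = \frac{0.24}{0.52} \approx 0.46$.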
But in practice that’s really hard to do because we have all of these crazy biases. Scott’s recent blog post was good on this point.
OK, thanks, but then one of my additional questions is: what is the reasonable threshold for the probability of my belief A given all available evidence B1, B2, …, Bn? And why?
Are you suggesting that beliefs must be binary? Either believed or not? E.g. if the probability of truth is over 50% then you believe it and don’t believe if it’s under 50%? Dispense with the binary and use the probability as your degree of belief. You can act with degrees of uncertainty. Hedge your bets, for example.
Ok, thanks. This is very interesting, and correct in theory (I guess), and I would be very glad to apply it. But before taking my first steps in it on my own by the trial-&-error method, I would like to know some best practices in doing so, if they are available at all. I strongly doubt this is common practice in the general population, and I slightly doubt that it is common practice even for a “common” attendee of this forum, but I would still like to make it my (usual) habit.
And the greatest issue I see with this is how to talk to the common people around me about common, uncertain, probabilistic things when they actually think of those things as if they were certain. Should I try to gradually and unnoticeably change their paradigm? Or should I use double language: probabilistic inside, but confident outside?
(I am aware that these questions might be difficult, and I don’t necessarily expect direct answers.)
I’m not sure what to say besides “Bayesian thinking” here. This doesn’t necessarily mean plugging in numbers (although that can help), but rather developing habits like not neglecting priors or base rates, considering how consistent the supposed evidence is with the converse of the hypothesis, and so forth. I think normal, non-rationalist people reason in a Bayesian way at least some of the time. People mostly don’t object to good epistemology, they just use a lot of bad epistemology too. Normal people understand words like “likely” or “uncertain”. These are not alien concepts, just underutilized.
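As a made-up illustration of the base-rate point (all numbers invented): suppose only 1% of claims of a certain kind are true, and a piece of evidence E shows up 90% of the time when the claim H is true but also 10% of the time when it is false. In odds form,

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)} = \frac{0.01}{0.99} \cdot \frac{0.9}{0.1} \approx 0.09,$$

so $P(H \mid E) \approx 0.08$: even fairly strong-sounding evidence leaves the claim unlikely when the prior is low, which is exactly why checking the evidence against the converse hypothesis matters.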
I’m not sure what you mean by “threshold for the probability of belief in A.”
Say A is “I currently have a nose on my face.” You could assign that .99 or .99999, and either expresses a lot of certainty that it’s true; there’s not really a threshold involved.
Say A is “It will snow in Denver on or before October 31st 2021.” Right now, I would assign that a .65 based on my history of living in Denver for 41 years (it seems like it usually does).
But I could go back and look at weather data and see how often that actually happens. Maybe it’s been 39 out of the last 41 years, in which case I should update. Or maybe there’s an El Niño-like weather pattern this year or something like that… so I would adjust up or down accordingly.
The idea being, over time, by encountering evidence and learning to evaluate the quality of the evidence, you would get closer to the “true probability” of whatever A is.
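To make that concrete, here is a minimal sketch in Python of one simple way to formalize the update described above, using a Beta-Binomial model; the prior strength and the 39-of-41 record are just the illustrative numbers from this thread, not real weather data:

```python
# Minimal sketch of updating a belief with observed counts (Beta-Binomial).
# All numbers are illustrative, taken from the hypothetical example above.

def beta_binomial_update(prior_a, prior_b, successes, failures):
    """Update a Beta(prior_a, prior_b) belief after observing counts."""
    post_a = prior_a + successes
    post_b = prior_b + failures
    return post_a, post_b, post_a / (post_a + post_b)

# Gut estimate of "about 65%" encoded as a fairly weak prior,
# roughly equivalent to having already seen 13 snowy and 7 snow-free years.
prior_a, prior_b = 13, 7

# Hypothetical record: snow on or before Oct 31 in 39 of the last 41 years.
_, _, p_snow = beta_binomial_update(prior_a, prior_b, 39, 2)

print(f"Posterior P(snow by Oct 31) is about {p_snow:.2f}")  # about 0.85
```

With a stronger prior (say Beta(65, 35), as if the gut feeling were already worth a hundred years of data), the same record would move the estimate much less; how much weight to give the prior versus the record is exactly the judgment call being described.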
Maybe you’re asking more about how certain kinds of evidence should change the probability of a belief being true? Like how much to update based on the evidence presented?
I’ve recently become interested in DeFi, but I’m not entirely sure where to start. What exactly have you been doing with it?
Can you short a crypto asset on DeFi without exposing yourself to unlimited risk? How can you trust a dapp isn’t a scam, or buggy or insecure? Are there any trustworthy derivatives like futures or options?