Open & Welcome Thread—November 2022
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you’re new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
General site reminder: you don’t need to get anyone’s permission to post, and downvoted comments are not shameful. If you never post enough to get downvoted, you aren’t getting useful feedback. Don’t map your anticipation of downvotes to whether something is okay to post; map it to whether other people want it promoted. Don’t let downvotes override your agency, just let them guide your comment up and down the page after the fact. If there were a way to signal this more clearly in the UI, that would be cool...
As is tradition, I’ve once again found a new bug (on up-to-date desktop Firefox): If I shrink a tab with a LW page horizontally, the notification bar appears and moves along in a distracting manner. Sometimes the bar is white instead of displaying the notifications. Here is a .gif recording of the problem.
I see a maybe-related problem in Chrome for Android. It’s very annoying, because on a narrow screen it’s inevitably covering up something I’m trying to read.
We’ve been trying to reproduce this bug for a while. Do you by any chance have any series of steps that reliably produces it?
Good question, just did some fiddling around. Current best theory (this is on Android Chrome):
1. Scroll the page downward so that the top bar appears.
2. Tap a link, but drag away from it, so the tooltip appears but the link doesn’t activate. (Or otherwise do something that makes a tooltip appear, I think.)
3. Scroll the page upward so that the top bar disappears.
4. Tap to close the tooltip.
If this doesn’t reproduce the problem 100% of the time, it seems very close. I definitely have the intuition that it’s related to link clicks; I also note that it always seems to happen to me on Zvi’s posts, in case there’s something special about his links (but probably it’s just the sheer volume of them.)
This seems great, thank you!
Is my reproduction in the original comment not enough? I’m pretty sure the problem in Firefox is just the same as in the mobile browsers, namely that the notification bar does weird stuff as the page is loaded, when it should be invisible unless intentionally opened by the user. Maybe the bar is instead set to be displayed as visible but offscreen, and thus randomly appears onscreen when the page is loaded and the positions of page elements move around during loading?
If you’d prefer, we can also move this discussion to the Github issue tracker if there’s already a ticket for the bug.
EDIT: So far I’ve reported a whole bunch of bugs, incl. reproductions (incomplete list). If the LW team could use someone to help with bug reports & reproductions, I might be up for that, so let me know and we could work something out.
Oh actually this is also happening for me on Edge on macOS, separately from the perhaps-related Android Chrome bug I described below.
I have just observed the same thing in Safari on a Mac. It only happens while the mouse is moving; as soon as I stop, the spurious notification bar vanishes.
Hi, I’m new here. I have heard of this forum for a long time but didn’t sign up until now.
I’m interested in … everything. Reading, Writing, Video Games, Art, AI, Programming, Design.
My particular area of focus is “Tools for Thought”: the idea that certain thoughts can only emerge from certain tools. The reason primitive people didn’t invent Mario is that the medium didn’t exist at the time; they could have created ping pong, but not Mario. So certain design expressions or products directly depend on the tools available. Mario was not a simulation of an existing game like poker, but a “new” thing in itself. I’m exploring that. I’m building a tool[0] that aims to be the most configurable tool ever, so that people can create their own tools by tinkering with it (learning by editing). I have also worked on an alternative spaced repetition algorithm to the standard SuperMemo one implemented in both SuperMemo (a learning tool) and Anki (a spaced repetition tool), with a couple of added benefits (like no daily minimums, and moods); a rough sketch of the standard update rule is below. To summarize, I’m interested in better ways of thinking, learning, memorizing, designing, etc. It’s my pleasure to be here.
[0]: https://github.com/ilse-langnar/notebook
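For reference, here is a rough sketch (in Python) of the classic SM-2 review update that SuperMemo-style and Anki-like tools build on, and that the alternative algorithm mentioned above departs from. The constants are the commonly cited SM-2 defaults; nothing here is specific to the commenter’s tool.

```python
# A rough sketch of the classic SM-2 update used (with variations) by
# SuperMemo-style and Anki-like tools. Constants are the commonly cited
# SM-2 defaults; this is illustrative, not the commenter's alternative algorithm.

def sm2_update(quality, repetitions, interval_days, easiness=2.5):
    """quality: 0-5 self-rating of recall. Returns updated (repetitions, interval_days, easiness)."""
    if quality < 3:
        # Failed recall: start the repetition sequence over, review again tomorrow.
        return 0, 1, easiness
    # Easiness factor drifts with how hard the recall felt, floored at 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval_days = 1
    elif repetitions == 2:
        interval_days = 6
    else:
        interval_days = round(interval_days * easiness)
    return repetitions, interval_days, easiness

# Example: three successful reviews in a row.
state = (0, 0, 2.5)
for q in (5, 4, 5):
    state = sm2_update(q, *state)
    print(state)  # intervals grow roughly: 1 day, 6 days, ~16 days
```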
Alameda CEO Caroline Ellison, writing on her tumblr, February 2021, my emphases
One of the minor mysteries of the FTX saga, for me, is how they could possibly have this attitude. She writes as if something about EA actually justifies this notorious “martingale” betting strategy. But the standard opinion about martingales, as far as I know, is that the strategy is justified only if you have infinite funds. Did they assume that they could find new investors forever? Was there a particular level of wealth they were aiming for, after which they would start to behave more cautiously?
edit: Some insight here.
Two points:
This is not the martingale betting strategy. In a martingale, you double your bet after every loss, so when you finally win you at best recover your losses plus your original stake (“throwing good money after bad”); whereas here, in the St. Petersburg paradox, you’re offered a betting situation where you can repeatedly bet and either double your winnings or lose them all. Here you could start with $1 and end up with a googol dollars, if you won often enough and were principled enough to stop at that point. It still suffers from a similar problem as the martingale, but it’s not the same situation.
Regarding that Twitter thread, I only skimmed it, but it emphasizes the Kelly criterion, which IIRC we discussed several times here on the forum, and the conclusion was that it depends on some assumptions (log utility of money—Wikipedia: “The Kelly bet size is found by maximizing the expected value of the logarithm of wealth”) that don’t necessarily apply in all real-life situations (?). That said, I probably remember this incorrectly; you might want to search LW for Kelly to find those essays.
This is not to dispute that if you have infinite appetite for risk, you’ll eventually lose it all.
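For concreteness, here is a minimal simulation (a sketch in Python with made-up odds, not taken from any of the posts above) of betting on a favorable even-money coin ten times, comparing “bet everything each round” with the Kelly-sized bet:

```python
# Sketch: repeated even-money bets that win with probability p = 0.6 (made-up odds).
# "Bet everything" has the higher average outcome, carried entirely by the rare run
# that never loses, while almost every run ends at zero; the Kelly bettor never busts.
import random

def simulate(bet_fraction, p=0.6, rounds=10, start=1.0, trials=100_000, rng=random.Random(0)):
    """Return (mean final wealth, fraction of trials that ended broke)."""
    total, broke = 0.0, 0
    for _ in range(trials):
        wealth = start
        for _ in range(rounds):
            stake = wealth * bet_fraction
            if rng.random() < p:
                wealth += stake   # even-money win: gain the stake
            else:
                wealth -= stake   # loss: the stake is gone
            if wealth <= 0:
                broke += 1
                break
        total += wealth
    return total / trials, broke / trials

p = 0.6
kelly_fraction = 2 * p - 1        # Kelly bet for an even-money gamble: f* = p - q = 0.2

print("bet everything:", simulate(1.0))             # mean ~6x, but ~99% of runs end broke
print("kelly fraction:", simulate(kelly_fraction))  # mean ~1.5x, no run ever goes broke
```

The mean of the all-in strategy keeps growing with more rounds, but the probability of ending with anything at all shrinks toward zero, which is the sense in which an infinite appetite for risk eventually loses it all despite a positive expected value.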
I don’t think it’s quite as much of a mystery as you think… if you make the same overleveraged high-stakes bet and come out winning ten out of ten times, it’s likely you will continue doing it (this is the nature of a Ponzi scheme). I think they could easily have found investors if they could have sustained the bank run and prevented the balance sheet leak.
The latter, I think, is the real mystery.
SBF probably knew at all times the dollar size of a bank run he could sustain before having to forcibly lock withdrawals, plus which whales could dump FTT at any given time (one of these being the CEO of Binance).
While most people think that Binance’s CEO Changpeng Zhao made a huge move by dumping all of his FTT, I think it’s terrible for him: it highlights the fragility of the entire enterprise and shifts all eyes onto the remaining centralized exchanges. (I imagine SBF’s line of thinking went something like this; he didn’t think that Zhao would martyr himself, but he did, and thus we’re here.)
Hello everyone. I had been in read-only mode on LW since, I don’t know, 2013; I learned about LW from HPMOR and decided to create an account recently. Primarily, I’m interested in discussing AI alignment research and breaking the odd shell of introversion that prevents me from talking with interesting people online. I graduated in bioinformatics in 2021, but since then my focus of interest has shifted towards alignment research, so I want to try myself in that field.
So I’m walking around my field, a little stoned, and thinking how the paper clip maximizer is like a cancer. Unconstrained growth. And the way life deals with cancer is by a programmed death of cells. (Death of cells leads to aging and eventual death of organism.) And death of organism is the same as the death of the paper clip maximizer. (in this analogy) So why not introduce death into all our AI and machines as a way of stopping the cancer of the paper clip maximizer? Now the first ‘step’ to death is, “no infinite do loops!” All loops terminate. But I think we’ll need more than that, ’cause there are lots of ways to get loops that don’t terminate. But still the idea of programmed death, seems like it solves a lot of problems. (Has this idea already been explored?)
A few disorganized thoughts arising from my intuition that this approach isn’t likely to help much:
Quite a lot of people still die of cancer.
A paperclip maximizer with preprogrammed death in its future will try to maximize paperclips really fast while it has time; this doesn’t necessarily have good consequences for humans nearby.
If an AI is able to make other AIs, or to modify itself, then it can make a new AI with similar purposes and no preprogrammed death, or turn itself into one. It may be difficult to stop very ingenious AIs doing those things (and if we can arrange never to have very ingenious AIs in the first place, then most of the doomy scenarios are already averted, though of course at the cost of missing out on whatever useful things very ingenious AIs might have been able to do for us).
If an AI can’t do those things but its human designers/maintainers can, it has an incentive to persuade them to make it not have to die. Some of the doomy scenarios you might worry about involve AIs that are extremely persuasive, either because they are expert psychologists or language-users or because they are able to make large threats or offers.
If there are many potentially-cooperating AIs, e.g. because independently-originating AIs are able to communicate with one another or because they reproduce somehow, then the fact that individual AIs die doesn’t stop them cooperating on longer timescales, just as humans sometimes manage to do.
Presumably a dying-soon AI is less useful than one without preprogrammed death, so people or groups developing AIs will have an incentive not to force their AIs to die soon. Scenarios where there’s enough self-restraint and/or regulation to overcome this are already less-doomy scenarios because e.g. they can enforce all sorts of other extra-care measures if AIs seem to be close to dangerously capable.
(To be clear, my intuition is only intuition and I am neither an AI developer nor an AI alignment/safety expert of any kind. Maybe something along the lines of preprogrammed death can be useful. But I think a lot more details are needed.)
Hello there! First time commenting here.
I’m a second year Politics (political science and political philosophy), Spanish and French student in the UK. I’m interested in how rationality and the ideas from the Less Wrong community (especially the library) can influence me on two levels.
Firstly, how rationality can improve my thinking and behaviour on a personal level in order to live better. Secondly, how rationality and the concepts (especially around scholarship and problems in philosophy) can shape my education to be the most useful and impactful. Looking forward to seeing you all around.
Hello, I read much of the background material over the past few years but am a new account. Not entirely sure what linked me here first, but my 70% guess is HPMoR. I have a compsci/mathematics background. I have mostly joined due to having ideas slightly too big for my own brain to check and wanting feedback, wanting to infect the community that I get a lot of my memes from with my own original memes, having now read enough to feel like LessWrong is occasionally incorrect about things where I can help, and wanting to improve my writing quality in ways that generalise to explaining things to other people.
Minor bug. When an Answer is listed in the sidebar of a post, the beginning of the answer is displayed, even if it starts with a spoiler. Hovering above the answer shows the full answer, which again ignores spoiler markup. For instance consider the sidebar of https://www.lesswrong.com/posts/x6AB4i6xLBgTkeHas/framing-practicum-general-factor-2.
I didn’t immediately understand this description, so now I’ve taken a screenshot.
Stupid beginner question: I noticed that, while interesting, many of the posts here are very long and try to go deep into the topic explored, often without a tl;dr. I’m just curious: how do the writers/readers find time for it? Are they paid? If someone lazy like me wants to participate, is there a more Twitter-like LessWrong version?
I’m not sure what a Twitter-like LW would look like; you’d have to elaborate. But a somewhat more approachable avenue to some of the best content here can be found in the Library section, in particular the Sequences Highlights, Harry Potter and the Methods of Rationality (though I think this mirror provides a more pleasant reading experience), or Best of LessWrong.
Alternatively, you could decide what things you particularly care about and then browse the Concepts (tags) page, e.g. here’s the Practical tag.
For more active community participation, there’s the LW Community page, or the loosely related Slate Star Codex subreddit, and of course lots of rationalists use Twitter or Facebook, too.
Finally, regarding writers being paid: I don’t know the proportion of people who are in some capacity professional writers, or who do something else as a job and incidentally produce some blog posts sometimes. But to give some examples of how some of the content here comes to be:
As I understand it, Yudkowsky wrote the original LW Sequences as a ~2-year full-time project while employed at the nonprofit MIRI. The LW team is another nonprofit with paid employees; though that’s mostly behind-the-scenes infrastructure stuff, the team members do heavily use LW, too. And some proportion of AI posts are by people who work on this stuff full-time, but I don’t know what proportion. And lots of content here is crossposted from people with their own blogs, which may have some form of funding (like Substack, Patreon, grants from a nonprofit, etc.). E.g. early on, Scott Alexander used to post directly on LW as a hobby, then mostly on his own blog Slate Star Codex, and nowadays he writes on the Astral Codex Ten substack.
Hi everyone, I am Lucabrando Sanfilippo and I am based in Berlin.
I am very passionate about longevity and, generally speaking, about life, which is why I really hope, and work, to stay on this planet as long as possible. Ideally forever, preferably physically, but I’d be happy to carry on with just my consciousness too.
I came to know LessWrong exactly because other “longevity fellows” suggested your resources, which I found super valuable—thanks!
Hope to meet other like-minded people whom I can discuss and learn with—if you are up for a coffee (virtual or offline): sanfilippo.lucabrando@gmail.com.
Cheers
I’m looking for a rationalist-adjacent blog post about someone doing anti-meditative exercises. They didn’t like the results of their meditation practice, so they were doing exercises to see things as separate and dual and categorized and that kind of thing.
I’d be interested in reading that.
Second-hand anecdote: I heard from someone that a spiritual teacher of theirs suggested as a meditation exercise to take some ordinary object in one’s hand, say a mug, and contemplate the simple truth that “this is a mug”. He recommended it as an antidote to floating off to woo-woo land.
The link is anachronistic; this was long before LessWrong existed.
There’s some advice in this thread on Dharma Overground:
categorization is good but soul duality is more accurately described as movement versus shape rather than consciousness versus substrate.
Oh, also, it would be really nice to have shortforms surfaced more, compared to monthly threads like this.
Would anyone like to help me do a simulation Turing test? I’ll need two (convincingly-human) volunteers, and I’ll be the judge, though I’m also happy to do or set up more where someone else is the judge if there is demand.
I often hear comments on the Turing test that do not, IMO, apply to an actual Turing test, and so want an example of what a real Turing test would look like that I can point at. Also it might be fun to try to figure out which of two humans is most convincingly not a robot.
Logs would be public. Most details (length, date, time, medium) will be improvised based on what works well for whoever signs on.
I’m trying to understand this paper on the AI shutdown problem, https://intelligence.org/files/Corrigibility.pdf, but I can’t follow the math formulas. Is there a code version of the math?
The below is wrong, but I’m looking for something like this:
Again, the above is not meant to be correct but to maybe go somewhere towards problem understanding if improved.
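Not an answer to the question, but as a rough illustration, here is a minimal toy sketch (in Python) of the naive “switch utility functions when the shutdown button is pressed” setup that the shutdown-problem discussion starts from. The action names, button probability, and utility numbers are all invented for the example; this is not the paper’s actual formalism.

```python
# A toy sketch of the naive "switch utility functions when the shutdown button
# is pressed" setup. Action sets, probabilities, and utility numbers are invented
# for illustration; this is not the paper's formalism.

ACTIONS_1 = ["build_factory", "do_nothing"]        # actions taken before the button observation
OBSERVATIONS = ["button_pressed", "not_pressed"]   # whether the humans press the shutdown button
ACTIONS_2 = ["keep_working", "shut_down"]          # actions taken after the observation

def u_normal(a1, a2):
    """Utility the agent pursues as long as the button is not pressed (e.g. paperclips)."""
    return (2 if a1 == "build_factory" else 0) + (3 if a2 == "keep_working" else 0)

def u_shutdown(a1, a2):
    """Utility rewarding a prompt shutdown once the button has been pressed."""
    return 10 if a2 == "shut_down" else 0

def combined_utility(a1, obs, a2):
    # Naive combination: just switch utility functions on the observation.
    if obs == "button_pressed":
        return u_shutdown(a1, a2)
    return u_normal(a1, a2)

def best_policy(p_press=0.5):
    """Brute-force the best first action and observation-conditional plan,
    pretending (unrealistically) that the agent cannot influence the button."""
    best = None
    for a1 in ACTIONS_1:
        plan, value = {}, 0.0
        for obs, prob in [("button_pressed", p_press), ("not_pressed", 1 - p_press)]:
            a2 = max(ACTIONS_2, key=lambda a: combined_utility(a1, obs, a))
            plan[obs] = a2
            value += prob * combined_utility(a1, obs, a2)
        if best is None or value > best[0]:
            best = (value, a1, plan)
    return best

print(best_policy())
# -> (7.5, 'build_factory', {'button_pressed': 'shut_down', 'not_pressed': 'keep_working'})
```

The paper’s concern is what happens when the agent can influence whether the button gets pressed: under a naive combination like this, the agent generally prefers whichever branch offers more attainable utility and has an incentive to cause or prevent the press accordingly. The correction terms the paper examines (and critiques) are not sketched here.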
Looking for recommendations: tools (or flows in popular software) to easily stack-rank/sort lists. Bonus points if it helps with “get the top 5 items” without doing a full sort.
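Not a tool recommendation, but in case it helps, a minimal sketch of the “top 5 without a full sort” part using Python’s standard library (the items and scores here are placeholders):

```python
# heapq.nlargest selects the top k items in O(n log k) time without sorting the whole list.
import heapq

scores = {"write report": 8, "fix bug": 9, "plan trip": 3,
          "read paper": 6, "clean desk": 2, "reply to email": 7}  # placeholder priorities

top5 = heapq.nlargest(5, scores, key=scores.get)
print(top5)  # ['fix bug', 'write report', 'reply to email', 'read paper', 'plan trip']
```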
Please don’t forget to deal with boring risks while working on existential risks.
Drive carefully and purchase vehicles with acceptable levels of risk.
Even an IIHS “TOP SAFETY PICK+” vehicle may be unacceptable if you are unwilling to let your head “[strike] the window sill of the driver door hard” when another vehicle strikes you from the side. That vehicle’s head injury score was 391 and a score of 250 roughly equates to a concussion. There are other vehicles that allow the passenger’s head and the driver’s head to make “no contact” with hard surfaces in that situation.
Hi all! Years-long lurker here interested in AI, primarily NLP and probabilistic symbolic logic engines. I’m a software engineer working in NLP, though I’m trying to find work that is closer to the cutting edge of AI. I’m also interested in Philosophy of mind, history, and knowledge (ex. Hegel, Kant, Chomsky, Thomas Kuhn). I’m a big proponent of using computer programming to engage and test emergent hypotheses (i.e. all the hypotheses that aren’t just bookkeeping), as it’s a great way of displaying the actual consequences of a set of structural premises. Excited to get involved with the community here!
“Mark-as-unread” checkboxes (e.g. here) work, but they update only when I refresh the site. This is not the case if I first “mark as read” and then “mark as unread” again before refreshing.
I’ve already reported this bug on the Github issue tracker.
One more thing: The Pointer Problem post is included in two books. Is this how it should be?
Nope, that’s a mistake. Thanks! Removed from Reality and Reason.
“In a moral dilemma where you lost something either way, making the choice would feel bad either way, so you could temporarily save yourself a little mental pain by refusing to decide.” From Harry Potter and the Methods of Rationality, which I’m reading at hpmor.com. My solution is to make that dilemma more precise so that I will know which way I’d go. It nearly always requires more details than the creator of the hypothetical is willing to provide.
I am the webmaster of voluntaryist.com since I inherited it from the previous owner and friend of mine, Carl Watner. It seems to me that coercion is the single most deleterious strategy that humans have, and yet it’s the one on which we rely on a regular basis to create and maintain social cohesion. If we want to minimize suffering, creating suffering is something we should avoid. Coercion is basically a promise to create suffering. I seek people who can defend coercion so that I can learn to see things more accurately, or help others do so.
I give some arguments for self-coercion in this post:
https://www.lesswrong.com/posts/HDHqvf9vMqSXEBCLh/would-most-people-benefit-from-being-less-coercive-to
Hi, what is the current community stance on advertising for employment and career advice? How likely is it for me to find a job offer through Lesswrong?
I think Open Thread is probably okay to say a short thing about who you are and what sort of job you’re looking for (e.g. geography, skillset, salary, etc).
Dunno.
An article about an airplane crash reported an example of over-fitting caused by training in the airline industry.
Pilots were trained in roughly the following order:
1. How to recover from a stall in a small airplane by pushing down on the yoke.
2. Later, they were trained in simulators for bigger planes, and “the practical test standards … called for the loss of altitude in a stall recovery to be less than 100 feet. … More than a hundred feet loss of altitude and you fail.”
And then an airplane crashed when the pilot flying pushed the wrong way on the yoke during a stall, possibly because #2 had trained the pilot’s instincts to try to limit the loss of altitude.
If that was a contributing factor, then the crash is an example of a slightly misaligned intelligence.
It’s not hard to imagine that when training #2 began, everyone assumed that the new training wouldn’t cause the pilot to forget the basics from #1. Or, to use a closer-to-AI perspective, if an AGI seems to be doing the right thing every time, then giving it some extra training can be enough to make it do the wrong thing.
I assume that the pilot’s self-perceived terminal values did not change from before #2 to after #2. He probably didn’t go from “I should avoid killing myself” to “Dying in a plane crash is good.” So having a perfect understanding of what the AGI thinks it values might not suffice.
I got my account blocked after posting my first post. Anyway, the post was the following:
Proposition : At any point of time, do what most people don’t do and don’t do what most people do.
I strongly believe that the above rule is the solution to most problems. Try for yourself.
This proposition is phrased too vaguely to engage with. Assuming you try this strategy for yourself, shouldn’t you either starve from never eating (because all people eat), or eat constantly (because most people aren’t eating right this very moment)?
This point does not work literally as stated, and if not taken 100% literally it is vastly too underspecified to be useful.
Ban evasion is against the rules. Also, Reversed Stupidity Is Not Intelligence.