Open & Welcome Thread — February 2023
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you’re new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
To give an update about what the Lightcone Infrastructure team has been working on: we recently decided to close down a big project we’d been running for the last 1.5 years, an office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.
At some point I hope we publish a postmortem, but for now here’s a copy of the announcement of its closure that I posted in the office Slack 2-3 weeks ago.
jesus o.o
The link seems to be missing.
Also: Looking forward to the postmortem.
As it should be, because the anonymous survey about the SF offices is not for you. It’s for the people who were using the offices in question and thus have access to the original Slack channel posting with the link intact. (Obviously, you can’t filter out rando Internet submissions of ‘feedback’ if it’s anonymous.)
Hi all,
nice to meet you! I’ve been silently reading LW for the past 4ish years (really enjoyed it), even went to CFAR workshops (really enjoyed them), and I regard myself as a rationalist overall (usually enjoy it; sometimes, as expected, it’s a bit hard), but so far I’ve felt too shy to comment. But now I have a question, so, well, it leaves me with very little choice ;).
Is it possible to export LW sequences to PDF? I don’t like reading on my computer, accessing the forum from my e-reader each time is a pain, and I’d really like to read, for example, the whole “2022 MIRI alignment discussion”. Any advice here?
Yes, for example here (announcement post).
Many more exist.
Thank you!
LessWrong open thread, I love it. I hope they become as lively as the ACX Open Threads (if they have the same intention).
I’m reading the sequences this year (1/day, motivated by this post) and am enjoying it so far. Lmk if I’m wasting my time by not “just” reading the highlights.
PS: In case you or someone you know is looking for a software engineer, here’s my profile: https://cv.martinmilbradt.de/. Preferably freelance, but I’m open to employment if the project is impactful or innovative.
Not a waste of time; generally a good idea :P
I remember seeing a short fiction story about a EURISKO-type AI system taking over the world in the 1980s (?) here on LessWrong, but I can’t find it via search engines, to the point that I wonder whether I hallucinated this. Does anyone have an idea where to find this story?
I’ve been on lesswrong every day for almost a year now, and I’m really interested in intelligence amplification/heavy rationality boosting.
I have a complicated but solid plan to read the sequences and implement the CFAR handbook over the next few months (important since you can only read them the first time once).
I need a third thing to do simultaneously with the sequences and the CFAR handbook. It’s gotta be three. What is the best thing I can do for heavy intelligence/rationality amplification? Is it possible to ask a CFAR employee/alumni without being a bother? (I do AI policy, not technical alignment)
A third thing might be forecasting. Go to Metaculus or GJOpen and train to make predictions.
You could also make predictions relevant to your work. Will XY attend the meeting? Will this meeting result in outcome X?
This is a good idea and I should have done it anyway a long time ago. But in order for that to work for this particular plan, I’d need a pretty concentrated dose, at least at the beginning. Do you have anything to recommend for a quick, intense dose of forecasting education?
I think Tetlock’s Superforecasting book is great.
I’m still looking for things! Categories of things that work are broad, even meditation, so long as rapid intelligence amplification is the result. The point is to do all of them at once (with plenty of sleep and exercise and breaks, but no medications, not even caffeine). If I’m on to something then it could be an extremely valuable finding.
This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.
Feature suggestion: Allow one to sort a user’s comments by the number of votes.
Context: I saw a comment by Paul Christiano, and realized that probably a significant portion of the views expressed by a person lie in comments, not top-level posts. However, many people (such as Christiano) have written a lot of comments, so sorting them would allow one to find more valuable comments more easily.
Hm, you can already browse comments by a user, though. I don’t think high-voted comments being more easily accessible would make things worse (especially since high-voted comments are probably less likely to contain politically sensitive statements).
I don’t agree, but for a different reason than trevor’s.
Highly upvoted posts are a signal of what the community agrees or disagrees with, and I think making karma easier to track down would encourage Reddit-style internet-points seeking. How many people are hooked on Twitter likes/view counts?
Or “ratio’d”.
Making it easier to track these stats would be counterproductive, imo.
Is there anyone who’d be willing to intensively (that is, over a period of possibly months, at least one hour a day) help someone with terminal spaghetti brain organize a mountain of ideas about how to design a human hive mind (that is: a system for maximizing collective human intelligence and coordination ability) operating without BCIs, using only factored cognition, prediction markets, behaviorist psychology, and LLMs?
I cannot pay you except with the amusement inherent to my companionship—I literally have no money and live with my parents lol—but one of the things for which I’ve been using my prodigious free time the past few years is inventing-in-my-head a system which at this point is too complicated for me to be able to figure out how to explain clearly in one go, although it feels to me as if it’s rooted ultimately in simple principles.
I’ve never been able to organize my thoughts, in general, on any topic, better than a totally unordered ramble that I cannot afterward figure out any way to revise—and I mean, I am pathologically bad at this, as a personality trait which has remained immune to practice, which is why I rarely write top level posts here—so I actually need someone who is as good at it as I am bad and can afford to put a lot of time and effort into helping me translate my rambles into a coherent, actionable outline or better yet, wiki full of pseudocode and diagrams.
Obviously I will find some way to reciprocate, but probably in another way; I doubt I can do much good towards organizing your stuff if I can’t organize my own lol. But who knows, maybe it’s my closeness to the topic that makes me unable to do so? Anyway, thanks in advance if you decide to take this on.
Hello, I’m new here and still working on reading the Sequences and the other amazing content on here; hopefully then I’ll feel more able to join in some of the discussions and things. For now, I have what I’m sure is an embarrassingly basic question, but I can’t find an answer on here anywhere and it keeps distracting me from focusing on the content: would someone please tell me what’s the deal with the little degree symbols after some links but not others?
Thank you in advance, and warm regards to you all.
AFAIU, the symbols mark links to other content on LessWrong, indicating that you can hover over the link to see a pop-up preview of the content.
Oh, I see. Thank you very much.
LW/AF did something very recently that broke/hindered the text-to-speech software I use for a lot of my LW reading.
This is a considerable inconvenience.
Which software?
Google Assistant and/or Microsoft Edge (on mobile).
Microsoft Edge flat out cannot narrate LW posts anymore.
Google Assistant sometimes(?) fails to fetch the entire text of the main post.
Feature request: An option to sort comments by agreement-karma.
As someone who doesn’t know web development, I’m curious what the obstacle would be to letting users write their own custom comment sorting algorithm. I’m assuming comment sorting is done on the client machine, so it wouldn’t be an extra burden on the server. I’d like to sort top-level comments lexicographically: first by whether they were written by friends (or at least people whose posts I’m subscribed to) or have descendant replies written by friends, then by whether they were written in the last 24h, then by total karma. Lower-level comments I’d sort lexicographically first by whether they were written by friends or have descendant replies written by friends, then by submission time (older first). In spite of the numerous people craving this exact sorting algorithm, I doubt the LessWrong team will implement it any time soon, so it would be cool if I could.
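To make that concrete, here’s a minimal client-side sketch in TypeScript of the comparators I have in mind. The Comment shape, the friendIds set, and the involvesFriend helper are made up for illustration; they are not the actual LessWrong data model or API.

```typescript
// Hypothetical comment shape; not the real LessWrong schema.
interface Comment {
  authorId: string;
  karma: number;          // total karma
  postedAt: Date;         // submission time
  children: Comment[];    // direct replies
}

// Hypothetical list of friends / people I'm subscribed to.
const friendIds = new Set<string>(["alice", "bob"]);

// True if the comment is by a friend or any descendant reply is by a friend.
function involvesFriend(c: Comment): boolean {
  if (friendIds.has(c.authorId)) return true;
  return c.children.some(involvesFriend);
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Top-level comments: friend involvement, then posted in the last 24h, then karma (descending).
function compareTopLevel(a: Comment, b: Comment): number {
  const byFriend = Number(involvesFriend(b)) - Number(involvesFriend(a));
  if (byFriend !== 0) return byFriend;
  const now = Date.now();
  const aRecent = now - a.postedAt.getTime() < DAY_MS;
  const bRecent = now - b.postedAt.getTime() < DAY_MS;
  if (aRecent !== bRecent) return Number(bRecent) - Number(aRecent);
  return b.karma - a.karma;
}

// Replies: friend involvement, then submission time (older first).
function compareReply(a: Comment, b: Comment): number {
  const byFriend = Number(involvesFriend(b)) - Number(involvesFriend(a));
  if (byFriend !== 0) return byFriend;
  return a.postedAt.getTime() - b.postedAt.getTime();
}

// Sort a whole comment tree in place.
function sortThread(topLevel: Comment[]): void {
  topLevel.sort(compareTopLevel);
  const sortReplies = (c: Comment) => {
    c.children.sort(compareReply);
    c.children.forEach(sortReplies);
  };
  topLevel.forEach(sortReplies);
}
```

(In this sketch involvesFriend is recomputed on every comparison; a real version would memoize it per comment, but for a client-side sort of a few hundred comments it hardly matters.)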
I have a vague preference to switch from my nickname here to my real name. Are there any unexpected upsides or downsides? (To the extent that it matters, I’m no one important, probably nobody knows me, and in any case it’s probably already easy to find out my real name.)
Plus, in case I go through with it, are there any recommendations for how to ease the transition? (I’ve seen some users temporarily use nicknames like <Real Name, formerly MondSemmel> or something, though preferably a shorter version.)
Another option: if memory serves, the mods said somewhere that they’re happy for people to have two accounts, one pseudonymous and one real-named, as long as you avoid voting twice on the same posts / comments.
In the past I’ve encouraged more people to use their real names so as to stand behind their writing more; nowadays I feel more like encouraging people to use pseudonyms, so they feel less personal social cost for their writing.
Question for people working in AI Safety: Why are researchers generally dismissive of the notion that a subhuman-level AI could pose an existential risk? I see a lot of attention paid to the risks a superintelligence would pose, but what prevents, say, an AI model capable of producing biological weapons from also being an existential threat, particularly if the model is operated by a person with malicious or misguided intentions?
I think in the standard X-risk models that would be a biosafety X-risk. It’s a problem but it has little to do with the alignment problems on which AI Safety researchers focus.
Some thoughts:
Those who expect fast takeoffs would see the sub-human phase as a blip on the radar on the way to super-human AI.
The model you describe is presumably a specialist model (if it were generalist and capable of super-human biology, it would plausibly count as super-human; if it were not capable of super-human biology, it would not be very useful for the purpose you describe). In this case, the source of the risk is better thought of as the actors operating the model and the weapons produced; the AI is just a tool.
Super-human AI is a particularly salient risk because, unlike other risks, there is reason to expect it to be unintentional; most people don’t want to destroy the world.
The actions for reducing x-risk from sub-human AI and from super-human AI are likely to be very different, with the former mostly focused on the uses of the AI and the latter on solving relatively novel technical and social problems.
“Games we play”: civilians helping troops from different sides of a conflict. As in: Army I entered the village; N sold out his neighbors, and the neighbors died horribly. Then Army II chased away Army I. Would N be reported as a collaborationist? Commonly, no. But everybody knows that everybody knows. And everybody knows who knows what everybody knows, which means N is probably going to sell out a lot of people if another opportunity arises.