Meta: I like the snappiness of “The LeastWrong”, but also feel a bit weird about it. I feel like it makes sense with the site title and everything, but also liked some of the straightforwardness of “Best of LessWrong”. Feel free to use this thread to give takes.
My overall sense is that “Best of LessWrong” and “The LeastWrong” are both exaggerations. “LessWrong Review Winners” feels most accurate, but is too long for most places in the UI. I prefer “The LeastWrong” over “Best of LessWrong” because it’s also more obviously in slight jest.
new solution: BestWrong
TopWrong, PeakWrong, MountWrong, …
I really dislike “The LeastWrong”. It takes something I was already on edge about and elevates it to an explicit phrase. How about just “Review Winners” or “Voted Best” or something else that doesn’t have weird implications if someone fails to interpret it as a joke?
Just to confirm: the “on edge” thing is presumably the name “LessWrong” (which can be interpreted in a kind of arrogant way, as implying we are less wrong than other people)?
I second the concern that using “LeastWrong” on the site grants undue legitimacy to the bad “than others” interpretation of the brand name (as contrasted to the intended “all models are wrong, but” meaning). “Best Of” is clear and doesn’t distort the brand.
Yeah, I can kind of see that, though I don’t feel like I can super make explicit what the implicit line of reasoning for that association is. If you have an idea of how to spell it out, I would find it helpful for thinking about the tradeoffs.
Edit: An initial attempt: “The LeastWrong” feels a bit like a global claim of “these are the least wrong things on the internet”. It also puts the emphasis on the posts on the site being the kind of thing that’s described as “less wrong” on a kind of factual level, as opposed to the people aspiring to be less wrong themselves in a more holistic sense.
But this doesn’t feel super coherent to me. Like, the central interpretation I have for the “LessWrong” name is on the about page:
The road to wisdom? Well, it’s plain and simple to express:
Err and err and err again but less and less and less.
– Piet Hein
And the meaning in here feels compatible with having some posts that are less wrong than others, and with a collective effort to try to find the “least wrong” content. In general, I feel pretty good about having a thing we collectively contribute to that’s trying to be “least wrong”, and it doesn’t feel super like it makes a comparative claim to the rest of the world. Though I do still feel like there is something real here.
Edit: An initial attempt: “The LeastWrong” feels a bit like a global claim of “these are the least wrong things on the internet”.
This is how it feels to me.
Whether you can find a logic in which that interpretation is not coherent doesn’t seem relevant to me. You can always construct a story according to which a particular association is actually wrong, but that doesn’t stop people from having that association. (And I think there are reasonable grounds for people to be suspicious about such stories, in that they enable a kind of motte-and-bailey: using a phrasing that sends the message X, while saying that of course we don’t mean to send that message and here’s an alternative interpretation that’s compatible with that phrasing. So I think that a lot of the people who’d find the title objectionable would be unpersuaded by your alternative interpretation, even assuming that they bothered to listen to it, and they would not be unreasonable to reject it.)
Comment retracted because right after writing it, I realized that the “leastwrong” is a section on LW, not its own site. I thought there was a separate leastwrong.com or something. In this case, I have much less of a feeling that it makes a global claim.
I still feel that that is the claim it is making. It’s obviously in jest to me, but my thinking about it is marred by this already being the main complaint people make about the name “LessWrong”, and reifying it further seems like going in the wrong direction.
What bothers me about it is that I don’t know if it’s on-brand for the LessWrong vision of the future.
A futuristic cityscape with flying vehicles? How do you expect that in the lifetime of any site patron without, you know, the superintelligence that either gets paused, fails to work, or replaces this vision with identical cuboid mega-buildings full of incomprehensible equipment? With desperate human refugees, looking gaunt, huddled around a fire at the base, if anyone is alive.
The LessWrong utopia looks like ordinary buildings, but richer with greenery, and you see an ambulance hauling a patient out in a cryonics capsule. There are still cars in the street (they are finally all electric), but anything really advanced isn’t available; it’s all still in safety review. Thousands of years will pass like that; coordinated humans aren’t going to turn the solar system into a Dyson swarm of ravenous robots with any urgency.
After all, new technology has side effects and risks extinction. Why take a risk?
In case it’s unclear, the side image is temporary, just around to inform folks about the new Best Of collections and artworks.
(I don’t think I am getting what this comment is about. Is it critiquing the name, or is it critiquing the frontpage image/cover image for the optimization book?)
The cover art. It looks like something e/acc would use as their cover. Rapid progress is their vision. Flying vehicles above an urban area are actually pretty risky! They will crash and people will be killed.
I definitely want a dope future with flying cars and terraformed planets and everything. Pretty sure this is true of most people on LW. Technological progress is great, and AI is a large outlier in the relative risks and benefits.
the only disagreements are about how to get there
...just because some random tribal line is sometimes being drawn in some social scene doesn’t mean it carves reality at its joints. Nothing you said is any argument that progress is bad, and by default I am a big fan of visions of technological progress. See my longer comment here for more details on my position on thinking about random tribal lines.
I thought the “tribal” line was:
Rationality from EY: pause AI for 30 years at the level of GPT-4. Nuclear war for defectors. After 30 years, very slow, careful adoption and slow forward progress for anything you need AI to accomplish. Very similar world to today.
e/acc: AI is now proven feasible; race to exploit it as fast as possible; if it results in human extinction, that’s what the laws of physics wanted. Many negative outcomes are the cost of progress*, similar to the 1960s in the USA. Geohot expressly mentions this when plotting the slowing of progress in the 1970s.
*e.g. asbestos, still one of the most fireproof substances found.
If you thought reality makes {flying cars, futuristic cities, fusion VTOL shuttles, terraforming, life extension to see this in person} require AI, on the grounds that humans have not made much progress in these domains without it in 50+ years and these things are all incredibly expensive, doesn’t this “carve reality at the joints”?
Without the intelligent robotic labor you need AI for, it won’t happen, in the same way humans would not have reached this point without engines.
Am I using “carving reality” wrong in this context, ignoring the details of the way I think future technology all has AI as a pre-requisite?
I think there was a miscommunication; I don’t mean to say that you are inaccurately describing tribal lines being drawn in some other social scene. I mean that I have little faith that these tribal lines will be a remotely accurate guide to good ideas. As with national politics in the US, which has two parties, I think it is wrong to expect that one party has all the questions exactly right and the other party has all the questions exactly wrong; there are too many other social and political forces on where tribal lines are drawn for that to be a feasible outcome. When asking yourself a specific question, it’s better to just look at how reality pertains to that question (e.g. “What is the optimal tax rate? Which societies have done better and worse with different rates? What makes sense ethically from first principles?”) rather than asking what different tribes say about it.
I think future technology all has AI as a pre-requisite?
My high-conviction hot take goes further: I think all positive future timelines have AI as a pre-requisite. I expect that, sans AI, our future (our immediate future: decades, not centuries) is going to be the ugliest, and last, chapter in our civilization’s history.