The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1
Taymon Beal
$1,000 Bounty for Pro-BLM Policy Analysis
Petrov Day in Boston
This looks like a duplicate.
Nit: I think this game is more standardly referred to in the literature as the “traveler’s dilemma” (Google seems to return no relevant hits for “almost free lunches” apart from this post).
Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren’t suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurring many politically-motivated clever arguments about Deep Philosophical Issues. Somewhere during that process somebody came up with scientific anti-realism, and it gained traction because it was politically workable as a compromise position, being sufficiently nonthreatening to both sides that they were content to let it be. Except for Galileo, who thought it was bullshit and refused to play along, which (in conjunction with his general penchant for pissing people off, plus the political environment having changed since Copernicus due to the Counter-Reformation) got him locked up.
Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.
This essay argues against the idea of “saving the phenomenon”, and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of “it doesn’t matter if it’s real or not” comes across as suspiciously modern.
For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.
I feel that we’ve generally done a good job of balancing access needs associated with different levels of community engagement. A number of longtime EAs with significant blogosphere presences hang out here, but the culture is also generally newcomer-friendly. Discussion topics range from 101 stuff to open research questions. Speaking only for myself, I generally strive to maintain civic/public moderation norms as much as possible.
Also you can get a pretty color for your username if you donate 10% or do direct work.
The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.
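A minimal sketch of the kind of off-by-one that formatting in local time avoids. The timestamps and timezone here are hypothetical illustrations, not taken from the actual sidebar code:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical meetup stored as a UTC timestamp:
# 2019-09-22 02:00 UTC is still the evening of 2019-09-21 in UTC-4.
start_utc = datetime(2019, 9, 22, 2, 0, tzinfo=timezone.utc)

# Taking the date straight from the UTC timestamp shows the wrong
# calendar day to anyone west of Greenwich:
utc_date = start_utc.date()  # 2019-09-22

# Converting to the event's local timezone first yields the date
# attendees actually care about:
local_start = start_utc.astimezone(timezone(timedelta(hours=-4)))
local_date = local_start.date()  # 2019-09-21
```

The dates differ by one whenever the UTC timestamp crosses midnight relative to the event's local timezone, which is exactly the class of bug the sidebar fix addresses.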
You probably want to configure this such that anyone can read and subscribe but only you can post.
I don’t feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.
I linked this article in the EA Discord that I moderate, and made the following comments:
Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mode of civic/public norms is that the people in charge, in the interest of not having a too-high standard of membership (as this set of norms requires), are overly tolerant of behaviors with negative externalities.
The problem with this is not simply that negative externalities are bad, it’s that if you have too many of them it ceases to be worth good actors’ while to participate, at which point they leave because the whole thing is voluntary. Whatever the goals of the space are, you probably can’t achieve them if there’s nobody left but trolls.
Thus it is occasionally argued that civic/public norms are self-defeating. In particular, in the rationalsphere something like this has become accepted wisdom (“well-kept gardens die by pacifism”), and attempts to make spaces more civic/public are by default met with suspicion.
(Of course, it can also be hard to tell a principled attempt at civic/public norms apart from a simple bias towards inaction on the part of the people in charge. Such a bias can stem from aversion to social conflict. Certainly, I myself am so averse.)
The way we deal with this on this server, I think, is to identify patterns that if left unchecked would cause productive people to leave (not specific productive people, but rather in the abstract), and then as principledly as possible tweak the rules to officially discourage and/or prohibit those behaviors.
It’s a fine line to walk, but I don’t think it’s impossible to do well. And there are advantages; I suspect that insecure and/or conflict-averse people may have an easier time in this kind of space, especially if they don’t have a guest or coalitional space that happens to favor them and so makes them feel safe. (Something something typical mind fallacy.)
Also, civic/public norms are the best at preventing forks and schisms. Guest norms are the worst at this. One can of course argue about whether it’s worth it, but these do very much have costs.
The other thing I found especially interesting was this quote: “Asking for ‘inclusiveness’ is usually a bid to make the group more Civic or Coalitional.”
I found this interesting because recently I made an ex cathedra statement that almost used the word “inclusive” in reference to what this server strives to be. By this I meant civic/public. I took it out because the risk of misinterpretation seemed high: in the corners of the internet that many of us frequent, “inclusive” more often means coalitional.
I fear that this system doesn’t actually provide the benefits of a breadth-first search, because you can’t really read half a comment. If I scroll down a comment page without uncollapsing it, I don’t feel like I got much of a picture of what anyone actually said, and also repeatedly seeing what people are saying cut off midsentence is really cognitively distracting.
Reddit (and I think other sites, but on Reddit I know I’ve experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small snippet of each comment. At least in my experience, this actually works, in that I’ve skimmed threads this way and felt like I got a good picture of the overall gist of the thread without having to read every comment.
I know you don’t like Reddit’s algorithm because it feeds the Matthew effect. But if most comments were hidden entirely and only a few were shown, you could optimize directly for whatever it is you’re trying to do, by tweaking the algorithm that determines which comments to show. As a degenerate example, if you wanted to optimize for strict egalitarianism, you could just show a uniform random sample of comments.
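That degenerate case is easy to sketch. The function below is a hypothetical illustration of the idea (the name and interface are mine, not a proposal for the actual site code):

```python
import random

def sample_comments(comments, k, seed=None):
    """Pick a uniform random sample of k comments to display, hiding the rest.

    Unlike score-based ranking, every comment has the same chance of being
    shown, so early upvotes confer no lasting visibility advantage
    (no Matthew effect).
    """
    rng = random.Random(seed)
    if len(comments) <= k:
        return list(comments)
    return rng.sample(comments, k)
```

Any other objective would just swap out the sampling rule; the point is that selecting whole comments, rather than truncating all of them, is the knob being tuned.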
You don’t currently expand comments that are positioned below the clicked comment but not descendants of it.
Idea: If somebody has expanded several comments, there’s a good chance they want to read the whole thread, so maybe expand all of them.
Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.
There’s an argument to be made that even if you’re not an altruist, that “societal default” only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.
I don’t have a strong opinion about the CFAR case in particular, but in general, I think this is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community’s trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization’s core competencies. I’ve reached the point where I no longer find even gross failures of this kind surprising.
(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)