This is probably my favorite link post that’s appeared on LW thus far. I’m kinda disappointed more people haven’t checked it out and voted it upward.
Having the best posts be taken away from the area where people can easily see them is certainly a terrible idea, architecture-wise.
The solution to this is what all normal subreddits do: sticky the post and change the color of its title so that it both stands out and stays in the same visual range as everything else.
“You can deduce that verbally. But I bet you can’t predict it from visualizing the scenario and asking what you’d be surprised or not to see.”
I like this.
In my mind, this plugs into Eliezer’s recent Facebook post about thinking of the world in mundane terms, in terms of what is merely-real, in terms of how you personally would go fix a sink or buy groceries at the store, versus the way you think about everything else in the world. I think these methods of thought, in which you visualize actual objects and physics in the real world, think of them in terms of bets, and check your surprise at what you internally simulate, all point at a mindset that is extremely important to learn and possess as a skill.
I hadn’t sufficiently considered that the long-term changes to LW occurred within the context of the overall changes in the internet. Thank you very much for pointing it out. Reversing the harm Moloch has done to this situation is extremely important.
I remember posting in the old vBulletin days, when a person would use a screen name but anonymity was much higher and the environment itself felt much better to exist in. Oddly enough, the places I posted back then were far from non-hostile and had a subpopulation who would go out of their way to deliberately insult people as harshly as possible. And yet… for some reason I felt substantially safer, more welcome, and more accepted there than I have anywhere else online.
To at least some extent there was a sort of compartmentalization going on in those places: serious conversation happened in one area while pure-fluffy, friendly, jokey banter went on in another. Attempting to use a single area for both sounds like a bad idea to me; it is the sort of thing LessWrong was trying to avoid (for good reason) in order to maintain high standards and valuable conversation, but which places like Tumblr allow and possibly encourage. (I don’t really know about Tumblr since I avoid it, but that’s what it looks like from the outside.) There may also have been a factor that I had substantially more in common with the people who were around at that time, whereas the internet today is full of a far more diverse set of people who have far less interest in acculturating into strange new environments.
The short-term thinking, slight pain/fear avoidance, and trivial conveniences that shifted everyone from older styles like vBulletin or LiveJournal to places like Reddit and Tumblr ultimately pattern-match to Moloch in my mind, if they lead to things like less widescale discussion of rationality or decreased development of rationalist-beloved areas. Ending or slowing down open, long-term conversations on important topics is very bad, and I hope that LW does get reignited to change that progression.
A separate action that could be taken by bloggers who are interested in it (especially people just starting new blogs) is to continue posting where they do, but disable comments on their posts and link people to the corresponding LW link post to comment on. This is far less ideal, but it allows them to post elsewhere while having the comment threads appear here on LW.
I have visual snow from trying out a medication. I can confirm that it sucks and is annoying. It’s not debilitating though and is mostly just inconvenient.
Then again, it may be slightly harming my ability to focus while reading books. Still checking that out.
Meetup: San Antonio Meetup
I went through a similar thought process before attending and decided that it was extremely unlikely I would ask for my money back even if I didn’t think the workshop had been worth the cost. That made me decide the offer wasn’t a legitimate one for me to treat as real, so I ignored it when making my final decision about whether to go.
I ultimately went and thought it was fully worth it for me. I know 3+ people who fit that pattern whom I spoke to shortly after the workshop, and 1 who thought it hadn’t actually been worth it but did not ask for their money back.
Normally I say get plenty of sleep, but I think you asked a bit late to get that answer.
This looks like it. Thank you!
I saw a link in an open thread several months back about an organization in the past that was quite similar to the Rationality movement but eventually fell apart. It was science-based self-improvement, with people trying to make rational choices, back in the 1920s or earlier. I’ve tried searching for the link again but can’t find it. Does anyone know which one I’m referring to?
I was reading through a link on an Overcoming Bias post about the AK Model and came across the idea that “the social return on many types of investments far exceeds their private return”. To rephrase this: there are investments you can make, such as getting a college education, which benefit others more than they benefit you. These seem like they could be good skills to focus on, and they may often be ignored. Obvious examples I can think of would be the Heimlich maneuver, CPR, and various social skills.
Do you know of any good low-hanging fruit in terms of skills or time investments a person can make which provide a lot of benefit to the people around them (company, family, friends, etc.) but don’t do much for the person directly?
EY was attempting to spread his ideas from his first post on Overcoming Bias onward. This pattern continued through the entire Sequences. Do you regard this as different from then?
I have a similar aesthetic. What areas of weirdness are present in the people you like the most?
I think this is closest to what I thought Hanson was trying to say, and it was close to what I was hoping people were interpreting his writing as saying. The way other people were interpreting his statements wasn’t clear from some comments I’ve read, so I thought it was worth checking into.
This is an example of why I’m curious about everyone else’s parsing. I bet Robin Hanson does talk about status in pursuit of status; however, I also bet he enjoys going around examining social phenomena in terms of status and that he is quite often on to something. These aren’t mutually exclusive. People may have an original reason for doing something, but they may develop multiple reasons over time, and their most strongly motivating reason can change.
Could you expand on this? Is this just an idea you generally hold to be true, or are there specific areas you think people should conform far less in (most especially the LW crowd)?
This makes me wonder whether lots of people who are socially awkward or learning about socialization (read: many LWers) need not only social training but conformity coaches.
I’ve been reading a lot of Robin Hanson lately and I’m curious at how other people parse his statements about status. Hanson often says something along the lines of: “X isn’t about what you thought. X is about status.”
I’ve been parsing this as: “You were incorrect in your prior understanding of what components make up X. Somewhere between 20% and 99% of X is actually made up of status. This has important consequences.”
Does this match up to how you parse his statements?
Edit:
To clarify: I don’t usually think anything is just about one thing. I think there is a list of motivations behind the first person taking an action, and one motivation is often stronger than the others. Additionally, new motivations appear or disappear as the original person continues the action over time. For people who come later, I suspect factors of copying successful patterns (also for a variety of reasons, including status matching) as well as the original possible reasons for the first person. This all makes for a more complicated pattern and generational system than just pointing and yelling “Status!” (which I hope isn’t the singular message people get from Hanson).
This has a long list of sound arguments in it, which exist in tandem with a narrative that may not actually be true. Most of the points are valid regardless, but whether they have high importance in aggregate, or whether any of the conclusions reached actually matter, depends heavily on what lens we’re looking through and what has actually been going on at Open Phil and Open AI.
I can imagine a compelling, competing narrative in which Open Phil has decided that AI safety is important and thinks the most effective thing they can do with a ton of their money is to use it to make the world safer against that x-risk. They lack useful information on the topic (since it is a very hard topic), so they outsource the actual research and the spending of the money to an organization that seems better suited to doing just that: Open AI. (Open AI may not be a good choice for that, but that’s a separate discussion.) However, since they’re donating so much money and don’t really know what Open AI might do with it in practice in the future, they make sure they get a person they trust business-wise onto the board of directors, so that the money ends up getting spent in ways that are in line with their original intentions. (A good backup plan when there are open questions about whether any group working on AI is doing more to help or to harm.)
Gwern makes a quick Fermi estimate here about how much Open AI actually costs to run per year, and reminds us that while $1 billion has been “committed” to Open AI, that’s really just a press-release social statement about a pseudo-promise by people who are known to be flaky and aren’t under any obligation to give them that money. If we estimate Open AI to be running on $9 million per year, then $30 million is a very hefty donation which gives the company roughly three more years of runway. That’s a big deal to Open AI being or not being in existence, and if they already have $9 million coming in per year from another source, then the grant could potentially double their income per year and allow them to expand into lots of new areas as a result.
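To make the arithmetic behind that Fermi estimate explicit, here is a minimal sketch in Python; the ~$9 million/year operating cost is Gwern's rough guess rather than a confirmed figure, so treat all the numbers as illustrative assumptions:

```python
# Back-of-the-envelope runway estimate (all figures are assumptions:
# ~$9M/year is a rough guess at Open AI's operating cost, not a confirmed number).
annual_burn = 9_000_000        # assumed yearly operating cost of Open AI, USD
grant = 30_000_000             # Open Phil's donation, USD

extra_runway_years = grant / annual_burn
print(f"Extra runway from the grant: ~{extra_runway_years:.1f} years")  # ~3.3 years

# If the grant were instead spent evenly over roughly 3 years, it would add
# about $10M/year on top of an assumed ~$9M/year baseline income,
# i.e. close to doubling the yearly budget.
per_year_if_spread = grant / 3
print(f"Grant spread over 3 years: ~${per_year_if_spread:,.0f} per year")
```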
~
There are a number of inductive leaps going on within the large model presented in the original post that I think are worth pointing out and examining. I’ll also stick what I think is the community’s affect/opinion on the end of each of them, because I’ve been up all night and think it’s worth denoting.
1. Open Phil is now taking AI Safety seriously as a threat to the world and has pledged $30 million of the money donated to them toward it. (Yay! Finally!)
2. Open Phil is giving that money to Open AI. (Boo! Give it to MIRI!)
3. Holden is now going to be a board member at Open AI as part of the deal. (Boo! We don’t like him because he screwed up #2, and we don’t respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn’t write the people we don’t like a blank check. That’s a terrible idea in this climate!)
These are the parts that actually matter: whether the money is going to a place that is actually useful for reducing x-risk, and whether Holden as board member is there just to ensure the money isn’t wasted on useless projects, or whether he’ll be influencing the distribution of funds larger than $30 million in ways that are harmful (or helpful!) to AI Safety. He could end up spending them wisely in ways that make the world directly safer, directly less safe, safer because the money was spent badly versus alternatives that would have been bad, or less safe because it wasn’t spent on better options.
Insofar as I think any of us should particularly care about all of this, it will have far more to do with these points than with other things. They also sound far more tractable, since the other problems you mention about Open Phil sound pretty shitty and I don’t expect a lot of those things to change much at this point.