non.io is a Reddit clone that costs $1 to subscribe, and then splits that money among the users you upvote most. I think it’s an interesting idea worth watching.
Drake Morrison
Maybe? I’ve not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players.
I think it could be easily tweaked or houseruled into a peacewager game just by revealing all the hidden information. Next time I play I’ll probably try it out this way.
War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. It’s semi-cooperative because each player’s cult can change which nation it supports. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it.
Both times I’ve played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied.
There is still the counting of points so it doesn’t quite fit what you are going for here, but it is the closest game I know of where multiple players can start negotiating for mutual aid and both win.
I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?
There are lots of reasons a group (or individual) goes to play a game. This framework found the reasons clustering into these 8 categories:
Sensation (the tactile senses; enjoying the shiny coins, or the clacking of dice)
Challenge (the usual “playing to win” but also things like speedrunners)
Narratives (playing for the story, the characters and their actions)
Fantasy (enjoyment of a make-believe world. Escapism)
Fellowship (hanging out with your buds, insider jokes, etc.)
Discovery (learning new things about the game, revealing a world and map, metroidvania-style games)
Expression (spending 4 hours in the character creation menu)
Abnegation (cookie cutter games, games to rest your mind and not think about things)
The categories are not mutually exclusive by any means, and I think they point at the same thing this post is pointing at: namely, where the player’s emotional investment lies.
Oh, that’s right. I keep forgetting that LessWrong karma does the vote-weighting thing.
Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.
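For readers unfamiliar with the idea: EigenKarma-style systems score users by propagating trust through the upvote graph, rather than counting raw votes, much like PageRank. Here is a minimal sketch of that iteration; the function name, data shape, and parameters are all invented for illustration, not taken from any actual EigenKarma implementation.

```typescript
// Hypothetical sketch of an EigenKarma-style scoring pass (names invented).
// Each user's score is the trust-weighted sum of score flowing in from the
// users who upvoted them, iterated until the values stabilize (PageRank-like).
// Assumes every upvote target also appears as a key in the map.

type Upvotes = Record<string, Record<string, number>>; // voter -> { target: count }

function eigenKarma(
  upvotes: Upvotes,
  iterations = 50,
  damping = 0.85 // fraction of score passed along via upvotes each round
): Record<string, number> {
  const users = Object.keys(upvotes);
  let score: Record<string, number> = {};
  users.forEach(u => (score[u] = 1 / users.length)); // start everyone equal

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    users.forEach(u => (next[u] = (1 - damping) / users.length)); // baseline karma
    for (const voter of users) {
      const votes = upvotes[voter];
      const total = Object.values(votes).reduce((a, b) => a + b, 0);
      if (total === 0) continue; // users who upvote nobody pass nothing along
      for (const [target, n] of Object.entries(votes)) {
        // A voter's influence is split across their upvotes, proportionally.
        next[target] = (next[target] ?? 0) + damping * score[voter] * (n / total);
      }
    }
    score = next;
  }
  return score;
}
```

The key property this is meant to show: karma from a trusted user counts for more than karma from an account nobody upvotes, which is what makes the scheme harder to game with sockpuppets.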
I think this elucidates the “everyone has motives” issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me, but I’m confused about how psychoanalysis is supposed to work without devolving.
For example, let’s say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?
In the simple case, I know what my intentions are and they either trust me when I tell them or they don’t.
It’s the cases when people can’t explain themselves that are tricky. Not everyone has the introspective skill, or verbal fluency, to explain their reasoning. I’m not really sure what to do in those cases other than asking the person I’m psychoanalyzing if that’s what’s happening.
Someone did a lot of this already here. Might be worth checking their script to use yourself.
I think what you are looking for is prediction markets. The ones I know of are:
Manifold Markets—play-money that’s easy and simple to use
Metaculus—more serious one with more complex tools (maybe real money somehow?)
PredictIt—just for US politics? But looks like real money?
I don’t see all comments as criticism. Many comments are of the building-up variety! It’s just that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get within 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by that same 3-comments-per-day-per-post cap.
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern-matches with Babbling/Building, whereas writing comments matches more closely with Pruning/Breaking. In my mind anyway. (update: prediction market)
Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that’s an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)
Obviously the exact ratio doesn’t have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
- ^
I’m not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
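The per-post budget described above is simple enough to sketch in code. This is a toy illustration only; the class name, the `3` budget, and the key format are all my invention, and a real forum would persist the counts rather than keep them in memory.

```typescript
// Hypothetical sketch of the proposed per-post comment rate limit.
// Each commenter gets a fixed budget of comments on any one post; the
// budget is per-post, not site-wide, matching the footnote above.

const COMMENT_BUDGET = 3; // e.g. 3 comments per post; mods could adjust this

class PostRateLimiter {
  private counts = new Map<string, number>(); // key: `${user}:${postId}`

  // Returns true if the comment is allowed, false if rate-limited.
  tryComment(user: string, postId: string): boolean {
    const key = `${user}:${postId}`;
    const used = this.counts.get(key) ?? 0;
    if (used >= COMMENT_BUDGET) return false; // limited on this post only
    this.counts.set(key, used + 1);
    return true;
  }
}
```

Note that exhausting the budget on one post leaves your budget on every other post untouched, which is the whole point of keying on the (user, post) pair.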
If you feel like it should be written differently, then write it differently! Nobody is stopping you. Write a thousand roads to Rome.
Could Eliezer have written it differently? Maybe, maybe not. I don’t have access to his internal writing cognition any more than you do. Maybe this is the only way Eliezer could write it. Maybe he prefers it this way, I certainly do.
Light a candle, don’t curse the darkness. Build, don’t burn.
I used this link to make my own, and it seems to work nicely for me thus far.
This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.
https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness
Cult and Not-Cult aren’t two separate categories; they’re points on a spectrum that all human groups live on.
I agree wholeheartedly that the intent of the guidelines isn’t enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
An analogy I keep thinking of is the TypeScript vs. JavaScript tradeoff when programming with a team. Unless you have a weird special case, it’s just straight-up more useful to work with other people’s code when the type signatures are explicit. There’s less guessing, and therefore fewer mistakes. Yes, there are tradeoffs: you gain better understanding at the slight cost of writing more code. The thing is, you pay that cost anyway. Either you pay it up front, and people make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way.
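A toy example of the tradeoff, with invented names: the explicit signature costs a few extra characters, but every reader then knows exactly what shape of data to pass.

```typescript
// With the types written down, a caller can't guess wrong about the shape.
interface User {
  name: string;
  karma: number;
}

// Returns the user with the highest karma, or undefined for an empty list.
function topUser(users: User[]): User | undefined {
  return [...users].sort((a, b) => b.karma - a.karma)[0];
}

// In plain JavaScript the same function runs fine, but every caller has to
// rediscover (or misremember) what `users` is supposed to look like.
```

The analogy to discourse: writing `User[]` up front is the equivalent of flagging “this is an observation, that is an inference” before the conversation starts.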
People either distinguish between their observations and inferences explicitly, or you spend extra time, and make predictable mistakes, until the participants in the discourse figure out the distinction during the course of the conversation. If they can’t, the conversation doesn’t go anywhere on that topic.

I don’t see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things.
Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:
Measure twice before you cut the steel
Double-check your fittings before you test the engine
Keep track of which direction the axle is supposed to turn for the type of engine you are making
etc.
The point of the guidelines isn’t to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.
The point of “rationalist guidelines” isn’t to enforce a norm of holding particular beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes, and the guidelines are for helping people avoid them, regardless of what their beliefs are.
As always, the hard part is not saying “Boo! conspiracy theory!” and “Yay! scientific theory!”
The hard part is deciding which is which.
Wow, this hit home in a way I wasn’t expecting. I … don’t know what else to say. Thanks for writing this up, seriously.
You get money for writing posts that people like. Upvoting posts doesn’t get you money. I imagine that creates an incentive to write posts. Maybe I’m misunderstanding you?