How are you going to prevent gaming the system and collusion?
Keep tweaking the rules until you’ve got a system where the easiest way to get karma is to make quality contributions?
There probably exist karma systems which are provably non-gameable in relevant ways. For example, if upvotes are a conserved quantity (i.e. by upvoting you, I give you 1 upvote and lose 1 of my own upvotes), then you can’t manufacture them from thin air using sockpuppets.
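A minimal sketch of what that conserved-quantity rule might look like (a toy model; all names and numbers here are hypothetical, not any real site’s API):

```python
# Toy "conserved upvotes" ledger: an upvote moves one point from the
# voter's balance to the target's, so voting alone can never increase
# the total amount of karma in circulation.
balances = {"alice": 10, "bob": 10, "sock1": 10, "sock2": 10}

def upvote(voter: str, target: str) -> None:
    if balances.get(voter, 0) < 1:
        raise ValueError(f"{voter} has no upvotes left to give")
    balances[voter] -= 1
    balances[target] = balances.get(target, 0) + 1

total_before = sum(balances.values())

# A sockpuppet ring can only shuffle its fixed pool around:
upvote("sock1", "sock2")
upvote("sock2", "sock1")

assert sum(balances.values()) == total_before  # invariant: conserved under voting
```

(The invariant above only covers voting; account creation would still need its own defenses, since any nonzero starting balance mints new karma.)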
However, it also seems like for a small community, you’re probably better off just moderating by hand. The point of a karma system is to automatically scale moderation up to a much larger number of people, at which point it makes more sense to hash out details. In other words, maybe I should go try to get a job on reddit’s moderator tools team.
Keep tweaking the rules until you’ve got a system where the easiest way to get karma is to make quality contributions?
This will never ever work. Predicting this in advance.
There probably exist karma systems which are provably non-gameable in relevant ways.
You should tell Google and academia; they will be most interested in your ideas. Don’t you think people already thought very hard about this? This is such a typical LW attitude.
Don’t you think people already thought very hard about this?
Can you show me 3 peer-reviewed papers which discuss discussion-site karma systems that differ meaningfully from reddit’s, and 3 discussion sites that implement karma systems that differ from reddit’s in interesting ways? If not, it seems like a neglected topic to me.
Maybe I’m just not very good at doing literature searches. I did a search on Google Scholar for “reddit karma” and found only one paper which focuses on reddit karma. It’s got brilliant insights such as:
The aforementioned conflict between idealistically and quantitatively motivated contributions has however led to a discrepancy between value assessments of content.
...
This is such a typical LW attitude.
I believe Robin Hanson when he says academics neglect topics if they are too weird-seeming. Do you disagree?
It’s certainly plausible that there is academic research relevant to the design of karma systems, but I don’t see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own. Relevant quote.
Coincidentally, just a couple days ago I was having a conversation with a math professor here at UC Berkeley about the feasibility of doing research outside of academia. The professor’s opinion was that this is very difficult to do in math, because math is a very “vertical” field where you have to climb to the top before making a contribution, and as long as you are going to spend half a decade or more climbing to the top, you might as well do so within the structure of academia. However, the professor did not think this was true of computer science (see: stuff like Bitcoin which did not come out of academia).
Maybe I’m just not very good at doing literature searches. I did a search on Google Scholar for “reddit karma” and found only one paper which focuses on reddit karma.
You can’t do lit searches with Google. Here’s one paper with a bunch of references on attacks on reputation systems, and on reputation systems more generally:
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36757.pdf
You are right that lots of folks outside of academia do research on this, in particular game companies (due to toxic players in multiplayer games). This is far from a solved problem—Valve, Riot and Blizzard spend an enormous amount of effort on reputation systems.
I don’t see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own.
I don’t think there is a way to write this in a way that doesn’t sound mean: because you are an amateur. IMO, the best way for amateurs to proceed is to (a) trust experts, (b) read expert stuff, and (c) mostly not talk. Chances are, your 5 minute thoughts on the matter are only adding noise to the discussion. In principle, taking expert consensus as the prior is a part of rationality. In practice, people ignore this part because it is not a practice that is fun to follow. It’s much more fun to talk than to read papers.
LW’s love affair with amateurism is one of the things I hate most about its culture.
My favorite episode in the history of science is how science “forgot” what the cure for scurvy was. In order for human civilization not to forget things, we need to be better about (a), (b), and (c) above.
I appreciate the literature pointer.
What expert consensus are you referring to? I see an unsolved engineering problem, not an expert consensus.
My view of amateurism has been formed, in large part, from reading experts on the topic:
Paul Graham: “The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.”
Richard Hamming: “Introspection, and an examination of history and of reports of those who have done great work, all seem to show typically the pattern of creativity is as follows. There is first the recognition of the problem in some dim sense. This is followed by a longer or shorter period of refinement of the problem. Do not be too hasty at this stage, as you are likely to put the problem in the conventional form and find only the conventional solution.”
Edward Boyden: “Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read, even when you’re reading what you conceive to be introductory stuff.”
This past summer I was working at a startup that does predictive maintenance for internet-connected devices. The CEO has a PhD from Oxford and did his postdoc at Stanford, so probably not an amateur. But working over the summer, I was able to provide a different perspective on the problems that the company had been thinking about for over a year, and a big part of the company’s proposed software stack ended up getting re-envisioned and written from scratch, largely due to my input. So I don’t think it’s ridiculous for me to wonder whether I’d be able to make a similar contribution at Valve/Riot/Blizzard.
The main reason I was able to contribute as much as I did was because I had the gumption to consider the possibility that the company’s existing plans weren’t very good. Basically by going in the exact opposite direction of your “amateurs should stay humble” advice.
Here are some more things I believe:
If you’re solving a problem that is similar to a problem that has already been solved, but is not an exact match, sometimes it takes as much effort to re-work an existing solution as to create a new solution from scratch.
Noise is a matter of place. A comment that is brilliant by the standards of Yahoo Answers might justifiably be downvoted on Less Wrong. It doesn’t make sense to ask that people writing comments on LW try to reach the standard of published academic work.
In computer science, industry is often “ahead” of academia in the sense that important algorithms get discovered in industry first, then academics discover them later and publish their results.
Interested to learn more about your perspective.
(a) They also laughed at Bozo the Clown. (I think this is Carl Sagan’s quote).
(b) Outside view: how often do outsiders solve a problem in a novel way, vs. just adding noise and cluelessness to the discussion? Base rates! Again, nothing I am saying is controversial: having good priors, and going with expert consensus as the prior, are already part of “rationality folklore.” It’s just that people selectively follow rationality practices only when they are fun to follow.
(c) “In computer science, industry is often “ahead” of academia in the sense that important algorithms get discovered in industry first”
Yes, this sometimes happens. But again, base rates. Google/Facebook is full of academia-trained PhDs and ex-professors, so the line here is unclear. It’s not amateurs coming up with these algorithms. John Tukey came up with the Fast Fourier Transform while at Bell Labs, but he was John Tukey, and had a math PhD from Princeton.
(Upvoted).
Chances are, your 5 minute thoughts on the matter are only adding noise to the discussion.
This is where we differ; I think the potential for substantial contributions vastly outweighs any “noise” that may be caused by amateurs taking stabs at the problem. I do not think all the low-hanging fruit is gone (and if it were, how would we know?), and I think amateurs are capable of substantial contributions in several fields. I think that optimism towards open problems is a more productive attitude.
I support “LW’s love affair with amateurism”, and it’s a part of the culture I wouldn’t want to see disappear.
You should tell Google and academia; they will be most interested in your ideas. Don’t you think people already thought very hard about this? This is such a typical LW attitude.
This reply contributes nothing to the discussion of the problem at hand, and is quite uncharitable. I hope such replies are discouraged, and if downvoting were enabled, I would have downvoted it.
If thinking that they can solve the problem at hand (and making attempts at it) is a “typical LW attitude”, then it is an attitude I want to see more of and believe should be encouraged (thus, I’ll be upvoting /u/John_Maxwell_IV’s post). A priori assuming that one cannot solve a problem (that hasn’t been proven/isn’t known to be unsolvable), and thus refraining from even attempting it, isn’t an attitude that I want to see become the norm on LessWrong. It’s not an attitude that I think is useful, productive, optimal or efficient.
It is my opinion that we want to encourage people to attempt problems of interest to the community: the potential benefits are vast (e.g. the problem is solved, and/or significant improvements are made on it, giving future endeavours a better starting point), while the potential demerits are of lesser impact (time, ours and the attempter’s, is wasted on an unpromising solution).
Coming back to the topic that was being discussed, I think methods of costly signalling are promising (for example, when you upvote a post you transfer X karma to the user, and you lose k*X (k < 1)).
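A toy sketch of that costly-signalling rule (X and k are the parameters from the comment above; everything else is illustrative):

```python
# Costly-signalling upvotes: the author gains X, but the voter pays
# k*X out of their own balance, so voting is never free.
K = 0.5  # cost fraction; the comment's k < 1

balances = {"voter": 10.0, "author": 0.0}

def upvote(voter: str, author: str, x: float = 1.0) -> None:
    cost = K * x
    if balances.get(voter, 0.0) < cost:
        raise ValueError(f"{voter} cannot afford this upvote")
    balances[voter] -= cost                            # voter pays k*X
    balances[author] = balances.get(author, 0.0) + x   # author receives X

upvote("voter", "author")
```

Note that each upvote injects (1 - k)*X new karma into the system, so a sockpuppet ring with combined budget B can increase its total karma by at most B*(1 - k)/k; the attack is bounded by the attacker’s own stake rather than made impossible.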
I have been here for a few years; I think my model of “the LW mindset” is fairly good.
I suppose the general thing I am trying to say is: “speak less, read more.” But at the end of the day, this sort of advice is hopelessly entangled with status considerations. So it’s hard to give it to a stranger and have it be received well. It only really works in the context of an existing apprenticeship relationship.
Status games aside, the sentiments expressed in my reply are my real views on the matter.
(“A priori” suggests lack of knowledge to temper an initial impression, which doesn’t apply here.)
There are problems one can’t solve by default, and a statement, standing on its own, that it’s feasible to solve them is known to be wrong. A “useful attitude” of believing something wrong is a popular stance, but is it good? How does its usefulness work, specifically, if it does, and can we get the benefits without the ugliness?
(that hasn’t been proven/isn’t known to be unsolvable)
An optimistic attitude towards problems that are potentially solvable is instrumentally useful—and dare I argue—instrumentally rational. The drawbacks of encouraging an optimistic attitude towards open problems are far outweighed by the potential benefits.
(The quote markup in your comment designates a quote from your earlier comment, not my comment.)
You are not engaging with the distinction I’ve drawn. Saying “It’s useful” isn’t the final analysis; there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya).
The problem of improving over the stance of an “optimistic attitude” might be solvable.
(The quote markup in your comment designates a quote from your earlier comment, not my comment.)
I know: I was quoting myself.
Saying “It’s useful” isn’t the final analysis
I guess for me it is.
there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya)
The beliefs aren’t known to be false. It is not clear to me that believing one can solve a problem (that isn’t known/proven or even strongly suspected to be unsolvable) is a false belief.
What do you propose to replace the optimism I suggest?