It seems to me that the next level will be one of the following, or some combination of them:
a nicer, more cooperative society
prediction markets
superhuman artificial intelligence
eugenics
genetically modified humans
cyborgs
ems
In some sense, these are all just different strategies for “becoming smarter”; the main difference is between creating smarter individuals (AI, genetically modified humans, cyborgs), creating more of the smart individuals (eugenics, ems), or improving cooperation between the existing smart individuals (niceness, prediction markets).
In the current situation, I see the problem as follows:
most people are not smart; most smart people are not sane
most people are kinda nice, but the bad actors are somehow over-represented
a lot of energy is wasted in zero-sum competitions, and we do not have good mechanisms for cooperation
we do not even have good ways to aggregate information about people
The first point is obvious. Maybe not if you live in the Bay Area, but I assume that everywhere else the lack of smart and sane people is visible and painful. I have no idea how to build a nicer society with stupid and insane people. Democracy selects for ideas that appeal to many, i.e. to the stupid and insane majority. Dictatorship selects for people who are not nice. My best guess would be to bet on creating a smart and sane subculture, and hope that it will inspire other people. But outside of the Bay Area we probably don’t have enough people to start one, and within the Bay Area most people are on drugs or otherwise crazy.
The second point… I wish there were a short explanation I could point at, but the concept is approximately in the direction of “you are not hiring the top 1%” and “the asshole filter”. It’s a combination of “less desirable people circulate more (because they have to)” and “naive defenses against bad people are more likely to discourage good ones (because they respect your intent to be left alone) and less likely to discourage bad ones (because predators have a motive to get to you)”. As a result, our interactions with random people are more likely to be unpleasant than the statistics of the population would suggest.
The fourth point… I often feel like there should be some mechanism for “people reviewing people”, where we could get information about other people in ways more efficient and reliable than gossip. Otherwise, trust can’t scale. But when I imagine such a review system, there are many obvious problems with no obvious solutions. For starters, people lie. If I say “avoid him, he punched me for no reason”, nothing prevents the other person from writing exactly the same thing about me, even if that never happened. A naive solution might be to expect that people will reciprocate both good and bad reviews, and thus treat all relations as undirected; we cannot know whether X or Y is the bad guy, but we can know that X and Y don’t like each other. Then we look at the rest of the graph: if X has conflicts with a dozen different people, and Y only has a conflict with X, then X is probably the bad guy. Except, nope, maybe Y just asked all his friends to falsely accuse X. Also, maybe Y has a lot of money and threatens to sue everyone who gives him a bad rating in the system.
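To make the naive graph heuristic concrete, here is a minimal sketch in Python; the names and sample data are invented for the example, and this is only the heuristic as described above, not a real system:

```python
from collections import defaultdict

# Hypothetical conflict reports: each pair means "these two people left
# bad reviews about each other". Direction is deliberately discarded: we
# only record that X and Y are in conflict, not who is at fault.
reports = [("X", "Y"), ("X", "A"), ("X", "B"), ("X", "C")]

# Build the undirected conflict graph.
conflicts = defaultdict(set)
for a, b in reports:
    conflicts[a].add(b)
    conflicts[b].add(a)

# Naive heuristic: a person in conflict with many different people is a
# more likely bad actor than a person with a single conflict.
for person in sorted(conflicts, key=lambda p: len(conflicts[p]), reverse=True):
    print(person, len(conflicts[person]))
# X scores 4, everyone else scores 1 -- but as noted above, Y could fake
# exactly this pattern by asking friends to falsely accuse X.
```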
I very much agree that
most people are kinda nice, but the bad actors are somehow over-represented
I think that people who want power tend to get it. Power isn’t a nice thing to want, and it’s kind of not even a sane thing to want. The same sort of thing applies to a platform or a loud voice, which is another form of overrepresentation.
I agree that one path to better cooperation is a better asshole filter and a better way to share information about people. Reputation systems to date have suffered the shortcomings you mention, but that doesn’t mean there aren’t solutions.
If these people frequently reciprocate bad reviews, corroborated by the same people, their votes can be down-weighted to the point of being worthless. Any algorithm can be gamed, but if it’s kept private, it might be made very hard to game. And applying algorithm changes retroactively discourages gaming, since any gaming attempt risks your future voting weight.
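A rough sketch of how such down-weighting could work, assuming negative reviews arrive as (reviewer, target) pairs; the 0.5 penalty factor, all names, and the data itself are invented for the example:

```python
from collections import defaultdict

# Hypothetical log of *negative* reviews as (reviewer, target) pairs.
bad_reviews = [
    ("Y", "X"), ("X", "Y"),                # X and Y pan each other
    ("F1", "X"), ("F2", "X"),              # F1 and F2 corroborate Y against X
    ("Y", "Z"), ("F1", "Z"), ("F2", "Z"),  # the same trio also pans Z
    ("W", "V"),                            # one independent review
]

panned_by = defaultdict(set)  # target -> reviewers who panned them
panned = defaultdict(set)     # reviewer -> targets they panned
for r, t in bad_reviews:
    panned_by[t].add(r)
    panned[r].add(t)

def reviewer_weight(r):
    """Down-weight reviewers whose bad reviews are reciprocated or are
    always corroborated by the same clique. The 0.5 penalty factor is
    an arbitrary choice for the sketch."""
    targets = panned[r]
    # Penalty 1: mutual accusations are treated as symmetric conflict,
    # not as evidence against either party.
    reciprocated = sum(1 for t in targets if r in panned.get(t, set()))
    # Penalty 2: count how often each co-reviewer pans the same targets;
    # a fixed clique showing up everywhere looks like coordination.
    co = defaultdict(int)
    for t in targets:
        for other in panned_by[t] - {r}:
            co[other] += 1
    max_overlap = max(co.values(), default=0) / len(targets)
    w = (1.0 - reciprocated / len(targets)) * (1.0 - 0.5 * max_overlap)
    return max(w, 0.0)

for r in sorted(panned):
    print(r, round(reviewer_weight(r), 2))
# Y collapses to 0.25 (reciprocation plus clique), F1/F2 drop to 0.5,
# X drops to 0 (pure mutual accusation), while W keeps full weight 1.0.
```

Keeping the actual penalty factors private and adjusting them over time is what makes gaming the score risky rather than merely difficult.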
The stakes are quite high here, but I don’t know of any work suggesting that the problem is unsolvable.
It’s bad that getting power is positively correlated with wanting power, rather than with being competent, nice, and sane. But that’s the natural outcome; there would have to be some force acting in the opposite direction to get a different outcome.
People instinctively hate those who have power, but there are a few problems with this instinct. First, the world is complicated: if someone magically offered me a position to rule something, I would be quite aware that I am not competent enough to take it. (If the options were either me or a person who I believe is clearly a bad choice, I would probably accept the deal, but the next step would be desperately trying to find smart and trustworthy advisors, ideally hoping that I would also get a budget to pay them.) Second, the well-known trick for people who want power is to make you believe that they will fight for you against some dangerous outgroup.
it’s kind of not even a sane thing to want.
If the stakes are small, and I am obviously the most competent person in the room, it makes sense to try to take the power to make the decisions. But with big things, such as national politics, there are too many people competing, too big a cost to pay, too many unpredictable influences… you either dedicate your entire life to it, or you don’t have much of a chance.
I think the reasonable way to get power is to start with smaller stakes and move up gradually. That way, however far you get, you did what you were able to do. Many people ignore this; they focus on the big things they see on TV, where they have approximately zero chance of influencing anything, while ignoring less shiny options, such as municipal politics, where there is a greater chance to succeed. But this is a problem for rationalists outside of the Bay Area: if each of us lives in a different city, we do not have much opportunity to cooperate on the municipal level.