Why don’t you discuss the status quo solution? LessWrong is a system for allowing rationalists to think together.
It’s not highly formalized but that makes it a lot more flexible.
If you say you want groups of rationalists to solve problems together, which problems are you thinking about? What sort of problems do you want to solve?
LW is a venue for rationalists to think together, but not a technique for rationalists to think together. Despite the term “disputation arenas”, what almkglor is talking about is the latter, not the former.
I suppose you could say that “start a discussion in LW and see what happens” is a “disputation arena” in almkglor’s sense. So, therefore, is “just get a bunch of people together in a room and let them talk about it”. Presumably the techniques almkglor describes were designed because just putting people together in a room has been found not to work very well. Do you have grounds for thinking that putting people together in an LW thread works better? Or that rationalists are immune to groupthink and failure to reach consensus?
Or that rationalists are immune to groupthink and failure to reach consensus?
I think that those two criteria are insufficient to judge the quality of a “disputation arena” for rationalists. The core problem is that encouraging participation isn’t one of the criteria.
If you want to get things done in the real world, then it’s vitally important to encourage participation. A disputation arena without participants is worthless.
I also doubt that reaching consensus is always a good thing. The singularity is one of the topics that almkglor thinks about. If it took a year to get all LessWrong participants to a consensus belief about the singularity, I think that would be bad.
In year two you will have massive groupthink problems when you continue to discuss the singularity, because all participants know the consensus belief of year one.
I would prefer a system with more diversity of opinions.
As far as avoiding groupthink goes, there are other strategies. Encouraging more members of the community to play devil’s advocate would be one way.
The core problem is that encouraging participation isn’t one of the criteria.
I think you may have been led astray by the terminology into thinking that “disputation arena” means, y’know, an arena for disputation, when in fact it seems to mean a technique for discussing things. Techniques like the Delphi method are intended for groups that already exist and need to do some thinking.
I also doubt that reaching consensus is always a good thing.
Is anyone claiming it is? My understanding is that these “disputation arenas” are methods a group can use to arrive at consensus when they need to do so. (Also #1: I’d think most of them are adaptable to the case where you don’t particularly need a consensus as such. Also #2: a consensus can be a complicated one with probabilities and things in, and it seems to me that agreement on such a consensus would avoid many of the perils of the usual sort of groupthink.)
I prefer “disputation arena” because “group thinking” is too close to “groupthinking”.
Is there a better term for “techniques for discussing things so that lots of thinking people can give their input and get a single coherent set of probabilities for what are the best possible choices for action” other than “disputation arena” or “group thinking technique”?
I do want to be precise, and “disputation arena” sounded kewl, but whatever.
I don’t know of any other term with that meaning. Making one up wouldn’t really be any worse than using “disputation arena”, I think, because to an excellent first approximation no one knows what “disputation arena” means anyway.
Techniques like the Delphi method are intended for groups that already exist and need to do some thinking.
I don’t think that’s the goal laid out in the first paragraph of the post. It ends with:
This makes it not only desirable to find ways to effectively get groups of rationalists to think together, but also increasingly necessary.
Getting groups of rationalists to think together is a goal where it’s important to design the system in a way that makes participation easy and motivates people to participate.
Okay, so that’s a sub-goal that I didn’t think about. I will think about this a little more.
Still, assuming that group exists and needs to do some thinking together, I think techniques like Delphi are fine.
Anyway, I assumed that the LW community was more cohesive and more willing to cooperate in group thinking exercises (this is what I was thinking when I wrote “This makes it not only desirable to find ways to effectively get groups of rationalists to think together, but also increasingly necessary.”), but apparently it’s not as cohesive as I thought.
Successful online communities have a low bar to entry. As a result they aren’t as cohesive as a hierarchical institution, where you can simply order a group to make some decision via Delphi.
LessWrong is a network. It’s not a hierarchical institution and it isn’t market-driven.
If you want some high-level understanding of the network paradigm, I recommend “In Search of How Societies Work” by David Ronfeld.
LessWrong is one way of implementing groups of rationalists thinking together. One might say that it provides a centripetal phase: the discussion forums. But what centrifugal phase exists that prevents groupthink? Yes, we have “hold off on proposing solutions”—but remember that no current rationalist is perfect, and LW may grow soon (indeed, spreading rationality may require growing LW).
Also remember that people—including LessWrong members—tend to favor the status quo and, given a chance, will defend it to the death.
At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.
It’s not highly formalized but that makes it a lot more flexible.
The Turing machine is highly formalized and is the most flexible possible computational machine. I get “false dichotomy” signals from this statement.
If you say you want groups of rationalists to solve problems together, which problems are you thinking about? What sort of problems do you want to solve?
insane governments, insane societies, insane individuals, and the singularity, in that rough order of priority.
The Turing machine is highly formalized and is the most flexible possible computational machine. I get “false dichotomy” signals from this statement.
I don’t think you understand what I mean by “highly formalized” in this context. LessWrong also has a bunch of rules. However, those rules are made in a way that doesn’t constrain how one can use LessWrong as much as the rules of Delphi constrain its participants.
At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.
No, if you propose an alternative, it makes sense to explain how it would improve on the status quo. Ignoring the status quo, a system that actually works in practice, is a bad idea.
At the moment there is no working Delphi system that allows rationalists to discuss solutions for handling insane governments.
The cases where Delphi was used successfully are cases where it was implemented top-down. Whether the same approach works in an online community is up for discussion. I don’t know of a single case where such a system got enough users to work.
insane governments, insane societies, insane individuals, and the singularity, in that rough order of priority.
InTrade-style prediction markets have the issue that predictions need to be able to be judged as true or false within a reasonable timeframe.
If you want to discuss how to tackle “insane governments”, restricting yourself to claims that can be judged as true or false in short time frames probably removes most of the interesting questions from the discussion.
If you think otherwise, please illustrate how you would tackle the issue you brought forward in your post with Prediction Markets. How to tackle it with Delphi would also be interesting.
I’m also not clear about why we need to find consensus on “insane governments, insane societies, insane individuals, and the singularity”.
I don’t think you understand what I mean by “highly formalized” in this context. LessWrong also has a bunch of rules. However, those rules are made in a way that doesn’t constrain how one can use LessWrong as much as the rules of Delphi constrain its participants.
Okay, what exactly do you mean by “highly formalized”?
Constraints on behavior are not necessarily bad, in much the same way that there are more things in heaven and earth than are dreamt of in our philosophy: constraining things to a subset that can be shown to work can help. So I don’t really see “current LW has more freedom!!” as a significant advantage—because it might have more freedom to err. Of course, the probability of that being true is low—but can we at least try to show that?
After all, LW code is derived from Reddit. Of course, the online system is just part of the overarching system, and the system as a whole (including current community members) is different (there are more stringent rules for acceptance into the community here than on Reddit), but it might do well to consider that things may be made better.
At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.
No, if you propose an alternative, it makes sense to explain how it would improve on the status quo. Ignoring the status quo, a system that actually works in practice, is a bad idea.
I said “de-emphasize”, not ignore. What I mean by “de-emphasize” is, acknowledge its existence, but treat it as an idea you have already thought about, i.e. keep it on hand and don’t forget about it, but don’t keep thinking about it at the expense of other, external ideas. In any case, I thought that it would be unnecessary to have to discuss the local status quo, since I would assume that members already know it.
Should I discuss the current status quo? I am not a regular member, despite reading OB before and LW for years, so I don’t feel qualified to get into its details. I mostly read the sequences and hardly look at discussion, or even comments on the articles. So my knowledge of LW’s informal rules is minimal, to say the least. Can you describe the status quo for me?
At the moment there is no working Delphi system that allows rationalists to discuss solutions for handling insane governments. The cases where Delphi was used successfully are cases where it was implemented top-down. Whether the same approach works in an online community is up for discussion. I don’t know of a single case where such a system got enough users to work.
So should we, at this point, completely discard Delphi methods? How about NGT (the nominal group technique)?
I suspect that it’s possible to modify LW’s polls to add some kind of Real-Time Delphi Method, as I mentioned in the article: (1) allow members to change their chosen options, (2) require members to give a short justification for their chosen option, and (3) show each member randomized samples of justifications from other members. We can even have a flag that specifies normal forum polls or Delphi-style polls. But if the cost of making this modification is higher than the expected probability of that kind of Delphi being successful times the expected utility of that kind of Delphi method in general for the rest of LW’s lifetime, then fine—let’s not do it.
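To make that concrete, here is a rough sketch in Python of what such a Delphi-style poll could look like. All the names (`DelphiPoll`, `vote`, `sample_justifications`) are made up for illustration; this is not a claim about how LW’s actual poll code is structured.

```python
import random

class DelphiPoll:
    """Hypothetical Real-Time Delphi poll: members may revise their vote,
    must give a short justification, and see a random sample of other
    members' justifications."""

    def __init__(self, question, options):
        self.question = question
        self.options = set(options)
        self.votes = {}  # member -> (option, justification)

    def vote(self, member, option, justification):
        if option not in self.options:
            raise ValueError(f"unknown option: {option}")
        if not justification.strip():
            raise ValueError("a short justification is required")  # point (2)
        self.votes[member] = (option, justification)  # re-voting overwrites, point (1)

    def sample_justifications(self, member, k=3):
        # Random sample of other members' justifications, point (3).
        others = [(opt, just) for m, (opt, just) in self.votes.items() if m != member]
        return random.sample(others, min(k, len(others)))

    def tally(self):
        counts = {opt: 0 for opt in self.options}
        for opt, _ in self.votes.values():
            counts[opt] += 1
        return counts


poll = DelphiPoll("Adopt Delphi-style polls on LW?", ["yes", "no", "undecided"])
poll.vote("alice", "yes", "Cheap to try; the poll feature already exists.")
poll.vote("bob", "no", "Maintaining a modified poll system seems costly.")
print(poll.sample_justifications("alice"))
print(poll.tally())
```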
If you think otherwise, please illustrate how you would tackle the issue you brought forward in your post with Prediction Markets. How to tackle it with Delphi would also be interesting.
I don’t know how to tackle it with Prediction Markets other than by futarchy: first vote on what measurements are to be used, then run a prediction market about whether particular policy decisions will improve or reduce those measurements. Insane governments are more sane if they have less corruption, better bureaucratic efficiency blah blah—we may need to vote on that. Then we need to propose actual policy decisions and predict if they will lead to less corruption etc. or not. Unfortunately, I don’t understand enough of futarchy yet to make a proper judgment about it—it’s currently a mostly black box to me. I’m disturbed that futarchy_discuss appears to be defunct—I’m not sure if it’s because prediction markets have turned out to fail badly, or what.
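As a toy illustration of that flow (the policy names and numbers below are invented, and this glosses over everything hard about running a real conditional prediction market), the decision rule would look roughly like this:

```python
# Toy futarchy-style comparison. The community has voted that the measure of
# "sanity" is a corruption index (lower = better); a prediction market then
# estimates that index conditional on adopting each proposed policy.
# All numbers are made up for illustration.
market_estimates = {
    "status quo":              0.62,
    "publish all procurement": 0.48,
    "raise bureaucrat pay":    0.55,
}

best_policy = min(market_estimates, key=market_estimates.get)
print(f"Adopt: {best_policy}")  # -> "publish all procurement" in this toy example
```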
Assuming those same measures can be agreed upon—less corruption, better bureaucratic efficiency—then I suppose a Delphi Method can be made with “what policies should reduce corruption blah blah? How can we impose those policies from below? What feasible actions can we use to get those policies accepted?” as the questions.
(if you think that my definition of “insane government” isn’t very good, please understand that I live in a shitty little third-world country where the most troubling problems of the government are corruption and inefficiency, not whether or not the government should raise taxes)
I’m also not clear about why we need to find consensus on “insane governments, insane societies, insane individuals, and the singularity”.
You did write a long post on different systems for discussion, and you did ignore the status quo in that post.
Within your list you didn’t discuss systems that have been shown to work in the real world to solve the kind of issues that you want to solve.
If you don’t like LessWrong as an example, take an online community like Wikipedia as an example. If you don’t know the specifics of any system that actually works in the real world, you are in a poor position to propose a new system.
Being a heretic is hard work.
(if you think that my definition of “insane government” isn’t very good, please understand that I live in a shitty little third-world country where the most troubling problems of the government are corruption and inefficiency, not whether or not the government should raise taxes)
I would say that in the US, corruption and government inefficiency are also central problems.
If, however, you want to solve those kinds of problems in your country, then you have to choose. One way would be to get the IWF to promote some Good Government program in your country in a top-down way. The other way involves finding supporters in your own country.
For both strategies I doubt that the LessWrong public is the right audience. Join/found some Liquid Feedback based political party in your country.
You might even try to adapt Liquid Feedback to be more Delphi-like.
Because I think lack of consensus is one reason why our kind can’t cooperate.
Can we at least try to pull together on this one?
One of the most effective calls for support to highly intelligent nerds was probably Julian Assange’s, which among other things involved him telling the audience that they wouldn’t get Christmas presents if they didn’t cooperate. Julian Assange didn’t try to organise some vote to get consensus.
You did write a long post on different systems for discussion, and you did ignore the status quo in that post.
I thought it would be unnecessary, as I thought the people here would already know, and it would be repetitive to reiterate what is already known here. I’ll try to see if I can come up with some description of the local status quo, then, and edit the article to include it. I’m a little busy; Christmas is important in this country.
Within your list you didn’t discuss systems that have been shown to work in the real world to solve the kind of issues that you want to solve.
Huh? These are techniques that have been studied, with papers backing them (at least according to some very basic searches through Google). I have no idea how good those papers are, but maybe you do. Can you show some study specifically showing that Delphi works worse than typical internet forums?
take an online community like Wikipedia as an example.
Again, since LW also has a Wiki, I thought it would be superfluous to add it to the article too. I’ll find time to update it then.
If, however, you want to solve those kinds of problems in your country, then you have to choose. One way would be to get the IWF to promote some Good Government program in your country in a top-down way. The other way involves finding supporters in your own country.
For both strategies I doubt that the LessWrong public is the right audience. Join/found some Liquid Feedback based political party in your country.
Thank you for this information.
One of the most effective calls for support to highly intelligent nerds was probably Julian Assange’s, which among other things involved him telling the audience that they wouldn’t get Christmas presents if they didn’t cooperate. Julian Assange didn’t try to organise some vote to get consensus.
Okay.