I’m unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs, and to point one to people who will reinforce those beliefs, than anything else. Second, the sources used as experts are bad or outright misleading. For example, consider global warming. Wikipedia is listed as an expert source, but Wikipedia has no expertise of its own; it is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used on both the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erickson came to the New World is a liberal conspiracy, I don’t think any rational individual will consider it a reliable source (and the vast majority of American right-wingers I’ve ever talked to about this cringe when Conservapedia gets mentioned, so this isn’t even my own politics coming into play). On cryonics we have Benjamin Franklin listed as pro. Now, that’s roughly accurate, but it is also clear that he lived centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted, if anything, to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.
TakeOnIt records the opinions of BOTH experts and influencers—not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates—not blurs—the distinction between those who have relevant expertise and those who don’t (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I’m open to suggestions if you think it could be made clearer than it is.
I don’t think they are doing as good a job as you think at separating experts from non-experts. For example, they describe Conservapedia as an “encyclopedia” with no other modifier. Similarly, they describe Deepak Chopra as an “expert on alternative medicine.” If they want to make a clear distinction I’d suggest having different color schemes (at minimum). Overall, to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.
If Deepak Chopra is blatantly flagged as a “fake expert”, it will alienate people who are initially impressed with his arguments, and they will not participate, and they will not see all the opposing opinions. Color schemes indicating how much the site administrators believe someone to be a real expert would be mind-killing.
Upvoting for making a very valid point. I’m not completely sure, though, that that’s necessarily the perfect solution. Wikipedia, for example, has a set of very careful rules for handling minority viewpoints and for what constitutes a reliable source or relevant expert. But it may be that that sort of thing works better in an encyclopedia format (also, even Wikipedia will quote Deepak on alt-med things, even if we spend a lot of time making clear what the science says).
No no no! It’s vital that the opinions of influential people—even if they’re completely wrong—are included on TakeOnIt. John Stuart Mill makes my point perfectly:

...the peculiar evil of silencing the expression of an opinion is… If an opinion is right, [people] are deprived of the opportunity of exchanging error for truth: if wrong, they lose what is almost as great a benefit, the clearer perception and livelier impression of the truth, produced by its collision with error.
P.S. I updated the tag line for Conservapedia from “Encyclopedia” to “Christian Encyclopedia”. Thanks for pointing that out.
I’ve been playing with the site and from my perspective there are two problems. One is that there’s a lot of chaff. The other is that there doesn’t seem to be enough activity yet.
If there were a lot of activity, I wouldn’t necessarily mind that there are “experts” I don’t respect; it would still be extremely useful as a microcosm of the world’s beliefs. I do want to know which people the public considers to be “experts.” That’s a useful service in itself.
Censorship? Not in a political sense, of course. But there are privately owned institutions which have an interest in permitting a diversity of views. Universities, for instance. This is a site whose usefulness depends on it having no governing ideology. Blocking “unreliable” sources isn’t really censorship, but it makes the site less good at what it purports to do.
Thanks for the feedback.
Do you mean chaff as in “stuff that I personally don’t care about” or chaff as in “stuff that anyone would agree is bad”?
Yes, the site is still in the bootstrapping phase. Having said that, the site needs to have a better way of displaying recent activity.
Stuff that I think is bad, and that I would say “reasonable” people agree is bad—celebrities as experts, Deepak Chopra, mentalists, and so on. But I don’t necessarily think that’s a problem for the site. If people really get their information from those sources, then I want to know that.
I’m almost inclined to say that calling Conservapedia a Christian encyclopedia insults Christianity more than Conservapedia deserves (theism is very likely incorrect, but Conservapedia’s attitude towards the universe is much more separated from reality than that of most Christians). Also, I don’t think that what John Stuart Mill is talking about is the same thing. First, note that I’m not saying one should censor Chopra, merely that he’s not worth including for this sort of thing. That’s not “silencing” by any reasonable definition. And there are other experts there whom I disagree with but wouldn’t put in that category. Thus, for example, both the cryonics and Singularity questions include people whom I disagree with and whom I don’t think are at all helpful. Or again consider Benjamin Franklin, whose opinion on cryonics I’m sympathetic with but who just didn’t have any knowledge that would justify considering his opinion worthy of weight.
It should be noted that TakeOnIt is set up to allow the general public to suggest expert quotes, and with a short track record as a non-spammer, people get promoted to moderator status and can directly add a quote. So some members of TakeOnIt are impressed with Chopra, and it would be counterproductive censorship to say that they are not allowed to add his quotes. What we get in exchange for allowing this is that the general public helps to build the database of expert opinions, and may even include real experts whom we would not have known to look at.
Franklin’s quote is more about whether cryonics would be good if it were feasible than about whether it is feasible. Ben, do you think it should be moved to this question?
Good call.
I see the argument for it being counterproductive, and I’m tentatively convinced by it. But it isn’t censorship by most definitions of the term. Saying “you can’t say X” is censorship; saying “you can’t say X on my website” is not. (Again, since I am convinced by the counterproductivity argument, we seem at this point to be in more or less agreement, if one is going to try to run TakeOnIt in a manner close to its intended general purpose.)
Moving Franklin might make sense. Unfortunately, many of the people discussing cryonics are also talking about its general desirability; the two questions seem to be frequently discussed together. Incidentally, note that there’s a high correlation between having a moral or philosophical objection to cryonics and being likely to think it won’t work. This potentially suggests that there’s some belief overkill going on, on one or both sides of this argument.
There is value in recording the opinions of anyone perceived as an expert by a segment of the general population: it builds a track record for each supposed expert, so that statistical analysis can reveal that the opinions of some so-called experts are just noise, and give a result influenced mainly by the real experts.
See The Correct Contrarian Cluster.
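The track-record idea above could be sketched roughly like this. This is a toy Python sketch, not TakeOnIt’s actual mechanism; the expert names, questions, and scoring rule are all illustrative assumptions. It scores each claimed expert by accuracy on questions that have since been resolved, so the opinions of an “expert” that are pure noise end up with a weight near chance:

```python
# Toy sketch: score claimed experts by accuracy on resolved yes/no questions.
# All names and data are illustrative, not TakeOnIt's actual schema.

from collections import defaultdict

def expert_scores(opinions, outcomes):
    """opinions: list of (expert, question, predicted_answer) tuples.
    outcomes: dict mapping each resolved question to its true answer.
    Returns each expert's accuracy over the resolved questions only."""
    hits = defaultdict(int)
    total = defaultdict(int)
    for expert, question, predicted in opinions:
        if question in outcomes:              # only score resolved questions
            total[expert] += 1
            hits[expert] += (predicted == outcomes[question])
    return {e: hits[e] / total[e] for e in total}

opinions = [
    ("climatologist", "warming-trend-2000s", True),
    ("climatologist", "record-year-2010", True),
    ("tv-psychic", "warming-trend-2000s", False),
    ("tv-psychic", "record-year-2010", True),
]
outcomes = {"warming-trend-2000s": True, "record-year-2010": True}

print(expert_scores(opinions, outcomes))
# → {'climatologist': 1.0, 'tv-psychic': 0.5}
```

A real version would need many more resolved questions per expert before the accuracy estimates mean anything, which is exactly the thin-track-record problem raised below.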
That might work if we had substantial track records for people. Unfortunately, for a lot of issues that could potentially matter (say, the Singularity and cryonics) we won’t have a good idea who was correct for some time. It seems like a better idea to become an expert on a few issues and then see how much a given expert agrees with you in the area of your expertise. If they agree with you, you should be more willing to give credence to them in their claimed areas of expertise.
Well, I would like to see more short-term predictions on TakeOnIt, where after the event in question, comments are closed and what really happened is recorded. From this data, we could extrapolate whom to believe about the long-term predictions.
That might work in some limited fields (economics and technological development being obvious ones). Unfortunately, many experts don’t make short-term predictions. For this to work, one would need to get experts to agree to make those predictions, and they have a direct incentive not to, since predictions can be used against them later (well, up to a point: psychics like Sylvia Browne make repeated wrong predictions and their followers don’t seem to mind). I give Ray Kurzweil a lot of credit for having the courage to make many relatively short-term predictions (many of which have so far turned out to be wrong, but that’s a separate issue).
Yes, in some cases, there is no (after the fact) non-controversial set of issues to use to determine how effective an expert is. Which means that I can’t convince the general public of how much they should trust the expert, but I can still figure out how much I should trust em by looking at their positions that I can evaluate.
There is also the possibility of saying something about such an expert based on correlations with experts whose predictions can be non-controversially evaluated.
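One rough way to picture that correlation approach is below. This is a toy sketch under my own assumptions (the agreement-weighting scheme and all the data are invented, not a TakeOnIt feature): for an expert X who cannot be scored directly, look at how often X agrees with experts who do have measurable track records, and lend X a trust estimate weighted by those agreements:

```python
# Toy sketch: infer trust in an unscoreable expert X from X's agreement
# with experts whose predictions have been evaluated. Illustrative only.

def inferred_trust(x_stances, scored_experts):
    """x_stances: dict mapping question -> X's stance (True/False).
    scored_experts: list of (accuracy, stances_dict) for evaluable experts.
    Returns an agreement-weighted average of the others' accuracies,
    or None if X shares no questions with any scored expert."""
    weight_sum = 0.0
    trust_sum = 0.0
    for accuracy, stances in scored_experts:
        shared = [q for q in x_stances if q in stances]
        if not shared:
            continue
        agreement = sum(x_stances[q] == stances[q] for q in shared) / len(shared)
        weight_sum += agreement
        trust_sum += agreement * accuracy
    return trust_sum / weight_sum if weight_sum else None

x = {"q1": True, "q2": False, "q3": True}
scored = [
    (0.9, {"q1": True, "q2": False, "q3": True}),   # full agreement, accuracy 0.9
    (0.4, {"q1": False, "q2": True, "q3": True}),   # agrees on 1 of 3, accuracy 0.4
]
print(inferred_trust(x, scored))  # ≈ 0.775, dominated by the accurate expert
```

This inherits the "correct contrarian cluster" caveat: agreement with accurate experts is weak evidence at best, since shared stances can come from shared ideology rather than shared reliability.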