TakeOnIt: Database of Expert Opinions
Ben Albahari wrote to tell us about TakeOnIt, which is trying to build a database of expert opinions. This looks very similar to the data that would be required to locate the Correct Contrarian Cluster—though currently they’re building the expert database collaboratively, using quotes, rather than by directly polling the experts on standard topics. Searching for “many worlds” and “zombies” didn’t turn up anything as yet; “God” was more productive.
The site is open to the public, you can help catalog expert opinions, and Ben says they’re happy to export the data for the use of anyone interested in this research area.
Having this kind of database in standardized form is critical for assessing the track records of experts. TakeOnIt is aware of this.
I’d like to explain more about the motivation behind TakeOnIt. The ultimate goal is to be able to predict people’s opinions. It started with the ordinary observation that during a discussion with someone, you can rapidly form a picture of their world view. Specifically, the more opinions a person divulges to you, the more accurately you can predict all their other opinions. It then occurred to me—could a computer predict the opinions a person holds, given only a small subset of them?
While we don’t like to be “put in a box”, the statistical reality is that many of our opinions are a predictable function of other opinions that we have. For example, if someone has the opinion that the Theory of Evolution is false, we can predict that they are far more likely to believe in God, and more likely to be in favor of banning abortion. If someone believes in homeopathy, they are far more likely to believe in a host of other alternative medicines, and even more generally, less likely to have opinions of a scientific nature.
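The correlation described above can be sketched as a toy computation: estimate the conditional frequency of one belief given another from a matrix of stances. All the names and numbers below are invented for illustration; a real opinion database would be far larger and much sparser.

```python
# Toy stance matrix: +1 = agrees, -1 = disagrees, 0 = no recorded opinion.
# All data here is hypothetical, purely to illustrate the statistical point.

def conditional_agreement(people, given_q, given_stance, target_q):
    """Estimate P(agrees with target_q | holds given_stance on given_q)
    as a simple frequency over the toy data."""
    matching = [p for p in people if p[given_q] == given_stance]
    if not matching:
        return None
    agree = sum(1 for p in matching if p[target_q] == 1)
    return agree / len(matching)

people = [
    {"evolution": -1, "god": 1,  "homeopathy": 0},
    {"evolution": -1, "god": 1,  "homeopathy": 1},
    {"evolution": 1,  "god": -1, "homeopathy": -1},
    {"evolution": 1,  "god": -1, "homeopathy": -1},
    {"evolution": -1, "god": -1, "homeopathy": 0},
]

# Among the toy people who reject evolution, what fraction believe in God?
print(conditional_agreement(people, "evolution", -1, "god"))  # 2 of 3
```

With real data the same frequency estimate would be computed over thousands of quoted stances rather than five hand-written rows.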
With this in mind, let’s turn to a common problem: we want to form an opinion on a topic outside our domain expertise. Consider how we form an opinion on Global Warming. We might attempt to familiarize ourselves with the facts and arguments, but that is terribly time-inefficient—akin to becoming a doctor to fix one’s own medical conditions. So instead we outsource our opinion: we believe what the experts tell us. But which ones? There are respectable experts on both sides of the debate. Now, there are many more climatologists who believe Global Warming is caused by humans, but why trust the consensus? Imagine that your opinions on a wide range of issues resonated very well with the climatologists holding the minority opinion, and conflicted badly with those holding the majority opinion. Who would you believe? Of course you’d side with the minority. We trust the opinions of others whose opinions overlap with our own. To the extent that we trust our own rationality, this is the rational thing to do.
With respect to Global Warming, we will believe in the experts who have opinions that most overlap with our opinions. Reciprocally, we would expect such experts to believe us, in a domain that they knew little about but where we were the experts.
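That overlap-weighted outsourcing can be sketched in a few lines, again with stances coded +1/−1 and entirely hypothetical data (this is my illustration of the idea, not any particular site’s algorithm):

```python
# Sketch of "trust experts in proportion to how much their other opinions
# overlap with yours". Stances: +1 = agrees, -1 = disagrees; absent = unknown.

def overlap(a, b):
    """Agreements minus disagreements on questions both have answered."""
    score = 0
    for q in a:
        if q in b:
            score += 1 if a[q] == b[q] else -1
    return score

def weighted_verdict(me, experts, target_q):
    """Aggregate expert stances on target_q, weighted by overlap with me."""
    total = 0.0
    for e in experts:
        w = max(overlap(me, e), 0)  # ignore experts we mostly disagree with
        total += w * e.get(target_q, 0)
    return total  # the sign gives the outsourced verdict

me = {"evolution": 1, "homeopathy": -1, "astrology": -1}
experts = [
    {"evolution": 1,  "homeopathy": -1, "agw": 1},   # aligns with me
    {"evolution": -1, "astrology": 1,   "agw": -1},  # clashes with me
]
print(weighted_verdict(me, experts, "agw"))  # positive: side with expert 1
```

Clamping negative overlaps to zero is a simplifying choice; one could instead treat strong disagreement as weak evidence for the opposite stance.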
Here’s a specific example in the Global Warming debate. Roy Spencer (see http://www.takeonit.com/expert/238.aspx on TakeOnIt), a leading skeptical climatologist:
1) does not believe humans cause global warming
2) does not believe in evolution
3) does believe in the cosmological argument
The fact that I disagree with him on ‘2’ and ‘3’, where I have a reasonable understanding of the issues, makes me less likely to trust his opinion on ‘1’, where I have a poorer understanding of the issue. This, however, is just one tiny example. The purpose of creating a database of opinions is ultimately to elevate this process from an anecdotal one to a statistical one. I want a system that can predict what I should believe, given what I already believe, before I even believe it!
I contacted Eliezer after reading his excellent post on the Correct Contrarian Cluster and realizing we were looking at a very similar problem.
A similar site, which you posted about last year, is Wrong Tomorrow, which tracks pundit predictions. There’s also this thing called PunditWatch, though it only tracks a small number of pundits.
Politifact’s Obameter is another sort of prediction tracker, albeit with a very specific scope.
TakeOnIt now predicts expert opinions, using collaborative filtering. It also identifies whether an expert’s known opinions are conforming or non-conforming relative to the group of experts they typically align with. It’s a first attempt. It should improve over time with (1) algorithm refinement and (2) more data.
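One way to implement the conformity check just described: call an opinion non-conforming when it disagrees with the majority stance of the peers the expert otherwise aligns with. The similarity measure and data below are my own illustrative assumptions, not TakeOnIt’s actual algorithm.

```python
# Stances: +1 = agrees, -1 = disagrees; absent = no recorded opinion.
# Hypothetical data, sketching one possible conformity test.

def similarity(a, b):
    """Mean agreement over shared questions, in [-1, 1]."""
    shared = [q for q in a if q in b]
    if not shared:
        return 0.0
    return sum(1 if a[q] == b[q] else -1 for q in shared) / len(shared)

def conforming(expert, peers, question, k=2):
    """Does `expert`'s stance on `question` match the majority of the k
    most-similar peers who have answered it? (Ties count as -1 here.)"""
    answered = [p for p in peers if question in p]
    answered.sort(key=lambda p: similarity(expert, p), reverse=True)
    group = answered[:k]
    majority = 1 if sum(p[question] for p in group) > 0 else -1
    return expert[question] == majority

expert = {"evolution": -1, "god": 1, "agw": -1}
peers = [
    {"evolution": -1, "god": 1,  "agw": 1},
    {"evolution": -1, "god": 1,  "agw": 1},
    {"evolution": 1,  "god": -1, "agw": 1},
]
print(conforming(expert, peers, "agw"))  # False: a non-conforming opinion
```

Here the expert’s aligned peers both accept AGW, so his rejection of it is flagged as non-conforming—exactly the kind of signal a contrarian-cluster analysis would want.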
Examples:
Robin Hanson’s Opinions
Eliezer Yudkowsky’s Opinions
Barack Obama’s Opinions
Note that ambiguous questions like “Does God Exist?” dilute the quality of the predictions (in this case, many experts agree that a “metaphorical” God exists). It may therefore be worth splitting such questions into several less ambiguous ones.
This looks to be a wonderful source for data on both sides of issues that I often encounter (Consciousness, Free Will, “Proving” God, and the Zombie… Love those Zombies).
I was a little surprised to see nothing about the opinions of Searle (my favorite whipping post). I will need to read up on the site a bit more to see how to go about adding his expert opinion.
Thanks Matthew. Per your suggestion I just added Searle’s opinion on Zombies. Let me know if you have any difficulties using the website (feel free to email me at ben@takeonit.com ).
I am hoping, should I get into Berkeley, to take a class from Searle and claim to be a Zombie who has no real consciousness.
Another thing that some friends of mine came up with was the Simulated Prank Call, where a computer would have a program that simulated a prank phone call, yet it would be connected to a modem and voice software so that the simulation would actually call Searle’s office at Berkeley to deliver the Simulated Prank.
Thanks for adding his reply. There are some other beliefs of Searle’s that should probably be placed on the site, such as his annoying Chinese Room, which is what started most of the Zombie stuff to begin with.
Is there anyone here on Less Wrong who knows about Collaborative Filtering? I’m using the TakeOnIt expert opinion database to make various kinds of predictions. One is to predict an expert’s opinions given a subset of their opinions. Another will be to predict correct vs. mistaken contrarianism.
I’ve created a couple of prototypes using the simpler CF algorithms, but the accuracy isn’t great. In one case it predicts Eliezer should believe in acupuncture! Having denser data will partly alleviate the problem, but I also need a better algorithm. However, the better algorithms, such as those using Bayesian networks, look non-trivial. Some guidance would be appreciated.
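For concreteness, here is roughly what the simpler end of that spectrum looks like: a user-based collaborative filter that predicts a missing stance as a similarity-weighted vote over other experts. Everything below (names, data, the cosine choice) is an illustrative assumption, not the prototype’s actual code—and it exhibits the sparsity problem mentioned above, since similarities computed over only one or two shared questions are very noisy.

```python
import math

# User-based CF sketch. Stances: +1 = agrees, -1 = disagrees; absent = unknown.

def cosine(a, b):
    """Cosine similarity over the questions both experts have answered."""
    shared = [q for q in a if q in b]
    if not shared:
        return 0.0
    dot = sum(a[q] * b[q] for q in shared)
    na = math.sqrt(sum(a[q] ** 2 for q in shared))
    nb = math.sqrt(sum(b[q] ** 2 for q in shared))
    return dot / (na * nb)

def predict(target, others, question):
    """Similarity-weighted vote on `question` from experts who answered it."""
    num = den = 0.0
    for o in others:
        if question not in o:
            continue
        s = cosine(target, o)
        num += s * o[question]
        den += abs(s)
    return num / den if den else 0.0

eliezer = {"evolution": 1, "mwi": 1, "god": -1}  # hypothetical stances
others = [
    {"evolution": 1,  "mwi": 1, "god": -1, "acupuncture": -1},
    {"evolution": -1,           "god": 1,  "acupuncture": 1},
]
print(predict(eliezer, others, "acupuncture"))  # negative: predicts disbelief
```

Common refinements for sparse data are mean-centering each expert’s stances, shrinking similarities toward zero when few questions are shared, and restricting the vote to the top-k neighbors—all cheaper first steps than a full Bayesian-network model.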