To be frank, I didn’t expect you to, based on our previous conversations on forecasting. You are too skeptical of it, and haven’t read some of the recent research on how effective it can be in a variety of situations.
is not true because they lack people smart enough to correctly process the data, interpret it, and arrive at the correct conclusions.
Exactly, this is the problem I’m solving.
So, what’s wrong with the stock price as the metric?
As I said, the signaling problem. Using previous performance as a metric means that there are lots of good forecasters out there who simply can’t get discovered—right now, it’s signaling all the way down (top companies hire from top colleges, which take from top high schools). Basically, I’m betting that there are lots of organizations and people out there who are good forecasters, but don’t have the right signals to prove it.
Besides, evaluating forecasting capability is… difficult, both theoretically (out of many possible futures, only one gets realized) and practically (there is no incentive for people to share with you the hard predictions they make).
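To make the evaluation problem concrete, here’s a minimal sketch of how a proper scoring rule like the Brier score grades probability forecasts against the single outcome that actually happened (all numbers invented for illustration):

```python
# Minimal sketch: grading probability forecasts with the binary Brier
# score, the scoring rule used in Tetlock-style tournaments.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability and a 0/1 outcome.
    Lower is better; always saying 50% scores 0.25."""
    return (forecast - outcome) ** 2

# (probability assigned to the event, whether it actually happened)
track_record = [(0.9, 1), (0.7, 1), (0.3, 0), (0.6, 0), (0.8, 1)]

scores = [brier_score(p, o) for p, o in track_record]
print(f"mean Brier score: {sum(scores) / len(scores):.3f}")  # 0.118
```

Note that a single resolved question tells you almost nothing; separating skill from luck takes a long track record of scores like these.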
You should read the linked article on prediction polls—they weren’t even paying people in Tetlock’s study (only gift cards, not at all commensurate with the work people were putting in), and they solved the problem to the point where they could beat prediction markets.
You are too skeptical of it, and haven’t read some of the recent research on how effective it can be in a variety of situations.
From my internal view I’m sceptical of it because I’m familiar with it :-/
it’s signaling all the way down (top companies hire from top colleges, which take from top high schools)
Um, hiring from top colleges is not quite all signaling. There is quite a gap between, say, an average Stanford undergrad and an average undergrad of some small backwater college.
You should read the linked article on prediction polls—they weren’t even paying people in Tetlock’s study
Um, I was one of Tetlock’s forecasters for a year. I wasn’t terribly impressed, though. I think it’s a bit premature to declare that they “solved the problem”.
With people who claim to have awesome forecasting power or techniques, I tend to point at financial markets and ask why they aren’t filthy rich.
From my internal view I’m sceptical of it because I’m familiar with it :-/
You’re right, I was assuming things about you I shouldn’t have.
Um, hiring from top colleges is not quite all signaling. There is quite a gap between, say, an average Stanford undergrad and an average undergrad of some small backwater college.
Fair point. But the issue is that they’re going on something like “the average undergrad” and discounting all the outliers. That’s especially problematic in this case because forecasting is an orthogonal skillset to what it takes to get into a top college.
With people who claim to have awesome forecasting power or techniques, I tend to point at financial markets and ask why they aren’t filthy rich.
Markets are one of the best forecasting tools we have, so beating them is hard. But using the market to get these types of questions answered is hard (liquidity issues in prediction markets), so another technique is needed.
Um, I was one of Tetlock’s forecasters for a year. I wasn’t terribly impressed, though. I think it’s a bit premature to declare that they “solved the problem”.
What part specifically of that paper do you think was unimpressive?
Not necessarily. Recall that a slight shift in the mean of a normal distribution (e.g. IQ scores) results in strong domination in the tails.
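Back-of-the-envelope, with Python’s statistics module (the half-sigma shift is an arbitrary choice, not a claim about any particular pair of schools):

```python
# How overrepresented does a slightly-shifted normal population become
# far out in the right tail? Illustration only; 0.5 sigma is arbitrary.
from statistics import NormalDist

base = NormalDist(mu=0.0, sigma=1.0)
shifted = NormalDist(mu=0.5, sigma=1.0)   # mean half a sigma higher

for cutoff in (1.0, 2.0, 3.0):            # cutoffs in base-population sigmas
    p_base = 1.0 - base.cdf(cutoff)
    p_shifted = 1.0 - shifted.cdf(cutoff)
    print(f"above {cutoff:.0f} sigma: {p_shifted / p_base:.1f}x overrepresented")
```

The ratio is about 1.9x at one sigma, 2.9x at two, and 4.6x at three: the further out the tail, the stronger the domination.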
Besides, searching for talent has costs. You’re much better off searching for talent at top-tier schools than at no-name colleges hoping for a hidden gem.
using the market to get these types of questions answered is hard
What “types of questions” do you have in mind? And wouldn’t liquidity issues be fixed just by popularity?
forecasting is an orthogonal skillset to what it takes to get into a top college.
Let me propose IQ as a common cause leading to correlation. I don’t think the skillsets are orthogonal.
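A toy simulation of that causal structure, with invented coefficients: give both traits a shared ability component plus independent noise, and they come out correlated even though neither causes the other.

```python
# Common cause in miniature: admission and forecasting skill both load
# partly on general ability, with no direct link between them.
import random

random.seed(0)
n = 100_000
ability = [random.gauss(0, 1) for _ in range(n)]
# Each trait = shared ability component + independent noise.
admission = [0.6 * a + random.gauss(0, 1) for a in ability]
forecasting = [0.6 * a + random.gauss(0, 1) for a in ability]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {corr(admission, forecasting):.2f}")  # ~0.26, not 0
```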
What part specifically of that paper do you think was unimpressive?
I read it a while ago and don’t remember enough to do a critique off the top of my head, sorry...
Besides, searching for talent has costs. You’re much better off searching for talent at top-tier schools than at no-name colleges hoping for a hidden gem.
That’s the signaling issue—I’m trying to create a better signal so you don’t have to make that tradeoff.
What “types of questions” do you have in mind? And wouldn’t liquidity issues be fixed just by popularity?
Example question: “How many units will this product sell in Q1 2016?” (where the product is something boring, like a brand of toilet paper)
This is a question that I don’t ever see being popular with the general public. If you only have a few experts in a prediction market, you don’t have enough liquidity to update your predictions. With prediction polls, that isn’t a problem.
Why do you call that “signaling”? A top-tier school has a real, actual, territory-level advantage over a backwater college. The undergrads there are different.
If you only have a few experts in a prediction market, you don’t have enough liquidity to update your predictions. With prediction polls, that isn’t a problem.
I don’t know about that not being a problem. Lack of information is lack of information. Pooling forecasts is not magical.
Why do you call that “signaling”? A top-tier school has a real, actual, territory-level advantage over a backwater college. The undergrads there are different.
Because you’re going by the signal (the college name), not the actual thing you’re measuring for (forecasting ability).
I don’t know about that not being a problem. Lack of information is lack of information. Pooling forecasts is not magical.
I meant it isn’t a problem for making frequent updates. Obviously, fewer participants will lead to less accurate forecasts—but with Brier weighting and extremizing you can still get fairly decent results.
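For concreteness, a hedged sketch of that aggregation recipe: weight each forecaster by historical accuracy (inverse mean Brier score), then extremize the pooled probability by raising its odds to a power, one common form of extremizing. The forecasts, track records, and exponent below are all invented.

```python
# Sketch: Brier-weighted pooling plus odds-space extremizing.

def pool_and_extremize(forecasts, brier_scores, alpha=2.0):
    """forecasts: per-person probabilities for one event.
    brier_scores: each person's historical mean Brier score (lower = better).
    alpha > 1 pushes the pooled probability away from 0.5 (extremizing)."""
    weights = [1.0 / max(b, 1e-6) for b in brier_scores]
    p = sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)
    odds = (p / (1.0 - p)) ** alpha          # extremize in odds space
    return odds / (1.0 + odds)

forecasts = [0.70, 0.65, 0.80]   # three forecasters, one question
history = [0.10, 0.25, 0.15]     # their past mean Brier scores

print(f"pooled forecast: {pool_and_extremize(forecasts, history):.2f}")
```

With these numbers the weighted average is about 0.72, and extremizing pushes it to roughly 0.87; even a handful of decent forecasters yields a usable aggregate.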