Naive measurements are still measurements. Personally, I would like to see more research into prediction as a science. It is difficult because you want to jump ahead decades, and five-year-out predictions have only so much value, but I (naively) predict that we'll get better at prediction the more we concretely practice it. I would (naively) expect the data used to calibrate predictions to come from interviews with researchers, paired with advanced psychology.
Imagine being able to say: "Plotting your personal rate of advancement against the team and the field, we predict a breakthrough in April of next year."
You could try reading up on what is already known. Silver's The Signal and the Noise is not a bad start if you know nothing at all; if you already have some expertise in the area, the anthology Principles of Forecasting edited by Armstrong (available in Google & Libgen IIRC) covers a lot of topics.
If you want non-domain-specific, that’s called statistics, specifically statistical modeling.
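A toy sketch of what "statistical modeling" buys you here (the data, model, and numbers are all invented for illustration): fit a trend to past progress and report a forecast with an explicit error band, so the prediction is checkable.

```python
import numpy as np

# Invented data: some yearly "progress metric" for a field.
years = np.arange(2005, 2015)
progress = np.array([1.0, 1.3, 1.5, 2.0, 2.2, 2.8, 3.1, 3.5, 4.1, 4.4])

# Fit a linear trend (ordinary least squares via polyfit;
# deg=1 returns [slope, intercept]).
slope, intercept = np.polyfit(years, progress, deg=1)

# Residual spread gives a crude error band around an extrapolation.
residuals = progress - (slope * years + intercept)
sigma = residuals.std(ddof=2)  # ddof=2: two fitted parameters

target_year = 2018
forecast = slope * target_year + intercept
print(f"{target_year}: {forecast:.2f} +/- {2 * sigma:.2f} (rough 95% band)")
```

The point is not the linear model (real forecasting would need far more than that); it's that the output comes with a quantified band, which is what separates a testable prediction from a vibe.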
I specifically want it to be more specific.
Do you mean more confidence in the prediction? Narrower error bands?
More domain-specific, invention-specific: plotting events with ever-narrower error bands. If you could isolate, three years in advance, the specific month the news media gets in a buzz about uploading, that would be a significant increase in prediction usefulness.
That doesn’t sound realistic to me. It sounds impossible.
It may be impossible even with significant effort, but it's certainly impossible if no one first tries. I want to know, if nothing else, the limits of reasonable prediction; that itself would be useful information. Even if we can never get past 50% accuracy, knowing that such predictions top out at 50% is itself actionable whenever one is made.
These are very much domain-specific plus are the function of available technology.
You seem to want psychohistory; unfortunately, it's entirely fiction.
I want exactly as I’ve stated: Research into a method.
So, um… go for it?
I’m afraid that the only methods I can think up require vast collection of many different types of data, far beyond what I can currently manage myself.
Actually, there are prediction markets. Unfortunately the most useful one, Intrade, got closed (maybe the three-letter agencies felt threatened?), but hopefully there will be others. Oh, they're far from accurate, don't get me wrong.
But at least if you wanted to have some kind of starting estimate for something you knew nothing about, you could sometimes find one at Intrade.
I suspect it closed because it wasn’t giving the kind of powerful results necessary to get funding-type attention. I suppose to start off, a prediction agency should predict its own success. :P
Now I’m curious about any self-referential predictions Intrade made...
Well then, I can only assure you that I’m certain such research is being actively conducted. I’m pretty sure the Three-Letter Agencies are very much interested in prediction. Any political organisation is very much interested in prediction. Anyone who plays in the financial markets is very much interested in prediction. So no, it doesn’t look like there are too few resources committed to this problem.
Unfortunately, the problem seems to be really hard.
Maybe, but until then we still have to carry on with what we have got.