I’d like to become better calibrated via PredictionBook and other tools, but coming up with well-specified predictions can be very time-consuming.
I don’t think that’s true. I think it’s more that people don’t like making predictions and seek excuses for their inability to do so. Take PredictionBook. I made a bunch of predictions about my own future weight. Making those predictions is quite easy for me.
Other people can use those predictions to predict whether I’m good at predicting my future weight, based on my past PredictionBook performance.
What feedback do I get on PredictionBook? Are people willing to predict how good I am at predicting? No. I get accused of spamming PredictionBook.
When it comes to training calibration, however, it’s probably good to have claims where you don’t have to wait to find out whether you are right or wrong. There are many facts whose truth value can be clearly determined but where the average person isn’t sure whether the fact is true.
Take a good university textbook. Search it for factual claims where a novice could believe that either A or B is true, but where the textbook clearly specifies which one is correct.
It’s a lot more effective to train your calibration on textbook-level questions than to train to be better calibrated at guessing which politician wins an election or which team wins the NBL.
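As a rough illustration of how such textbook-derived questions could be collected and asked, here is a minimal Python sketch of my own; the question record format, the quiz function, and the use of one example question are assumptions for illustration, not an existing tool.

```python
# Minimal sketch (an illustration, not an existing tool): textbook claims
# turned into A/B questions that also ask for a stated confidence.

QUESTIONS = [
    # (prompt, option A, option B, correct option)
    ("Which enzymes catalyse RNA synthesis?",
     "RNA polymerases", "RNA telomerases", "A"),
]

def quiz(questions):
    """Ask each question; record whether the answer was right and the stated confidence."""
    results = []
    for prompt, a, b, correct in questions:
        print(f"{prompt}\n  A) {a}\n  B) {b}")
        answer = input("Your answer (A/B): ").strip().upper()
        confidence = float(input("How likely is it that you are right? (0.5-1.0): "))
        results.append((answer == correct, confidence))
    return results

if __name__ == "__main__":
    print(quiz(QUESTIONS))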
CFAR’s Credence game would profit from moving in the direction of meaningful questions, as I describe in http://lesswrong.com/r/discussion/lw/fn0/credence_calibration_game_faq/7ymq
We are willing to do that for a few predictions; but when you make a ton of predictions which you refuse to mark private and which have hit diminishing returns and which are actively interfering with the ability to monitor every other prediction’s activity by flooding them off the Happenstance page, then don’t be surprised if the language gets stronger!
I’ve also found that it’s hard to come up with predictions which are both well-specified and interesting.
I don’t doubt that it’s a hard problem. I doubt that it’s inherently time-consuming. There are mental barriers that you have to cross, and crossing those barriers is hard.
If you want 250 new predictions per month, here’s something you can do:
Install RescueTime. For your 10 most-visited websites, make predictions about the upper limit of the average time you will spend on each of them over the next week and the next month. Make those predictions at the 10%, 25%, 50%, 75% and 90% confidence levels.
At the end of every week, make new predictions for the next week; at the end of every month, make new predictions for the next month.
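Here is a minimal sketch of what the bookkeeping for this scheme could look like; the site names, minute figures and helper function are made-up assumptions, and the numbers are meant to be read off RescueTime’s dashboard rather than pulled from any API.

```python
# Sketch only: generating well-specified quantile predictions of the kind
# described above. Site names and minute estimates are invented examples.

QUANTILES = [0.10, 0.25, 0.50, 0.75, 0.90]

# For each site, your own upper-bound estimates of average minutes per day
# over the coming week, one estimate per confidence level above.
estimates = {
    "news.ycombinator.com": [10, 15, 25, 40, 60],
    "youtube.com":          [5, 10, 20, 35, 55],
}

def prediction_statements(estimates, horizon="the next week"):
    """One well-specified prediction per (site, confidence level) pair."""
    statements = []
    for site, bounds in estimates.items():
        for q, bound in zip(QUANTILES, bounds):
            statements.append(
                f"My average daily time on {site} over {horizon} will be "
                f"at most {bound} minutes. ({q:.0%} confident)"
            )
    return statements

for line in prediction_statements(estimates):
    print(line)
```

With 10 sites, the weekly rounds plus the monthly round come out at roughly the 250 predictions per month mentioned above.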
Coming up with the idea of using RescueTime as a basis for predictions takes creativity, and that’s hard for most people. I spent several years thinking about the problem to get to a place where coming up with predictions doesn’t take me much time.
Actually making those predictions is not very time-consuming. It takes a lot less time per prediction than browsing the various sites lukeprog proposes in search of interesting predictions.
What makes you think so?
Most predictions in daily life aren’t predictions about sports or about which politician gets elected. Most meaningful predictions that I make in my daily life aren’t of the type you would find on Intrade.
How often do you make a decision in your daily life where it matters which sports team wins? In my life that doesn’t happen. Most of my personal decisions also don’t depend on which politician wins an election.
To get educated, students are sent to university, where they try to learn the knowledge in textbooks. Students who study sport focus on studying sports statistics; students who study politics don’t focus on studying which politician won which election.
Most of the knowledge that people can acquire is outside the category of predictions you find on Intrade.
If people want to learn how the world works, reading textbooks is better than reading the news. By the same token, it makes sense to calibrate on textbook knowledge.
Calibrating on actual personal events is also good: it means that you get better at predicting other personal events.
...aren’t textbook level questions either; the first two paragraphs of your reply strike me as irrelevant to my question.
Textbooks are indeed used in education; that doesn’t establish that what educates most effectively also happens to be what most effectively trains calibration. We have strong reason to doubt that: namely, that many well-educated people are also poorly calibrated.
On the other hand, I’m not aware of strong evidence to the effect that textbook questions are more effective in training calibration than any other type of question (including sports or world events or estimation quizzes, and so on).
That depends. For a student who spends 8 hours per day studying for university, many questions boil down to textbook knowledge. For a scientist doing biology research, it’s also very important to have a firm grasp of the various biology questions that rest on textbook knowledge.
Good rationality training is supposed to make a scientist who studies biology better at biology.
I don’t think that there are many people who are calibrated on their knowledge of textbook questions.
Let me give you an example. Question: Which enzymes catalyse RNA synthesis? A) RNA polymerases, B) RNA telomerases.
The person who answers the question has to say either A or B and state how likely it is that they’re right.
In most university courses, students aren’t asked how likely they think they are to be right. As a result, students aren’t well calibrated about being right.
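To make concrete what being calibrated on such answers would mean, here is a small sketch of how they could be scored; the scoring rule (a Brier score plus a per-confidence-level hit rate) and the sample data are my own illustrative assumptions.

```python
# Sketch only: checking calibration on A/B answers that come with a stated
# confidence. The answer data below is invented for illustration.

from collections import defaultdict

# Each record: (was the chosen answer correct?, stated confidence)
answers = [
    (True, 0.9), (True, 0.7), (False, 0.9), (True, 0.6),
    (False, 0.6), (True, 0.8), (True, 0.9), (False, 0.7),
]

# Brier score: mean squared gap between stated confidence and the outcome
# (0 is perfect; always saying "50%" would score 0.25).
brier = sum((conf - (1.0 if correct else 0.0)) ** 2
            for correct, conf in answers) / len(answers)
print(f"Brier score: {brier:.3f}")

# Calibration table: within each confidence level, how often were you right?
by_level = defaultdict(list)
for correct, conf in answers:
    by_level[conf].append(correct)

for conf in sorted(by_level):
    hits = by_level[conf]
    print(f"said {conf:.0%}: right {sum(hits) / len(hits):.0%} of {len(hits)} answers")
```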
It seems to me this could be a smartphone app. Whenever a person wants to make a prediction about a personal event, they click on the app and speak, with a pause between the prediction and how likely they think it is. The app could just store the verbatim text, separating question and answer, and timestamping recordings in case you want to update your prediction later. If you learn to specify when you think the outcome will occur, it can make a sound to remind you to check off whether it happened; otherwise it could remind you periodically, like at the end of every day. Why couldn’t it have data-analysis tools to let you visualize calibration, or find useful patterns and alert you? Seems a plausible app to me.
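To give a rough sense of how little machinery such an app would need, here is a minimal sketch of the record it might store and the reminder check it might run; every field name and function below is a hypothetical illustration, not a description of any existing app.

```python
# Hypothetical sketch of a prediction log for the app described above.
# Field names, functions, and example predictions are invented.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Prediction:
    text: str                        # verbatim statement of what was predicted
    confidence: float                # stated probability that it comes true
    created: datetime = field(default_factory=datetime.now)
    due: Optional[datetime] = None   # when the outcome should be known, if stated
    outcome: Optional[bool] = None   # filled in once the prediction resolves

log: List[Prediction] = []

def record(text: str, confidence: float, days_until_known: Optional[int] = None) -> None:
    """Store a spoken or typed prediction with a timestamp and an optional due date."""
    due = datetime.now() + timedelta(days=days_until_known) if days_until_known else None
    log.append(Prediction(text, confidence, due=due))

def due_for_checking(now: Optional[datetime] = None) -> List[Prediction]:
    """Unresolved predictions to remind the user about, e.g. at the end of the day."""
    now = now or datetime.now()
    return [p for p in log if p.outcome is None and (p.due is None or p.due <= now)]

record("I will go to the gym at least three times this week", 0.7, days_until_known=7)
record("The package I ordered yesterday will arrive by Friday", 0.8)
print(due_for_checking())
```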