So I’m interested in forecasting. It’s an important skill. I keep going on about it because I want to be good at it: smart and well calibrated about what matters to me.
Well, to start with: what evidence do you have at the moment about how well calibrated you are?
Are they different in kind? I’m uncertain.
The distinction seems arbitrary at first glance, both because what’s personal for one person is impersonal for another and because causality is causality no matter where it occurs. However, if you meant that they’re different in kind in a more epistemic sense, that from any particular perspective they differ in how they pass through your reasoning process, then that seems plausible.
The question is then what types of data work best, and why. You’re likely to have less data overall in Near Mode, but you’ll be working with things that matter to you personally, which it seems evolution would favor (individual selection).
On the other hand, evolution seems to make biases more frequent and more intense when they’re about personal matters. But evolution wouldn’t do this if it hadn’t worked often in the past, so perhaps those biases are good? I think that this is fairly plausible, but I also think that these biases would only be “good” in a reproductive sense and not in the sense of epistemic accuracy. They would move you towards maximizing your social status, not the quality of your predictions. It’s unlikely those would overlap.
How likely is it that people are good at evaluating the credibility of specific people’s ideas? I would say that most people are probably bad at this face to face, because of things like the halo effect and because credibility is rather easy to fake, but rather good at it when the evaluation happens at a distance. Are these evaluations still accurate when they interact with social motivations, like rivalry? I would say they probably end up even worse under those circumstances.
So I believe that personal events and impersonal events should be considered differently, because I believe evaluating the accuracy of specific experts’ views improves the accuracy of your predictions if and only if you avoid personal familiarity or intimacy with those experts; otherwise it damages your accuracy.
I failed to consider the implications of social motivation for professional accuracy, and a bunch of other stuff.
I’m sorry, either I’m misunderstanding you or you misunderstood my comment. I don’t understand what you mean by the phrase “choosing types of data”. I think that although we’re better at dealing with some types of data, that doesn’t mean we should focus exclusively on that type of data. I think that becoming a skilled general forecaster is a very useful thing and something that should be pursued.
What sort of questions did you have in mind?
Well, I can give you an argument, though you’ll have to evaluate the strength of it yourself.
Forecasting, in a Bayesian sense, is a matter of repeated application of Bayes’ theorem. In short, I make an observation (B) and then ask—what are the chances of prediction (A), given observation (B)? (‘Prediction’ may be the wrong word, given that I may be predicting something unseen that has already happened). Bayes’ theorem states that this is equal to the following:
The chances of observation B, given prediction A, multiplied by the prior probability of prediction A, divided by the prior probability of observation B
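In symbols, that’s just the usual statement of Bayes’ theorem: P(A|B) = P(B|A) × P(A) / P(B).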
Now, the result of the equation is only as good as the figures you feed into it. In your example of the freelancer, the new freelancer (just starting out) has poor estimates of the probabilities involved, though he can improve these estimates by asking a more experienced freelancer for help. The experienced freelancer, on the other hand, has a better grasp of the input probabilities, and thus gets a more accurate output probability. The equation works for both large-scale, macro events and small-scale, personal events; the difference is, once again, a matter of the input numbers. For a macro event, you’ll have more people looking at, commenting on, and discussing the situation; reading the words of others will improve your estimates of the probabilities involved, and putting better numbers in will get you better numbers out. Also, with macro events, you’re more likely to have the time to sit down with pencil and paper and work it out.
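If it helps to see the mechanics, here’s a minimal sketch in Python. The scenario (a client replying within a day) and every number in it are made up purely for illustration; an experienced freelancer’s edge is precisely that his versions of these inputs would be better:

```python
# One application of Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
def bayes_update(prior_a, p_b_given_a, p_b):
    return p_b_given_a * prior_a / p_b

# Illustrative figures only; an experienced freelancer would have
# far better estimates for all three inputs.
p_win = 0.20             # prior: P(win a typical contract)
p_fast_given_win = 0.70  # P(client replies within a day | they hire you)
p_fast = 0.35            # P(client replies within a day), overall

posterior = bayes_update(p_win, p_fast_given_win, p_fast)
print(f"P(win | fast reply) = {posterior:.2f}")  # 0.40

# Repeated application: the posterior becomes the prior for the next
# observation (here, a hypothetical follow-up question from the client).
posterior = bayes_update(posterior, 0.60, 0.50)
print(f"P(win | fast reply, follow-up) = {posterior:.2f}")  # 0.48
```

Note that the equation itself is trivial; all of the work is in the input numbers, which is exactly where experience pays off.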
However, predicting macro events will help you practice the equation, and thus learn how to apply it more quickly and easily to micro events. Sufficient practice will also help you more quickly and accurately estimate the result for a given set of inputs. So while it is true that the skill of guessing the input probabilities for macro events may have little to do with the skill of guessing the input probabilities for micro events (though there is some correlation; the skill of accurately putting figures to a probability may transfer to some degree), the skill of applying the equation itself transfers between the two realms.
To continue his line of argument, evolution has gifted us with social instincts superior to our best attempts at rationality. Allowing bias to have its way with us will make us better off socially than we could be otherwise, provided that certain other conditions are met. Forcing flawed attempts at rationality into our behavior may well just corrupt the success of our instincts.
I think I would sort of believe that, with some caveats. For individuals who are good looking and good conversationalists and who value social success over everything else, it probably makes sense to avoid rationality training, as there’s only a chance it can hurt you. So I agree with him in cases like that. But for other individuals, such as those who are unattractive, or bad conversationalists, or who value things other than social success, rationality might be the best strategy, because there’s only a chance it can help you. Learning about biases can hurt you; similarly, making your ability to predict things more rigorous can do the same.
I’m uncertain how much I believe that, but I think the general idea is at least not obviously false, and ultimately more true than false. I believe most people would not do well if they suddenly started working on improving their rationality and predictive accuracy.