However, a lot of the community’s mid-term plans seem to be riding on the success of the SIAI project, and although I am not qualified to judge its probability...
This seems like something where there’s a high value-of-information. As a Bayesian, you can come up with a (subjective) probability of more or less anything, even if it’s just your original arbitrary prior probability. The feeling of “not being qualified to judge” corresponds (I think) to expecting that further information (which is currently available to experts) would be highly valuable when it comes to making decisions.
So do you feel that further information on this topic wouldn’t be valuable (e.g. no useful skills to contribute), or do you feel that the cost is too high (e.g. you’ve read everything that’s immediately available and it still doesn’t make any sense)?
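To make the value-of-information point concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration (the prior, the payoffs, the two-action framing); it just shows the gap between deciding on your prior alone and deciding with better information:

```python
# Expected value of perfect information (EVPI) for a toy two-action decision.
# All numbers are illustrative, not estimates of anything real.

prior = 0.1  # subjective P(project succeeds)

# payoff[action][outcome]: utility of each action under each outcome
payoff = {
    "donate":      {"success": 100.0, "failure": -10.0},
    "dont_donate": {"success":   0.0, "failure":   0.0},
}

def expected(action, p):
    """Expected utility of an action given P(success) = p."""
    return p * payoff[action]["success"] + (1 - p) * payoff[action]["failure"]

# Best you can do acting on the prior alone:
ev_prior = max(expected(a, prior) for a in payoff)

# Best you could do if an expert told you the outcome in advance:
ev_informed = (prior * max(payoff[a]["success"] for a in payoff)
               + (1 - prior) * max(payoff[a]["failure"] for a in payoff))

print(f"EV acting on prior:   {ev_prior:.1f}")     # 1.0
print(f"EV with perfect info: {ev_informed:.1f}")  # 10.0
print(f"Value of information: {ev_informed - ev_prior:.1f}")  # 9.0
```

When that gap is large relative to the cost of acquiring the information, gathering more information dominates deciding now, which is what I mean by high value-of-information.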
As a Bayesian, you can come up with a (subjective) probability of more or less anything, even if it’s just your original arbitrary prior probability.
This is a misleading claim. Deciding whether a set of probability assignments over logically linked claims is even coherent (the probabilistic-satisfiability problem, PSAT) is NP-complete, and therefore believed to be intractable in general. Just because you aspire to have probabilistic beliefs doesn’t mean you do, or even that you can in any computationally realistic sense.
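To illustrate what the intractability amounts to, here is a brute-force coherence check, a sketch with made-up numbers: it enumerates all 2^n possible worlds and asks an LP solver whether any joint distribution reproduces the asserted probabilities. The enumeration over worlds is exactly what blows up as the number of claims grows (requires scipy):

```python
# Brute-force coherence check: does ANY joint distribution over the 2^n
# truth assignments reproduce the stated probabilities?
from itertools import product
from scipy.optimize import linprog

atoms = ["A", "B"]
worlds = list(product([False, True], repeat=len(atoms)))  # 2^n "possible worlds"

# Each constraint: (event as a predicate on a world, asserted probability)
constraints = [
    (lambda w: w[0],          0.7),  # P(A)     = 0.7
    (lambda w: w[1],          0.6),  # P(B)     = 0.6
    (lambda w: w[0] and w[1], 0.1),  # P(A & B) = 0.1
]

# Variables: one probability per world. Feasibility LP: the marginals must
# match, the world-probabilities must be non-negative and sum to 1.
A_eq = [[1.0] * len(worlds)] + [
    [1.0 if event(w) else 0.0 for w in worlds] for event, _ in constraints
]
b_eq = [1.0] + [p for _, p in constraints]

result = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, 1)] * len(worlds))
print("coherent" if result.success else "incoherent")
# Here: incoherent, since P(A & B) must be >= P(A) + P(B) - 1 = 0.3 > 0.1.
```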
This is true. I’m not sure that the probabilities you assign need to be globally consistent in order to be useful, though: you can update them at the point where you discover an inconsistency.
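For instance, a toy sketch of the kind of local repair I have in mind, with made-up numbers: detect a violated constraint and patch only the offending value, rather than recomputing everything.

```python
# Local repair: P(A & B) can never exceed min(P(A), P(B)). When that
# constraint is found to be violated, clamp just the offending value.
beliefs = {"A": 0.7, "B": 0.6, "A&B": 0.8}  # made-up, inconsistent numbers

upper = min(beliefs["A"], beliefs["B"])
if beliefs["A&B"] > upper:
    beliefs["A&B"] = upper  # update at the point the inconsistency surfaces

print(beliefs)  # {'A': 0.7, 'B': 0.6, 'A&B': 0.6}
```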
Also, there’s surely something analogous to value-of-information even when you don’t have probabilistic beliefs as such?