While I would like to know whether or not aliens visited earth, I think it’s more useful to simply take the stance “I don’t know” instead of thinking in terms of probability.
You can’t really “not think in terms of probability” by refusing to think about probabilities explicitly. It can sometimes be a useful exercise to think about or write down everything in terms of numerical probabilities and likelihoods and then throw that away and “go with your gut”, but if your beliefs are coherent they imply an underlying probabilistic model, whether you acknowledge it or not.
The mental move of rounding “my prior against aliens is high” down to “no aliens”, and then down to “no need to do anything”, is bad: if enough people make it, we won’t gather more evidence.
Investigating the question of whether aliens have visited earth may be valuable enough to overcome the low prior. However, I predict in advance that congressional hearings are not going to yield much evidence on this question one way or the other, unless they’re focused on investigating e.g. the specific claims in the tweet thread above. In general, these kinds of hearings are not known for their truth-seeking or fact-finding ability.
Re: the tweet thread you linked to. One of the tweets is:
Given that the DoD was effectively infiltrated for years by people “contracting” for the government while researching dino-beavers, there are now a ton of “insiders” who can “confirm” they heard the same outlandish rumors, leading to stuff like this: [references Michael Schellenberger]
Maybe, but this doesn’t add up to me, because Schellenberger said his sources had had decades-long careers in government agencies. It didn’t sound like they had just started their careers as contractors in 2008–2012.
You can’t really “not think in terms of probability” by refusing to think about probabilities explicitly.
That just mistakes how the human mind and human intelligence work. Our brains are not built to think in terms of probability.
if your beliefs are coherent they imply an underlying probabilistic model, whether you acknowledge it or not.
I think that most people who believe their beliefs are completely coherent are deluding themselves. Assuming that completely coherent beliefs are the natural state of the human mind mistakes a lot about what goes on in human minds. For a long time in AI research, there was a belief that an AI would likely have coherent beliefs. With GPT, we see that the best intelligence we can build on computers doesn’t seem to have coherent beliefs either.
Julia Galef writes about how noticing confusion is a key skill of a rationalist. The state of noticing confusion is one where you see that the evidence doesn’t seem to really fit and you don’t have a good idea of the right hypothesis.
Confusion calls for more investigation. It’s normal not to have a clear hypothesis when you are investigating while confused.
Thomas Kuhn writes about how new scientific paradigms always start with someone seeing anomalies and investigating them. If you don’t engage in that investigation because you don’t have a decent-probability hypothesis of how the facts fit together, you are not going to find new paradigms, because finding them involves working for a decent amount of time in a space with a lot of unknowns.
That just mistakes how the human mind and human intelligence work. Our brains are not built to think in terms of probability.
I didn’t intend to claim anything about how the brain or human intelligence works. Rather, I’m saying probability theory points at a correct way to reason for ideal agents, which humans can try to approximate. I expect approximations which involve thinking explicitly in terms of probabilities (not necessarily only in terms of probabilities) will tend to outperform approximations that don’t.
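As a toy illustration of what “thinking explicitly in terms of probabilities” can look like, here is Bayes’ rule in odds form. All the numbers are invented placeholders for the example, not actual estimates about aliens:

```python
# Toy Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
# The prior and likelihood ratio below are illustrative placeholders only.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability given a likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A low prior (1 in 1000) combined with weakly favorable evidence
# (likelihood ratio of 10) still leaves the hypothesis unlikely:
p = posterior_probability(0.001, 10.0)
print(round(p, 4))  # 0.0099
```

The point of the exercise is not the specific numbers but the discipline: writing them down forces the prior and the strength of the evidence to be considered separately.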
Anyway, back to the object level: I would welcome more evidence on the question of aliens, but I personally don’t feel that confused by current observations, and believe they are well-explained by higher prior probability hypotheses that do not involve aliens.
Perhaps the reason this post received some downvotes: it reads somewhat as a call for others to do expensive investigatory work and / or deductive thinking. Personally, I feel I’ve already done enough investigation and deduction on my own on this topic, and more (by myself or others) is probably not worth the effort.
Note, there’s sometimes a tradeoff between gathering more facts and thinking longer to deduce more from the facts you already have. In this case, I think there’s already more than enough evidence available for an ideal agent to conclude from a cursory inspection that the observed evidence is not well-explained by actual aliens. But you don’t need to be an ideal agent to draw similar conclusions: you merely need to apply some effort and reasoning skills which are pretty common among LW readers, but not so common outside these circles (some of the skills I have in mind are those described by the bullet points in my reply here.)
Rather, I’m saying probability theory points at a correct way to reason for ideal agents, which humans can try to approximate.
Probability theory does not do that. It does not make your reasoning robust against unknown unknowns.
In this case, I think there’s already more than enough evidence available for an ideal agent to conclude from a cursory inspection that the observed evidence is not well-explained by actual aliens.
From my perspective it doesn’t look like there is an explanation that well-explains the available evidence. That goes both for alien-involving explanations and for non-alien-involving explanations. That’s what makes the situation confusing.
But you don’t need to be an ideal agent to draw similar conclusions: you merely need to apply some effort and reasoning skills which are pretty common among LW readers, but not so common outside these circles
I’m unsure why you believe that LW readers are that much better at reasoning than highly promoted intelligence analysts.
I see a problem with the tweet thread: the main evidence it uses against the UFO cases is the impossibility of dino-beavers. But the most plausible explanation is glitches in the matrix, and such glitches would be equally likely to create “alien craft” and “dino-beavers”. Both are equally absurd.
more useful to simply take the stance “I don’t know”
if your beliefs are coherent they imply an underlying probabilistic model
Decision-making motivates having some way of conditioning beliefs on influence at given points of intervention; that’s how you estimate the consequences of those interventions and their desirability. Taking the stance “I don’t know” seems to me analogous to considering how a world model varies with (depends on) the thing you don’t know: how it depends on what the thing turns out to be, or on what the credences about the facts surrounding it are.
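One way to make this concrete (a sketch with invented payoffs and an invented credence, not a model of the actual alien question): once a decision with asymmetric payoffs depends on the unknown fact, even a “don’t know” stance has to cash out as some credence.

```python
# Expected-value comparison of two interventions whose payoffs depend on an
# unknown fact. The payoffs and the credence are made up for illustration.

def expected_value(credence: float, payoff_if_true: float, payoff_if_false: float) -> float:
    return credence * payoff_if_true + (1.0 - credence) * payoff_if_false

credence_fact = 0.01  # placeholder credence that the unknown fact holds

# "investigate" pays off a lot if the fact is true and costs a little otherwise;
# "ignore" pays nothing either way.
ev_investigate = expected_value(credence_fact, 1000.0, -2.0)
ev_ignore = expected_value(credence_fact, 0.0, 0.0)

best = "investigate" if ev_investigate > ev_ignore else "ignore"
print(best)  # with these numbers, investigating wins despite the low credence
```

Whichever action you pick, the choice implies a range of credences under which it is the better one; refusing to name a number doesn’t remove that dependence.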
Here’s one plausible alternative explanation of the facts that I happened to come across on Twitter: https://twitter.com/erikphoel/status/1667197430197022722
Link to post with Schellenberger article details: https://www.lesswrong.com/posts/bhH2BqF3fLTCwgjSs/michael-shellenberger-us-has-12-or-more-alien-spacecraft-say