If they actually have no logical basis then it would be hard to expect that they would be better than random.
But a feeling that something is likely true is a logical basis, since it is caused by something, and it could be why someone makes correct judgments at a rate better than random. For example, when I tried calibration games, whenever a binary question came up on a topic I knew nothing about, I guessed based on whichever option felt more likely. So if the question was “Did team A or team B win the Super Bowl in 1984?” I had no knowledge of the answer, because I am not interested in sports and pay no attention to them. But one of the team names might have felt slightly more familiar than the other, so I guessed that name.
Following that policy of trusting my feelings, I never scored below 60% accuracy on the binary questions I had no real knowledge about.
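Whether 60% is meaningfully better than chance depends on how many questions were answered, which is why the count matters. A minimal sketch of that check, using an exact binomial tail probability (the question count of 50 is a hypothetical, since the thread never states one):

```python
from math import comb

def p_value_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of getting k or more correct out of n under pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session: 60% accuracy over 50 questions (30 correct).
# The tail probability comes out around 0.10 -- suggestive, but a longer
# run at 60% would be needed to rule out luck with confidence.
print(p_value_at_least(30, 50))
```

At 60% accuracy the p-value shrinks quickly as the question count grows, so "quite a while" at that rate would indeed point to a real signal in the felt familiarity.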
I interpreted that sentence to mean that the belief itself has no logical basis, not the judgements.
From the standpoint of treating a human being as a rational agent, it makes absolutely no difference whether a judgement is based on feeling, belief, intuition, divine inspiration, etc. All that matters is the decision and the outcome.
That’s a wonderful observation. Do you remember how many questions you counted?
But the belief could be said to be “true”, if you’re strict about what “logical” means. The brain doesn’t use logical representations; it uses neural activation patterns, which are non-discrete, and which, unlike signs, exist in a metric space. However it combines these, it does so computationally, which is probably what you and I mean by logically. But AFAIK no logics that anyone actually uses for any applications have the power of a Turing machine.
In semiotics, they are that strict about what “logical” means. They believe that people think only in words. This is silly to anyone who knows much about biology or neuroscience. But if one believes that, then notes that people know things that can’t be represented in the words we have for those concepts—for instance, knowing not just that they feel hot or cold, but how hot or how cold they are—one might call it extra-logical.
I would be very careful of drawing conclusions like that. I don’t think that experts in that domain think that every sign is a word in the way the “word” is commonly understood by laypeople.
I don’t remember the number of questions but I played it for quite a while. I remember being frustrated because I couldn’t get the 50% option calibrated, since I didn’t want to purposely choose the option that felt less likely. Presumably more practice would have helped distinguish between weaker and stronger impressions of that kind.
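The calibration being described can be sketched as grouping answers by stated confidence and comparing each bucket's observed hit rate to its label. This is a minimal illustration with made-up numbers, not data from the game; it shows the situation described above, where the "50%" bucket lands at 0.6 because the faint feelings were never bet against:

```python
from collections import defaultdict

def calibration_report(answers):
    """answers: list of (stated_confidence, was_correct) pairs.
    Returns observed accuracy per stated-confidence bucket."""
    buckets = defaultdict(list)
    for conf, correct in answers:
        buckets[conf].append(correct)
    return {conf: sum(hits) / len(hits) for conf, hits in sorted(buckets.items())}

# Hypothetical session: "50%" guesses driven by a faint feeling hit 6 of 10,
# so that bucket reads 0.6 instead of the 0.5 it was labeled with.
sample = ([(0.5, True)] * 6 + [(0.5, False)] * 4
          + [(0.7, True)] * 7 + [(0.7, False)] * 3)
print(calibration_report(sample))  # {0.5: 0.6, 0.7: 0.7}
```

Perfect calibration of the 50% bucket would require the stated confidence to match the hit rate, which is exactly why refusing to bet against the faint feeling keeps that bucket above 0.5.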
I agree that the brain does this kind of thing computationally, and it would be a mistake to suppose that there is some other non-computational way to do it.
This is a very common belief. I’ve even seen seasoned mathematicians and AI gurus make this mistake, often unwittingly.
It’s no mistake. There is good research showing that pattern matching works and that human brains are good at it. No expert wins at chess or Go with a strategy based on logic.
When we move to more formal ontology, we can think of logic as predicate calculus. David Chapman argues in “Probability and Logic” that when experts say “logic” they usually mean predicate calculus.
Barry Smith, who does applied ontology for bioinformatics, argues in “Against Fantology” that a lot of important knowledge cannot be represented in predicate calculus. Smith’s approach to ontology seems to be well accepted within bioinformatics as useful in practice.
If you look at standard semiotics, reasoning by way of metaphor is not a process that fits into predicate calculus. If you accept that there are cases where people reach correct judgements by reasoning with metaphors, and you use the standard formal definition of logic, then you are dealing with reasoning processes that are not logic-based. It’s very worthwhile to study how those reasoning processes work and how to reason well with metaphors.
At the same time, I’m not sure whether Lacan or Derrida succeeded at producing knowledge that’s useful in practical contexts the way someone like Barry Smith did.