In many fields, intuitions are just not very reliable. For example, in math, many of the results in both topology and set theory are highly counter-intuitive. If one is reaching a conclusion primarily based on intuitions, that should be a cause for concern.
On the other hand, working on topology for a while gives one the meta-intuition that one should check reasonable-sounding statements on the long line, the topologist's sine curve, the Cantor set, etc.
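For a concrete instance of the kind of check being described (a standard textbook example, not one singled out in the comment): the topologist's sine curve refutes the reasonable-sounding claim that every connected space is path-connected.

```latex
% The topologist's sine curve: connected, but not path-connected,
% since no path can reach the vertical segment from the oscillating graph.
S \;=\; \bigl\{\, (x, \sin(1/x)) : 0 < x \le 1 \,\bigr\}
    \;\cup\; \bigl\{\, (0, y) : -1 \le y \le 1 \,\bigr\}
```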
Or better, one’s idea of what constitutes a “reasonable-sounding statement” in the first place changes, to better accommodate what is actually true.
(Checking those examples is good; but even better would be not to need to, due to having an appropriate feeling for how abstract a topological space is.)
Completely agreed. Part of this might look like a shift in definitions/vocabulary over time. Coming to topology from analysis, sequences felt like a natural way to interrogate limiting behavior. After a while, though, it became clear that thinking sequentially requires putting first-countability assumptions everywhere. Introducing nets did away with the need for this assumption and better captured what convergence ought to mean in general topological spaces.
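A sketch of the definition at play (standard material, not taken from the comment): a net replaces the index set of a sequence with an arbitrary directed set, and with that one change the familiar characterizations of closure and continuity hold in every topological space, not just the first-countable ones.

```latex
% A net (x_\alpha)_{\alpha \in A}, indexed by a directed set (A, \le),
% converges to x when it is eventually inside every open neighborhood of x:
x_\alpha \to x
  \quad\Longleftrightarrow\quad
  \forall\, U \ni x \ \text{open},\ \exists\, \alpha_0 \in A :\
  \forall\, \alpha \ge \alpha_0,\ x_\alpha \in U.
% In a general space, x lies in the closure of a set S iff some net in S
% converges to x; the sequential version of this needs first countability.
```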
Sure. But we don’t have much in the way of actual AI to check our intuitions against in the same way.
Do you believe ZFC (or even PA) to be consistent? Can you give a reason for this belief that doesn't rely on your intuition?
Heck, can you justify the axioms used in those systems without appeal to your intuition?
This is a valid point: sometimes we do rely on intuition. But can one reasonably distinguish this case from that of ZFC or PA? I think the answer is yes.
First, we do have some other (albeit weak) evidence for the consistency of PA and ZFC. In the case of PA, we have what looks like a physical model that seems pretty similar. That's only a weak argument, because the full induction axiom schema is much stronger than anything one can represent in a reasonable fashion with any finite chunk of PA. We have also spent a large amount of time proving theorems in both PA and ZFC without seeing a contradiction, and that is after a lot of experience with systems like naive set theory, which has given us what seems to be a good idea of how to find contradictions in such systems. This is akin to having a functional AGI and seeing what it does in at least one case for a short period of time. Of course, this argument is also weak, since Gödelian issues imply that there should be axiomatic systems that are fairly simple and yet have contradictions that only appear in extremely long chains of inference relative to the complexity of the systems.
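For reference, the schema in question: PA contains one induction axiom for each first-order formula in the language of arithmetic, so the schema packs in infinitely many axioms, which is part of why no finite physical model can witness it wholesale.

```latex
% Induction schema of PA, one instance per formula \varphi(n):
\bigl(\varphi(0) \,\land\, \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr)
  \;\rightarrow\; \forall n\, \varphi(n)
```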
Second, in the case of PA (and to a slightly lesser extent ZFC), different people who have thought about the question have arrived at the same intuition. There are of course a few notable exceptions, like Edward Nelson, but those exceptions are limited, and in many cases, like Nelson's, there seem to be other, extra-mathematical motives for reaching their conclusions. This is in contrast to the situation in question, where a much smaller number of people have thought about the issues, and they haven't reached the same intuition.
A third point is that we have consistency proofs of PA that use somewhat weak systems; Gentzen's theorem is the prime example. The forms of induction required are extremely weak compared to the full induction schema, provided one is allowed a very tiny bit of ordinal arithmetic. I don't know what the relevant comparison would be in the AGI context, but this seems like a type of evidence we don't have there.
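To pin down the claim (a standard statement of the theorem, not from the comment itself): Gentzen showed that primitive recursive arithmetic, plus quantifier-free transfinite induction up to the ordinal epsilon-zero, proves the consistency of PA; the only step beyond ordinary finitary reasoning is that one piece of ordinal arithmetic.

```latex
% \varepsilon_0 is the least ordinal closed under \alpha \mapsto \omega^{\alpha}:
\varepsilon_0 \;=\; \sup\{\,\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\,\},
  \qquad\text{the least } \varepsilon \text{ with } \omega^{\varepsilon} = \varepsilon.
% Gentzen (1936): PRA + quantifier-free induction up to \varepsilon_0
% proves \mathrm{Con}(PA).
```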
Thinking about this some more, I don't think our intuitions are particularly unreliable; it's simply more memorable when they fail.
I wonder if I might be missing your point, since this post is basically asking what one should do after one is already concerned. Are you saying that the first step is to become concerned (and perhaps one or both parties in my example aren’t yet)?
Yes; frankly, I'm not sure that either party in question is demonstrating enough concern about the reliability of their intuitions.
Many of the results are counter-intuitive, but most are not, especially for someone trained in that area. In fact, intuition is required to make progress in math.
But that intuition is in many cases essentially many years of experience with similar contexts, all put together, operating in the back of one's head. In this case, the set of experiences to inform/create intuition is pretty small. So when there are strongly contradicting intuitions, it isn't at all clear which one makes more sense to pay attention to.