Can you rotate four-dimensional solids in your head?
Well, suppose I’m colorblind from birth. I can’t visualize green. Is this significantly different from the example of 4d rotations?
If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we’re not deficient in conceptualizing them, just in imagining them; see the sketch after this comment. Arguably, computers can’t visualize them either. They just do the math and move on.)
If not, then is this the only kind of thought (i.e., visualizations and the like) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we’ve rendered the original quote trivial: it infers, from the fact that it’s possible to be unable to see a color, that it’s possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn’t saying anything.
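To make the "just do the math" point concrete, here is a minimal sketch (Python with NumPy; the function name, the choice of plane, and the angle are my own, not anything from the original discussion) that rotates the vertices of a tesseract in the x-w plane. The arithmetic is easy; the difficulty is purely in picturing the result.

```python
import numpy as np

def rotation_4d_xw(theta):
    """Rotation by angle theta in the x-w plane of 4D space.

    A simple 4D rotation acts in a single plane (here the one spanned by
    axes 0 and 3) and leaves the orthogonal y-z plane untouched.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3] = c, -s
    R[3, 0], R[3, 3] = s, c
    return R

# The 16 vertices of a unit tesseract: every combination of 0/1 coordinates.
vertices = np.array([[(i >> k) & 1 for k in range(4)] for i in range(16)], dtype=float)

# Rotating them is just a matrix product -- trivial to compute,
# even though the result is hard to visualize.
rotated = vertices @ rotation_4d_xw(np.pi / 4).T
print(rotated.round(3))
```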
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not a physicist, but I have been taught that beyond the simplest atoms the calculations become so difficult that we’re unable to determine whether our quantum models actually predict the configurations we observe. In this case we can’t simply do the math and move on, because the math is too hard for us. With our own mental hardware, it appears that above a certain level of complexity we can neither visualize nor predict the behavior of particles at that scale, but that doesn’t mean a Jupiter brain couldn’t.
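For a rough sense of why that math is out of reach, here is a back-of-the-envelope sketch, assuming a deliberately naive grid representation of the many-electron wavefunction (my own illustration, not a claim about how actual quantum-chemistry methods work):

```python
import math

def naive_grid_values(n_electrons, points_per_axis=10):
    """Grid values needed to tabulate an N-electron wavefunction naively.

    Each electron has 3 spatial coordinates, so the wavefunction depends on
    3N variables; a grid with k points per axis needs k**(3N) entries
    (spin is ignored to keep the estimate simple).
    """
    return points_per_axis ** (3 * n_electrons)

for name, n in [("hydrogen", 1), ("helium", 2), ("iron", 26), ("uranium", 92)]:
    print(f"{name:8s}: ~10^{int(math.log10(naive_grid_values(n)))} grid values")

# Uranium (92 electrons) comes out around 10^276 values, versus roughly
# 10^80 atoms in the observable universe -- "just do the math and move on"
# is simply not available at that scale.
```

Real methods are vastly cleverer than a brute-force grid, of course, but the point stands that the exact problem grows exponentially with the number of particles.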
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not discounting qualia (that’s its own discussion), I’m just saying that if these are the only kinds of thoughts we can defend as being potentially unthinkable by us, then the original quote is trivial.
So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think.
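For a crude sense of scale on the lifespan point, here is a small arithmetic sketch (the 2**80 case count and the one-case-per-millisecond rate are arbitrary assumptions of mine, chosen only to illustrate the idea):

```python
# A "thought" too big for a lifetime: suppose thinking it requires serially
# entertaining 2**80 distinct cases, at one case per millisecond.
cases = 2 ** 80
seconds = cases / 1000                       # one case per millisecond
years = seconds / (60 * 60 * 24 * 365.25)

human_lifespan_years = 120                   # a generous upper bound
print(f"time needed:      ~{years:.2e} years")
print(f"lifespans needed: ~{years / human_lifespan_years:.2e}")
# Roughly 4e13 years -- thousands of times the age of the universe. The thought
# is "unthinkable" for a human, but only because of resource limits, not logic.
```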
I think this suggestion is basically the same as yours: what prevents us from thinking this thought is some limited resource, like memory or lifespan, or something like that. Similarly, I could suggest a language that is supposedly untranslatable in principle, just because all well-formed sentences and clauses in that language are so long that we couldn’t remember a whole one.
But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super-complex) thoughts and the super-long sentences are translatable, in principle, into a language we can understand. There’s nothing about those thoughts or sentences, or about our thoughts or sentences, that makes them incompatible; the incompatibility arises from a fact about our biology. Along the same lines, we could say that some alien species’ language is untranslatable because they speak and write in some medium we don’t have the technology to access. The problem there isn’t with the language or with the act of translation.
In sum, I think this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability rests on some accidental limitation of access or resources, then I can’t see what’s interesting about the idea. In that case it’s no more interesting than the point that I can’t speak Chinese because I haven’t learned it.