If space weren’t a priori then we couldn’t become fully confident of geometrical laws such as “a square turned 90 degrees about its center is the same shape”; we’d have to learn these laws from experience, running into Hume’s problem of induction.
This is false. Hume’s problem of induction can be avoided by the very simple expedient of not requiring “fully confident” to be perfect, probability-1 confidence. Learning laws from experience is entirely sufficient for 99.99% confidence, and probably still good up to ten or even twenty nines.
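As a toy illustration of that point (my own sketch, not part of the original argument), Laplace’s rule of succession shows the confidence you get from nothing but repeated confirming experience: start from a uniform prior on whether a regularity holds on any given trial, and after n confirmations the probability that it holds on the next trial is (n + 1)/(n + 2), which climbs as close to 1 as you like without ever reaching it.

```python
# Toy sketch: confidence from experience alone, via Laplace's rule of
# succession.  With a uniform Beta(1, 1) prior, after n trials that all
# confirmed a regularity, the posterior probability that the next trial
# also confirms it is (n + 1) / (n + 2).
from fractions import Fraction

def confidence_after(n_confirmations: int) -> Fraction:
    """Posterior predictive probability of one more confirmation."""
    return Fraction(n_confirmations + 1, n_confirmations + 2)

for n in (10, 10_000, 1_000_000):
    c = confidence_after(n)
    print(f"{n:>9} confirmations -> {float(c):.8f} (shortfall {float(1 - c):.1e})")
# 10_000 confirmations already gives ~99.99%; the shortfall keeps
# shrinking but never hits zero, so probability-1 confidence never arrives.
```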
This is a logical fallacy, and it has a very precise mirror: the Chomskyan view of language syntax, which has been experimentally disproven. To summarize the linguistic debate: Noam Chomsky founded the modern field of syntax and maintained, on the same grounds of the impossibility of induction, that we must be born with an a priori internal model of syntax, within which children’s language learning consists of assigning values to a finite set of free parameters, such as “Subject-Verb-Object” sentence structure (SVO) vs. SOV/VSO/VOS/OVS/OSV. He declared that the program of syntax was to discover and understand that set of free parameters, and the underlying framework they modify. This was a compelling argument which produced beautiful theories, but it was built on a faulty assumption: perfectly precise language learning is impossible, but it is also unnecessary. (Additionally, some languages, notably Pirahã, violate core assumptions the accumulated theory had concluded were universals embedded in the language submodule/framework.)
The theory which superseded it (and is now ‘advancing one funeral at a time’) is an approximate theory: it is impossible to learn any syntax precisely from finite evidence, but arbitrarily good approximation is possible. Every English speaker has an ‘idiolect’, the hyper-specific dialect that only they speak and understand, and it differs slightly, in both vocabulary and syntax, from everyone else’s. No two humans speak the same language, nor have they ever, but this is fine because the languages we do speak are close enough to be mutually intelligible. (And of course GPT-3 now has its own idiolect, though its grasp of what words actually mean is severely lacking.)
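A minimal sketch of what ‘arbitrarily good approximation’ means here (my own illustration; the ‘grammar’ is reduced to a single hypothetical parameter): estimate a speaker’s rate of using some construction from n observed sentences, and the estimation error shrinks roughly like 1/√n, never reaching zero but getting as small as you care to demand.

```python
# Sketch: a speaker's idiolect, reduced to one hypothetical parameter
# (how often they use some construction).  Finite evidence never pins
# the parameter down exactly, but more evidence approximates it
# arbitrarily well.
import random

TRUE_RATE = 0.173  # the speaker's actual (unknown-to-the-learner) rate

def estimate(n_sentences: int, rng: random.Random) -> float:
    uses = sum(rng.random() < TRUE_RATE for _ in range(n_sentences))
    return uses / n_sentences

rng = random.Random(0)
for n in (100, 10_000, 1_000_000):
    est = estimate(n, rng)
    print(f"n={n:>9}: estimate={est:.4f}, error={abs(est - TRUE_RATE):.4f}")
# The error shrinks roughly like 1/sqrt(n): never exactly zero, but as
# small as you like given enough sentences.
```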
The analogy is hopefully clear: we have no need for an innate assumption of space. My concept of space and yours are not going to match, but they will be close enough that we can communicate intelligibly and reason from the same premises to the same conclusions. It is of course possible that we have some built-in assumptions, but it is not necessary, and we should treat it as an Ockham violation unless we find that there are notions of space we cannot learn even when they are manifestly better at describing our reality. Experimentally, I would say we have very strong evidence that space is not innate: watching babies learn to interpret their sensorium, we can see that they have to learn that distance, angle, and shape exist, and that these are properties shared between sight and touch.
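As a hedged sketch of how spatial structure can be recovered from experience with no innate coordinate system (my construction, not a claim about infant cognition): classical multidimensional scaling reconstructs a layout of points, up to rotation and translation, from nothing but their pairwise distances, i.e. from the kind of relational data a sensorium provides.

```python
# Sketch: recover a 2-D spatial layout purely from pairwise distances
# (classical multidimensional scaling), with no built-in coordinates.
import numpy as np

rng = np.random.default_rng(0)
true_points = rng.uniform(-1, 1, size=(6, 2))   # hidden 2-D layout
D = np.linalg.norm(true_points[:, None] - true_points[None, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
B = -0.5 * J @ (D ** 2) @ J                     # double-centered Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:2]             # two dominant dimensions
recovered = eigvecs[:, top] * np.sqrt(eigvals[top])

# The recovered layout reproduces the original distances, so the spatial
# structure was learnable from relational experience alone.
D_rec = np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1)
print("max distance error:", np.abs(D - D_rec).max())
```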
I expect that the same can be done for time, the self, and probably other aspects mentioned here. We can learn things approximately, without any a priori assumptions beyond the basic assumption that induction is valid, i.e. that things that appear true in our memories are more likely to appear true in our ongoing experience than things that appear false in our memories. (I have tried to make that definition time-free, and I think I have succeeded.) To establish that this applies to time, I would start by examining how babies learn object permanence, since they seem to be an example of minds that do not yet have an assumption of time. Similarly for the self, via the mirror test and the video/memory test.
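One possible way to write that time-free induction assumption down (my formalization, offered only as a sketch): writing $M$ for “A appears true in my memories” and $E$ for “A appears true in my ongoing experience”, the assumption is that for any proposition $A$,

$$\Pr(E \mid M) > \Pr(E \mid \neg M),$$

with no reference to one coming before or after the other.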