This sounds like just as much of an a priori assumption as my working assumption that it does have some bearing.
An inhabitant of an infinite universe could notice that every single thing in it is finite, but would be completely wrong in assuming that the universe they are in is finite.
Yes, induction can lead to incorrect conclusions. But this is not a very strong argument against any given induction.
You take your assumption—which is presumably not justifiable a priori—that the past causes the future, and invert it.
I change my existing model so that the future causes the past within my model? I’m not sure how to do that either. I picture flipping the direction of every arrow in my causal graph, but that doesn’t introduce any irreducible teleology; I’m still left with an ordinary causal graph when I finish.
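The point about arrow-flipping can be made concrete. The sketch below is a minimal illustration, assuming a causal graph represented as a set of directed `(cause, effect)` edges (the representation and function names are my own, purely for illustration): reversing every edge of an acyclic graph yields another acyclic graph of exactly the same kind, with no teleological machinery introduced.

```python
def reverse_edges(edges):
    """Flip the direction of every arrow in the graph."""
    return {(effect, cause) for (cause, effect) in edges}

def is_acyclic(edges):
    """Return True if the directed graph has no cycles (i.e. it is a DAG)."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    visited, on_path = set(), set()
    def visit(node):
        if node in on_path:          # back-edge found: a cycle
            return False
        if node in visited:
            return True
        visited.add(node)
        on_path.add(node)
        ok = all(visit(n) for n in succ.get(node, ()))
        on_path.discard(node)
        return ok
    return all(visit(n) for n in list(succ))

# A toy "past causes future" graph, and its reversal.
edges = {("past", "present"), ("present", "future")}
flipped = reverse_edges(edges)
# flipped is still an ordinary causal graph: same nodes, same structure,
# arrows pointing the other way -- and still acyclic.
```

Both `edges` and `flipped` pass the acyclicity check, which is the structural content of the objection: inverting the arrows leaves you holding an ordinary causal graph, not something irreducibly teleological.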
Induction only ever works, inasmuch as it works, across tokens of the same type. Parts and wholes are almost always of different types. Trying to derive properties of wholes from properties of parts is the fallacy of composition.