I think that attempting to discuss something as broad as “the basics of induction” might be problematic just because the topic is so broad. People mean a variety of different things by terms like “induction” or “inductivism” and there’s a great danger of talking past one another.
For instance, the sort of induction principle I would (tentatively) endorse doesn’t at first glance look like an induction principle at all: it’s something along the lines of “all else being equal, prefer simpler propositions”. There are lots of ways to do something along those lines, and some are better than others; I don’t claim to know the One True Best Way to do it, but I think this is the right approach. This gets you something like induction because theories in which things change gratuitously tend to be more complex. But whether you would call me an inductivist, I don’t know. I am fairly sure we don’t disagree about everything in this area, and it’s quite possible that our relevant disagreements are not best thought of as disagreements about induction, as opposed to disagreements about (say) inference or probability or explanation or simplicity that have consequences for what we think about induction.
(My super-brief answers to your questions about induction, taking “induction” for this purpose to mean “the way I think we should use empirical evidence to arrive at generalized opinions”: It’s trying to solve the problem of how you discover things about the world that go beyond direct observations. “Solve” might be too strong a word, but it addresses it by giving a procedure that, if the world behaves in regular ways, will tend to move your beliefs into better correspondence with reality as you get more evidence. (It seems, so far, as if the world does behave in regular ways, but of course I am not taking that as anything like a deductive proof that this sort of procedure is correct; that would be circular.) You do it by (1) weighting your beliefs according to complexity in some fashion and then (2) adjusting them as new evidence comes in—in one idealized version of the process you do #1 according to a “universal prior” and #2 according to Bayes’ theorem, though in practice the universal prior is uncomputable and applying Bayes in difficult cases involves way too much computation, so you need to make do with approximations and heuristics. I do not, explicitly, claim that the future resembles the past (or, rather, I kinda do claim it, but not as an axiom but as an inductive generalization arrived at by the means under discussion); I prefer simpler explanations, and ones where the future resembles the past are often simpler. For evidence to support one claim over another, it needs to be more likely when the former claim is true than when the latter is; of course this doesn’t follow merely from its being consistent with the former claim. Most evidence is consistent with most claims.)
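(The two-step procedure sketched above can be made concrete with a toy example. This is only an illustration, not anything like the actual universal prior: the hypotheses, their “complexity” numbers, and the likelihoods are all invented, and description length in bits stands in crudely for the uncomputable ideal. It also shows the last point about evidential support: here the evidence is consistent with both hypotheses and equally likely under each, so it shifts nothing, and the simplicity-weighted prior does all the work.)

```python
# Toy sketch of "weight by complexity, then update by Bayes".
# Complexities and likelihoods below are made up for illustration.

hypotheses = {
    # name: (complexity in bits, P(evidence | hypothesis))
    "sun-rises-daily":    (10, 0.99),  # simple; future resembles past
    "sun-stops-tomorrow": (25, 0.99),  # gratuitous change => more complex
}

# Step 1: prior weight proportional to 2^(-complexity), a crude
# stand-in for a universal prior, then normalize.
priors = {h: 2.0 ** -c for h, (c, _) in hypotheses.items()}
total = sum(priors.values())
priors = {h: p / total for h, p in priors.items()}

# Step 2: Bayes' theorem. The evidence (another observed sunrise) is
# consistent with BOTH hypotheses and equally likely under each, so
# mere consistency supports neither over the other; only a likelihood
# difference would move the odds.
unnorm = {h: priors[h] * like for h, (_, like) in hypotheses.items()}
z = sum(unnorm.values())
posteriors = {h: u / z for h, u in unnorm.items()}

for h in hypotheses:
    print(f"{h}: prior={priors[h]:.6f}, posterior={posteriors[h]:.6f}")
```

Because the likelihoods are equal, the posteriors come out identical to the priors: the simpler hypothesis stays heavily favored, which is the sense in which a simplicity preference delivers something like induction.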