Quote: An inductive AI asking what probability assignment to make on the next round is asking “Does induction work?”, and this is the question that it may answer by inductive reasoning. If you ask “Why does induction work?” then answering “Because induction works” is circular logic, and answering “Because I believe induction works” is magical thinking.
My view (IMBW) is that the inductive AI is asking the different question “Is induction a good choice of strategy for this class of problem?” Your follow-up question is “Why did you choose induction for that class of problem?” and the answer is “Because induction has proved a good choice of strategy in other, similar classes of problem, or for a significant subset of problems attempted in this class”.
Generalising, I suggest that self-optimising systems start from particulars and gradually become more general, rather than starting from generalities.
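To make this concrete, here is a minimal sketch of the idea (in Python; the class and method names are hypothetical, invented for illustration). The agent picks a strategy for a problem class on the strength of its empirical track record, falling back on evidence from similar classes when the class is new, rather than appealing to any general claim that induction works:

```python
from collections import defaultdict

class StrategySelector:
    """Hypothetical sketch: choose a strategy for a problem class on its
    empirical track record, not on a general claim that it works."""

    def __init__(self, strategies):
        self.strategies = strategies
        # success / attempt counts per (problem_class, strategy) pair
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def record_outcome(self, problem_class, strategy, succeeded):
        self.attempts[(problem_class, strategy)] += 1
        if succeeded:
            self.successes[(problem_class, strategy)] += 1

    def success_rate(self, problem_class, strategy):
        tried = self.attempts[(problem_class, strategy)]
        return self.successes[(problem_class, strategy)] / tried if tried else None

    def choose(self, problem_class, similar_classes=()):
        # Prefer evidence from this class; fall back to similar classes,
        # i.e. "induction has proved a good choice of strategy in other,
        # similar classes of problem".
        for cls in (problem_class, *similar_classes):
            rates = {s: self.success_rate(cls, s) for s in self.strategies}
            rates = {s: r for s, r in rates.items() if r is not None}
            if rates:
                return max(rates, key=rates.get)
        return self.strategies[0]  # no track record anywhere: default

selector = StrategySelector(["induction", "deduction"])
selector.record_outcome("sequence-prediction", "induction", True)
selector.record_outcome("sequence-prediction", "induction", True)
selector.record_outcome("sequence-prediction", "deduction", False)
# A new class with no history borrows evidence from a similar one:
print(selector.choose("weather-forecasting",
                      similar_classes=["sequence-prediction"]))  # induction
```

On this picture, “Why induction?” gets an empirical and non-circular answer: a record of outcomes in this and similar problem classes.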