In the real world, induction seems to work for some problems but not for others.
The turkey who gets fed by humans can update, every day he’s fed, on the thesis that humans are benevolent.
When he gets slaughtered at Thanksgiving, he’s out of luck.
I feel like this is more of a problem with your optimism than with induction. You should really have a hypothesis set that says “humans want me to be fed for some period of time”, and let the evidence increase your confidence in that, not just in some subset of it. After that, you can have additional hypotheses about, for example, their possible motivations, which you could update on based on whatever other data you have (e.g. you’re super-induction-turkey, so you figured out evolution). Or, more trivially, you might notice that sometimes your fellow turkeys disappear and don’t come back (if that happens). You would then predict the future based on all of these hypotheses, not just one linear trend you detected.
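A minimal sketch of that multiple-hypothesis idea in Python, as I picture it; the hypothesis names, the probabilities, and the day-300 cutoff are all invented for illustration, not anything the turkey (or anyone in this thread) actually specified:

```python
# Keep several hypotheses about why the turkey is being fed, update all of them
# on each day's evidence, and predict from the whole weighted set rather than
# from a single extrapolated trend.

# Each hypothesis gives P(fed on day t). "Benevolent" predicts feeding forever;
# the other predicts feeding only until an (invented) cutoff around Thanksgiving.
hypotheses = {
    "humans are benevolent": lambda t: 0.99,
    "humans fatten me until day 300": lambda t: 0.99 if t < 300 else 0.01,
}

# Prior weights over the hypotheses (arbitrary illustrative numbers).
posterior = {
    "humans are benevolent": 0.5,
    "humans fatten me until day 300": 0.5,
}

def update(posterior, day, fed=True):
    """One Bayes update of the hypothesis weights on a single day's observation."""
    likelihood = {h: (hypotheses[h](day) if fed else 1 - hypotheses[h](day))
                  for h in posterior}
    unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def predict_fed(posterior, day):
    """P(fed on `day`), averaged over every hypothesis still in play."""
    return sum(posterior[h] * hypotheses[h](day) for h in posterior)

# 299 days of being fed: both hypotheses explain the data equally well,
# so the weights barely move.
for day in range(299):
    posterior = update(posterior, day, fed=True)

print(posterior)                    # still ~50/50: daily feedings can't tell them apart
print(predict_fed(posterior, 300))  # ~0.5, not the ~0.99 a single extrapolated trend gives
```

The point of the sketch is just that the daily feedings raise confidence in “humans want me fed for some period of time” without discriminating between the sub-hypotheses about what happens afterwards, so the prediction for the far future stays appropriately uncertain.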
I’m not sure why, but now I want Super-induction-turkey to be the LW mascot.
If you have a method of understanding the world that works for all problems, I would love to hear it.
Acknowledging that you can’t solve them?
In what sense does that “work”?
Being able to predict the results of giving up on a problem does not imply that giving up is superior to tackling a problem that I don’t know I’ll be able to solve.
How do you know which ones are the ones you can’t solve?
So induction gives the right answer 100s of times, and then gets it wrong once. Doesn’t seem too bad a ratio.