We can keep seeking the perfect worldview forever, and we’ll never find one. The answer to how to make the best choice every time. The answer to moral dilemmas. The answer to social issues, personal issues, well-being issues. No worldview will be able to output the best answer in every circumstance.
Sounds like a skill issue.
I’m reminded of a pattern:
Someone picks a questionable ontology for modeling biological organisms/neural nets—for concreteness, let’s say they try to represent some system as a decision tree.
Lo and behold, this poor choice of ontology doesn’t work very well; the modeler requires a huge amount of complexity to decently represent the real-world system in their poorly-chosen ontology. For instance, maybe they need a ridiculously large decision tree or random forest to represent a neural net to decent precision (a toy sketch after this list illustrates the effect).
The modeler concludes that the real-world system is hopelessly complicated (i.e. fractal complexity), and no human-interpretable model will ever capture it to reasonable precision.
… and in this situation, my response is “It’s not hopelessly complex, that’s just what it looks like when you choose the ontology without doing the work to discover the ontology”.
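To make the decision-tree example concrete, here is a toy sketch (mine, not from the thread, assuming sklearn is available): train a small neural net, then fit decision trees of increasing size to mimic its input-output behavior and watch how many leaves decent precision demands.

```python
# Hypothetical illustration, not from the thread: approximate a small
# neural net with decision trees and observe the size/precision tradeoff.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20_000, 4))

# A small "ground truth" network: the system the modeler is trying to capture.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X, np.sin(X).sum(axis=1))
y = net.predict(X)  # the net's input-output behavior is what we try to model

# Fit decision trees of increasing size to mimic the net.
for max_leaves in (16, 256, 4096):
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves, random_state=0)
    tree.fit(X, y)
    mse = float(np.mean((tree.predict(X) - y) ** 2))
    print(f"{max_leaves:>5} leaves -> train MSE {mse:.4f}")
```

The point of the sketch is not that trees are useless; it is that when error only shrinks by throwing ever more leaves at the problem, that is evidence about the chosen ontology, not necessarily about the territory.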
There is a generalized version of this pattern, beyond just the “you don’t get to choose the ontology” problem:
Someone latches on to a particular strategy to solve some problem, or to solve problems in general, without doing the work to discover a strategy which works well.
Lo and behold, the strategy does not work.
The person concludes that the real world is hopelessly complex/intractable/ever-changing, and no human will ever be able to solve the problem or to solve problems in general.
My generalized response is: it’s not impossible, you just need to actually do the work to figure it out properly.
(Buddhism seems generally mostly unhelpful and often antihelpful, but) what you say here is very much not giving the problem its due. Our problems are not Cartesian: we care about ourselves and each other, and are practically involved with ourselves and each other; and ourselves and each other are diagonalizey, self-createy things. So yes, a huge range of questions can be answered, but there will always be questions that you can’t answer. I would guess furthermore that in a relevant sense there will always be deep / central / important / salient / meaningful questions that aren’t fully satisfactorily answered; but that’s less clear.
That can happen because your choice of ontology was bad, but it can also be the case that representing the real-world system with “decent” precision in any ontology requires a ridiculously large model. Concretely, I expect this is true of human language: for the Hutter Prize, I don’t expect it to be possible to get a lossless compression ratio better than 0.08 on enwik9 no matter what ontology you choose (a sketch of what that ratio measures follows below).
It would be nice if we had a better way of distinguishing between “intrinsically complex domain” and “skill issue” than “have a bunch of people dedicate years of their lives to trying a bunch of different approaches”, though.
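For readers unfamiliar with the benchmark, a quick gloss (my framing, not the commenter’s): the ratio in question is compressed size divided by original size, so 0.08 on enwik9 (the first 10^9 bytes of an English Wikipedia dump) means losslessly squeezing a gigabyte of text under roughly 80 MB. A minimal sketch of measuring that ratio with a general-purpose compressor, assuming a local copy of the file:

```python
# Rough illustration, mine rather than from the thread: measure the
# lossless compression ratio that the Hutter Prize claim is about.
import lzma

data = open("enwik9", "rb").read()   # assumes enwik9 has been downloaded
compressed = lzma.compress(data, preset=9)  # slow on 1 GB of input
print(f"ratio = {len(compressed) / len(data):.3f}")
# General-purpose lzma typically lands well above 0.08; prize-winning
# entries use heavily specialized language models to get down to
# roughly 0.11.
```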
Hm, if by “discovering” you mean
Dropping all fixed priors
Making direct contact with reality (which is without any ontology)
And then deep insight emerges
And then after-the-fact you construct an ontology that is most beneficial based on your discovery
Then I’m on board with that
And yet I still claim that ontology is insufficient, imperfect, and not actually gonna work in the end.