We can keep seeking the perfect worldview forever, and we’ll never find one. The answer to how to make the best choice every time. The answer to moral dilemmas. The answer to social issues, personal issues, well-being issues. No worldview will be able to output the best answer in every circumstance.
Sounds like a skill issue.
I’m reminded of a pattern:
Someone picks a questionable ontology for modeling biological organisms/neural nets—for concreteness, let’s say they try to represent some system as a decision tree.
Lo and behold, this poor choice of ontology doesn’t work very well; the modeler requires a huge amount of complexity to decently represent the real-world system in their poorly-chosen ontology. For instance, maybe they need a ridiculously large decision tree or random forest to represent a neural net to decent precision.
The modeler concludes that the real-world system is hopelessly complicated (i.e., fractally complex), and no human-interpretable model will ever capture it to reasonable precision.
… and in this situation, my response is “It’s not hopelessly complex, that’s just what it looks like when you choose the ontology without doing the work to discover the ontology”.
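A toy illustration of this ontology mismatch (my own example, not from the comment above): n-bit parity is trivial in a circuit-style ontology (a chain of XORs, size linear in n), but any decision tree that branches on one bit per node must query every bit on every root-to-leaf path, so a correct tree has 2**n leaves.

```python
from itertools import product


def parity(bits):
    """Parity of a bit tuple: 1 if an odd number of bits are set."""
    return sum(bits) % 2


def xor_chain_size(n: int) -> int:
    """XOR gates needed to compute n-bit parity as a chain: linear in n."""
    return n - 1


def parity_tree_leaves(n: int) -> int:
    """Leaves in a minimal decision tree computing n-bit parity.

    Flipping any single bit flips the answer, so no path can skip a
    bit: the tree is complete, with 2**n leaves.
    """
    return 2 ** n


# Brute-force check of the key fact behind the 2**n count: for every
# input, flipping any one bit changes the parity.
n = 4
for bits in product((0, 1), repeat=n):
    for i in range(n):
        flipped = list(bits)
        flipped[i] ^= 1
        assert parity(tuple(flipped)) != parity(bits)

for n in (4, 8, 16):
    print(n, xor_chain_size(n), parity_tree_leaves(n))
```

Same function, two ontologies: one representation grows linearly, the other exponentially. The blow-up is a fact about the chosen representation, not about the function.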
There is a generalized version of this pattern, beyond just the “you don’t get to choose the ontology” problem:
Someone latches on to a particular strategy to solve some problem, or to solve problems in general, without doing the work to discover a strategy which works well.
Lo and behold, the strategy does not work.
The person concludes that the real world is hopelessly complex/intractable/ever-changing, and no human will ever be able to solve the problem or to solve problems in general.
My generalized response is: it’s not impossible, you just need to actually do the work to figure it out properly.
(Buddhism seems generally mostly unhelpful and often antihelpful, but) What you say here is very much not giving the problem its due. Our problems are not Cartesian—we care about ourselves and each other, and are practically involved with ourselves and each other; and ourselves and each other are diagonalizey, self-createy things. So yes, a huge range of questions can be answered, but there will always be questions that you can’t answer. I would guess, furthermore, that in a relevant sense there will always be deep / central / important / salient / meaningful questions that aren’t fully satisfactorily answered; but that’s less clear.
Can you say more on what you think is un- or anti-helpful in Buddhism?
It says to avoid suffering by dismantling your motives. Some people act on that advice and then don’t try to do things and therefore don’t do things. Also so far no one has pointed out to me someone who’s done something I’d recognize as good and impressive, and who credibly attributes some of that outcome to Buddhism. (Which is a high bar; what other cherished systems wouldn’t reach that bar? But people make wild claims about Buddhism.)
Worth noting that this is more true about some strands of Buddhism than others. I think most true for Theravada, least true for some Western strands such as Pragmatic Dharma; I believe Mahayana and Tibetan Buddhism are somewhere in between, though I’m not an expert on either. Not sure where to place Zen.
Being ignorant, I can’t respond in detail. It makes sense that there’d be variation between ideologies, and that many people would have versions that are less, or differently, bad (according to me, on this dimension). But I would also guess that, if I knew more about them, I’d find deep disagreements related to motive dismantling in more strands.
For example, I’d expect many strands to incorporate something like the negation
of “Reality bites back.” or
of “Reality is (or rather, includes quite a lot of) that which, when you stop believing in it, doesn’t go away.” or
of “We live in the world beyond the reach of God.”.
As another example, I would expect most Buddhists to say that you move toward unity with God (however you want to phrase that) by in some manner becoming less {involved with / reliant on / constituted by / enthralled by / …} symbolic experience/reasoning, but I would fairly strongly negate this, and say that you can only constitute God via much more symbolic experience/reasoning.
Some yes, though the strands that I personally like the most lean strongly into those statements. The interpretation of Buddhism that makes the most sense to me sees much of the aim of practice as first becoming aware of, and then dropping, various mental mechanisms that cause motivated reasoning and denial of what’s actually true.
That can happen because your choice of ontology was bad, but it can also be the case that representing the real-world system to “decent” precision in any ontology requires a ridiculously large model. Concretely, I expect this is true of e.g. human language: for the Hutter Prize, I don’t expect it to be possible to get a lossless compression ratio better than 0.08 on enwik9, no matter what ontology you choose.
It would be nice if we had a better way of distinguishing between “intrinsically complex domain” and “skill issue” than “have a bunch of people dedicate years of their lives to trying a bunch of different approaches” though.
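For concreteness on what a compression ratio measures, here is a minimal sketch using Python's stdlib zlib (a far weaker compressor than Hutter Prize entries; the numbers it produces say nothing about enwik9). Highly patterned data compresses to a tiny fraction of its size, while pseudo-random data is essentially incompressible:

```python
import hashlib
import zlib


def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower is better."""
    return len(zlib.compress(data, level=9)) / len(data)


# Highly patterned "text": compresses extremely well.
patterned = b"the quick brown fox jumps over the lazy dog. " * 2000

# Deterministic pseudo-random bytes: essentially incompressible.
random_like = b"".join(
    hashlib.sha256(str(i).encode()).digest() for i in range(3000)
)

print(round(compression_ratio(patterned), 3))   # far below 1
print(round(compression_ratio(random_like), 3)) # near (or just above) 1
```

Real English text lands somewhere between these extremes, and the best Hutter Prize entries do considerably better than general-purpose compressors; the 0.08 figure above is a bet that no choice of ontology will do much better than that on enwik9.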
This does seem kind of correct to me?
Maybe you could see the fixed points that OP is pointing towards as priors in the search process for frames.
Like, your search is determined by your priors, which are learnt through your upbringing. The problem is that they’re often maladaptive and misleading. Therefore, working through these priors and generating new ones is a bit like recovering from overfitting, or something similar.
Another nice thing about meditation is that it sharpens your mind’s perception, which makes your new priors better. It also makes you less dependent on attractor states you may have fallen into before, since you become less emotionally dependent on past behaviour. (There’s obviously more complexity here.) (I’m referring to dependent origination, for you meditators out there.)
It’s like pruning the bad data from your dataset and retraining your model: you’re basically guaranteed to find better ontologies that way (or that’s the hope, at least).
Hm, if by “discovering” you mean
Dropping all fixed priors
Making direct contact with reality (which is without any ontology)
And then deep insight emerges
And then after-the-fact you construct an ontology that is most beneficial based on your discovery
Then I’m on board with that
And yet I still claim that ontology is insufficient, imperfect, and not actually gonna work in the end.
‘these practices grant unmediated access to reality’ sounds like a metaphysical claim. The Buddha’s take on his system’s relevance to metaphysics seems pretty consistently deflationary to me.
I don’t know how else to phrase it, but I would like to not contradict interdependent origination, while still pointing toward what happens when all views are dropped and insight becomes possible.