The LDSL series provides quite a few everyday examples, but for some reason you aren’t satisfied with those. Difficult examples require that you’re good at something, so I might not be able to find an example for you.
tailcalled
I don’t really care what you think.
… can you pick some topic that you are good at instead of focusing on AI? That would probably make the examples more informative.
“How do we interpret the inner workings of neural networks?” is not a puzzle unless you make it more concrete: for instance, an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.
Wouldn’t it be more impressive if I could point you to a solution to a puzzle you’ve been stuck on than if I presented my own puzzle and gave you the solution to that?
I’m not saying you’re dismissing me because I call myself anti-reductionist, I’m saying you’re dismissing me because I am an anti-reductionist.
I don’t think you’re using the actual arguments I presented in the LDSL series to evaluate my position.
I’m dismissing you because you can’t give me examples of where your theory has been concretely useful!
If you don’t have any puzzles within Economics/Sociology/Biology/Evolution/Psychology/AI/Ecology where a more holistic theory would be useful, then it’s not clear why I should talk to you.
You praise someone who wants to do agent-based models, but agent-based models are a reductionistic approach to the field of complexity science, so this sure seems to prove my point. (I mean, approximately all of the non-reductionistic approaches to the field of complexity science are bad too.)
More specifically, my position is anti-reductionist, and rationalist-empiricist-reductionists dismiss anti-reductionists as cranks. As long as you are trying to model whether I am that and then dismiss me if you find I am, it is a waste of time to try to communicate my position to you.
Thing is, just from the conclusions it won’t be obvious that the meta-level theory is better. The improvement can primarily be understood in the context of the virtues of the meta-level theory.
But that would probe the power of the arguments whereas really I’m trying to probe the obviousness of the claims.
I can think of reasons why you’d like to know what theories would be smart to make using this framework, e.g. so you can make those theories instead of bothering to learn the framework. However, that’s not a reason it would be good for me to share it with you, since I think that’d just distract you from the point of my theory.
This may well be true (though I think not), but what is your argument for not even linking to your original posts?
I don’t know of anyone who seems to have understood the original posts, so I kinda doubt people can understand the point of them. Plus often what I’m writing about is a couple of steps removed from the original posts.
Or how often you don’t explain yourself even in completely unrelated subjects?
Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.
Why?
It’s mainly good for deciding what phenomena to make narrow theories about.
For me though, what would get me much more on board with your thoughts is actual examples of you using these ideas to model things nobody else can model (mathematically!) in as broad a spectrum of fields as you claim. That, or a much more compact & streamlined argument.
I think this is the crux. To me, after understanding these ideas, it’s retroactively obvious that they are modelling all sorts of phenomena. My best guess is that the reason you don’t see it is that you don’t see the phenomena that are failing to be modelled by conventional methods (or at least don’t understand how those phenomena relate to the bird’s-eye perspective), so you don’t realize what new thing is missing. And I can’t easily cure this kind of cluelessness with examples, because my theories aren’t necessary if you just consider a single very narrow and homogeneous phenomenon, since then you can just make a purpose-built theory for that.
The details are in the book. I’m mainly writing the OP to inform clueless progressives who might’ve dismissed Ayn Rand as a right-wing misogynist that, despite this, they might still find her book insightful.
Recently I’ve been starting to think things could go many other ways than my predictions above suggest. So it’s probably safer to say that the futurist/rationalist predictions are all wrong than that any particular prediction I can make is right.
I’m still mostly optimistic though.
The thing about slop effects is that my updates (which I’ve attempted to describe, e.g. here: https://www.lesswrong.com/s/gEvTvhr8hNRrdHC62 ) make huge fractions of LessWrong look like slop to me. Some of the increase in vagueposting is basically lazy probing for whether rationalists will get the problem if it’s framed in different ways than the original longform.
This post has the table example. That’s probably the most important of all the examples.
That’s accounting, not statistics.
AFAIK epidemiologists usually measure particular diseases and focus their models on those, whereas LDSL would operate more across all species of germs at once.
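To make the contrast concrete, here’s a toy sketch (my own illustration, not anything from the LDSL series; the species count and the sigma=3 lognormal tail are made-up assumptions): if per-species disease burden is heavy-tailed, the cross-species aggregate is dominated by a handful of species, which a model focused on one pre-chosen disease never surfaces.

```python
# Toy sketch: heavy-tailed (lognormal) per-species burdens vs. a
# narrow per-disease view. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_species = 10_000
# Hypothetical burden per germ species; sigma=3 gives a very heavy tail.
burden = rng.lognormal(mean=0.0, sigma=3.0, size=n_species)

total = burden.sum()
# Share of the total carried by the 10 largest species.
top10_share = np.sort(burden)[-10:].sum() / total
print(f"top 10 of {n_species} species carry {top10_share:.1%} of total burden")

# A narrow, per-disease model studies one pre-chosen species in isolation,
# and says little about where the aggregate burden actually sits.
focal_share = burden[0] / total
print(f"an arbitrarily chosen focal species carries {focal_share:.4%}")
```

Under these (made-up) parameters the top few species carry most of the total, so studying each disease separately misses the aggregate structure that a cross-species view makes visible.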
There is basically no competition. You just keep treating the narrow domain-specific models as if they count as competition, when they really don’t, because they focus on something different than mine does.