I don’t think your dialectical reversion to randomista logic makes much sense, considering we can’t exactly run randomized controlled trials to answer any of the major questions of the social sciences. If you want to promote social science research, I think the best thing you could do is collect consistent statistics over long periods of time. You can learn a lot about modern societies just by learning how national accounts work and looking back at them in many different ways. Alternatively, building agent-based simulations lets you test, in flexible ways, how different types of behavior, both heterogeneous and homogeneous, might affect macroscopic social outcomes. These are the techniques I use, and they’ve proven very helpful.
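To make the agent-based point concrete, here’s a minimal sketch of the kind of thing I mean: a Granovetter-style threshold cascade where the only thing varied is whether agents’ adoption thresholds are homogeneous or heterogeneous. The model, parameters, and names here are purely illustrative, not any particular published calibration.

```python
import random

def run_cascade(thresholds, seed_fraction=0.05, steps=50):
    """Granovetter-style threshold model: an agent adopts a behavior once
    the overall fraction of adopters reaches its personal threshold."""
    n = len(thresholds)
    adopted = [random.random() < seed_fraction for _ in range(n)]  # initial seeds
    for _ in range(steps):
        frac = sum(adopted) / n
        adopted = [a or (t <= frac) for a, t in zip(adopted, thresholds)]
    return sum(adopted) / n

random.seed(0)
n = 1000
homogeneous = [0.5] * n                              # identical thresholds
heterogeneous = [random.random() for _ in range(n)]  # thresholds spread over [0, 1]

print("homogeneous:  ", run_cascade(homogeneous))    # stalls near the 5% seed
print("heterogeneous:", run_cascade(heterogeneous))  # cascades toward ~1.0
```

Both populations have the same mean threshold (0.5), but the homogeneous one never moves past the initial seed while the heterogeneous one cascades toward near-universal adoption. That’s exactly the kind of macroscopic difference you can only see by varying the behavioral distribution rather than just its average.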
If there’s one other thing you’re missing, it’s this: epistemology isn’t something you can rely on others for, even by triangulating between different viewpoints. You always have to do your own epistemology, because every way of knowing you encounter in society is part of someone’s ideological framework trying to adversarially draw you into it.
Thank you for the substantive response. I do think there are a few misunderstandings here about what I’m saying.
I’m not talking about world states that exist “out there” in the territory (whether those even exist is debatable anyway); I’m talking about world states that exist within the agent, compressed however they like. Within the agent, each world state they consider as a possible goal is distinguished from the others in order for it to be meaningful in some way, and the distinguishing characteristics can be decided by the agent in an arbitrary way.
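A toy sketch of what I mean by agent-internal world states (all feature names here are invented for illustration; the point is only the structure):

```python
from dataclasses import dataclass

# A world state as the *agent's own* compressed representation, not a
# description of the territory. The agent picks which features distinguish
# one candidate goal state from another.
@dataclass(frozen=True)
class WorldState:
    features: frozenset  # agent-chosen distinguishing characteristics

# Two candidate goal states, distinct only because the agent's compression
# keeps a feature that tells them apart.
goal_a = WorldState(frozenset({"door_open", "lights_on"}))
goal_b = WorldState(frozenset({"door_open", "lights_off"}))
assert goal_a != goal_b  # meaningful as two separate goals

# Under a coarser compression that drops the lighting feature, the same two
# states collapse into one and the distinction stops being available as a goal.
def coarsen(s):
    return WorldState(frozenset(f for f in s.features if not f.startswith("lights")))

assert coarsen(goal_a) == coarsen(goal_b)
```

Which features to keep is entirely up to the agent, which is all I mean by the distinguishing characteristics being arbitrary.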
So when I’m talking about signs, I’m talking about signifier/signified pairs. When we’re talking about real numbers, for example, we’re talking about a signifier with two different signifieds, and therefore two different signs. I talk about exactly this issue in my last post:
As I say, most signifiers do have both an associated first-order and higher-order sign! But these are /not/ the same thing; they are not, as you claim, equivalent from an information perspective. If you know the first-order sign, there’s no reason you would automatically know the corresponding higher-order sign, and vice versa, as I show in the excerpt from my most recent blog.
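One way to make the information point concrete (this is just my own toy encoding of the structure, with an invented example for the numeral “1”):

```python
from dataclasses import dataclass

# A sign as a signifier/signified pair.
@dataclass(frozen=True)
class Sign:
    signifier: str
    signified: str

# One signifier, two signifieds -> two distinct signs.
first_order = Sign("1", "the real number one")      # about the world
higher_order = Sign("1", "the numeral '1' itself")  # about the sign system

assert first_order.signifier == higher_order.signifier
assert first_order != higher_order

# The map signifier -> sign is one-to-many, hence not invertible: holding
# the first-order sign alone does not determine the higher-order one.
by_signifier = {}
for s in (first_order, higher_order):
    by_signifier.setdefault(s.signifier, []).append(s)

assert len(by_signifier["1"]) == 2  # the signifier underdetermines the sign
```

Since recovering either sign from the shared signifier requires extra information, neither sign’s content is contained in the other’s.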
My argument hinges specifically on whether it’s possible for an agent to have final goals without higher-order signs: it isn’t, precisely because first-order and higher-order signs do not contain the same information.
I couldn’t name a specific direction, but what I would say is that agents of similar intelligence in similar environments will tend towards similar final goals. Otherwise, I generally agree with this post on the topic. https://unstableontology.com/2024/09/19/the-obliqueness-thesis/