I guess it depends on what you count as “concrete”, but sure, I can give four benefits.
1. Alief, belief, desire, and all the rest posit a lot of ontological complexity, in the sense of gears or epicycles. They may let you produce accurate predictions for the things you care about, but at the cost of being strictly less likely to be accurate in general: every part of a model has to hold for the model as a whole to hold, and since the probabilities of the parts multiply, a model with more parts is a priori less likely to produce accurate predictions in all cases. Another way to put this is that the more parts your model has, the more likely it is that any accuracy you achieve comes from fitting the model to the data rather than from generating a model that fits the data. And since the more parsimonious theory of axia posits less, it is more likely to accurately predict reality (“be true”), and so any philosophical work done with the concept is more likely to remain accurate when new evidence appears.
2. Understanding oneself is confusing if there seem to be competing kinds of things that lead to making decisions. The existence of aliefs and beliefs, for example, requires not only a way of understanding how aliefs and beliefs each combine amongst themselves (we might call these the alief calculus and the belief calculus), but also some way of understanding how aliefs and beliefs interact. Again, as an appeal to parsimony, it’s less complex if both are actually the same kind of thing and you simply have conflicting axia. This might be annoying if you were hoping for a very powerful alief/belief calculus with proper functions, for example, but it is no problem if you accept a calculus of relations that need not even be transitive.
(As an aside, this points to an important aspect of AI safety research that I think is often overlooked: most efforts now focus on things with utility functions and the like, the reasoning being that if we can’t even solve the more constrained case of utility functions, how are we going to handle the more general case of preference relations? See the sketch after this list for why relations are strictly more general.)
3. Following on from (2), it is much easier to be tranquil and experience fluidity if you can understand your relationship (“you” here being a perceived persistent subject) to axia in general, rather than your relationship to each kind of thing we might break off from axia. It’s the difference between addressing many special cases and addressing everything at once.
4. Thinking in terms of axia lets you think of yourself as integrated rather than as made up of parts, and this better reflects how we understand the world to work. That is, aliefs, beliefs, etc. have firm boundaries that require additional ontology to integrate with the other parts of yourself. Axia eliminate the need to understand in detail how such integrations work while still letting you proceed as if you did. Think of how topology lets you do work even when you don’t understand the details of the space you are analyzing, and of how category theory lets you get by without understanding much about the things you are working with at all.
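To make the aside in (2) concrete, here is a minimal Python sketch (every name in it is mine, invented for illustration, not part of the proposal) of why preference relations are strictly more general than utility functions: a bare relation can cycle, and no utility function can represent a cycle, because any utility function forces transitivity.

```python
# A minimal sketch: preferences as a bare relation rather than a utility function.
# Any utility function u forces transitivity: u(a) > u(b) and u(b) > u(c)
# imply u(a) > u(c). A bare relation carries no such constraint.

# Hypothetical agent that prefers rock to scissors, scissors to paper,
# and paper to rock: a perfectly definite, but intransitive, relation.
prefers = {
    ("rock", "scissors"),
    ("scissors", "paper"),
    ("paper", "rock"),
}

def is_transitive(relation):
    """Check whether (a, b) and (b, c) in the relation always imply (a, c)."""
    return all(
        (a, d) in relation
        for (a, b) in relation
        for (c, d) in relation
        if b == c
    )

print(is_transitive(prefers))  # False: no utility function can represent this.

# By contrast, any ranking induced by a utility function is transitive:
utility = {"rock": 3, "scissors": 2, "paper": 1}
induced = {(x, y) for x in utility for y in utility if utility[x] > utility[y]}
print(is_transitive(induced))  # True
```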
I’m not sure I’ve understood what you’re proposing. Let me sketch what it sounds like and ask a few questions, and then maybe you can tell me how I’ve misunderstood everything :-).
So, you have things called axia. I guess either (1) a belief is an axion, and so is an alief, and so is a memory of something, and so is a desire, and so on; or else, (2) belief is an axion, and so is alief, and so is memory, and so is desire; that is, each of these phenomena considered as a whole is an axion.
I don’t think #2 is consistent with what you’re saying because that means keeping these notions of belief and alief and so on and adding a broader category on top of them, and it sounds like you don’t want the narrower categories at all. So I’m guessing you intend #1. So now my belief that many millions of people live in London is an axion; so is my alief that if I ignore something bothersome that I need to do it will go away after a while; so is my recollection of baking chocolate chip cookies a few days ago; if I were hungry right now, so would be my desire for some food; I’m guessing you would also consider my preference for fewer people to get killed in Middle Eastern conflicts to be an axion, and perhaps also my perception of a computer monitor in front of me right now, and probably a bunch of other kinds of thing.
Question: what actually is the definition of “axion”? How do I tell whether a given (kind of) thing is one?
OK, so now you want to lump all these things together and … do what with them? I mean, if you just want to say that these are all things that happen in human thought processes then yeah, I agree, but you can say that just as well if you classify them as beliefs, desires, etc. Treat them all exactly the same? Surely not: the consequences for me of thinking there is a hungry tiger in the room are very different from those of wishing there were one, for instance. So we have these things that are diverse but have some common features, and this is equally true whether you give them different names like “belief” and “desire” or call them all “axia”. I’m not really seeing much practical difference, and the things you’ve said in response to Conor’s question don’t help me much. Specifically:
Your first point (more detailed models are more likely to overfit) seems like a fully general argument against distinguishing things from other things, and unsurprisingly I think that as it stands it’s just wrong. It is not true that more parsimonious theories are more likely to predict accurately. It’s probably true that more parsimonious and equally explanatory theories are more likely to predict accurately, but you’ve given no reason to suppose that lumping the different kinds of “axia” together doesn’t lose explanatory power, and if in fact what you want to do is call ’em all “axia” but still distinguish when that’s necessary to get better predictions (e.g., of how I will act when I contemplate the possibility of a hungry tiger in the room in a particular way) then your theory is no longer more parsimonious.
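For what it’s worth, the standard Bayesian form of this point makes the “equally explanatory” caveat explicit; the following is the textbook identity, not anything specific to this exchange:

```latex
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\qquad
P(H_1 \wedge \dots \wedge H_n) = \prod_{i=1}^{n} P(H_i)
\quad \text{(independent parts)}
```

The prior of a conjunction shrinks as parts are added, which is the parsimony penalty; but if lumping categories together also lowers the likelihood P(D | H), i.e. loses explanatory power, the posterior can fall even as the prior rises. Parsimony wins only when the likelihoods are comparable.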
Your second point (it’s confusing to think that there are competing kinds of things that lead to making decisions) seems like nonsense to me. (Which may just indicate that I haven’t understood it right.) Our decisions are affected by (e.g.) our beliefs and our desires; beliefs and desires are not the same; if that’s confusing then very well, it’s confusing. Pretending that the distinctions aren’t there won’t make them go away, and if I don’t understand something then I want to be aware of my confusion.
Your third point seems more or less the same as the second.
Your fourth point (it’s better to think of ourselves as integrated) is fair enough, but once again that doesn’t mean abandoning distinctions. My left arm and my spleen are both part of my body, they’re both made up of cells, they have lots in common, but none the less it is generally preferable to distinguish arms from spleens and super-general terms like “body part” are not generally better than more specific terms like “arm”.
So I’m still not seeing the benefits of preferring “axia” to “aliefs, beliefs, desires, etc.” for most purposes. Perhaps we can try an actually concrete example? Suppose there is a thing I need to do, which is scary and bothersome and I don’t like thinking about it, so I put it off. I want to understand this situation. I could say “I believe that I need to do this, but I alieve that if I pay attention to it I will get hurt and that if I ignore it it will go away”. What would you have me think instead, and why would it be better? (One possibility: you want me to classify all those things as axia and then attend to more detailed specifics of each. If so, I’d like to understand why that’s better than classifying them as aliefs/beliefs and then attending to the more detailed specifics.)
First, thanks for your detailed comments. This kind of direct engagement with the ideas as stated helps me a lot in figuring out just what the heck it is I’m trying to communicate!
Question: what actually is the definition of “axion”? How do I tell whether a given (kind of) thing is one?
A quick note first: “axia” is actually the singular, and I guess the plural in English should be either “axies” or “axias”, but I share your intuition that it sounds like a plural, so my intent was to use “axia” as a mass noun. This would hardly be the first time an Anglophone abused Ancient Greek, and my notion of “correct” usage is primarily based on Wiktionary.
Axia is information that resides within a subject when we draw a subject-object distinction, as opposed to evidence (I’ll replace this with a Greek word later ;-)), which is the information that resides in the object being experienced. This gets a little tricky because, for conscious subjects, some axia may also be evidence (when the subject becomes the object of its own experience) and evidence becomes axia (that’s the whole point of updating, and is the nature of experience). So axia is information “at rest” inside a subject, to be used as priors during experience.
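One way to read “information at rest used as priors” is as ordinary Bayesian conditioning. The sketch below is my illustration of that reading, with all numbers and names invented for the example:

```python
# A minimal sketch of "evidence becoming axia", reading axia as the prior a
# subject brings to an experience. Names and numbers are illustrative only.

def update(axia_prior: float, p_evidence_if_true: float,
           p_evidence_if_false: float) -> float:
    """Bayes' rule: fold observed evidence into the subject's resting information."""
    numerator = p_evidence_if_true * axia_prior
    denominator = numerator + p_evidence_if_false * (1.0 - axia_prior)
    return numerator / denominator

# The subject starts with axia "at rest": credence 0.3 that it will rain.
axia = 0.3
# An experience occurs: dark clouds, four times likelier if rain is coming.
axia = update(axia, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(axia)  # ~0.63: the evidence now rests inside the subject as axia.
```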
the consequences for me of thinking there is a hungry tiger in the room are very different from those of wishing there were one, for instance. So we have these things that are diverse but have some common features, and this is equally true whether you give them different names like “belief” and “desire” or call them all “axia”. I’m not really seeing much practical difference,
To me the point is that beliefs, desires, and the rest share a common structure that lets us see the differences between them as differences of content rather than differences of kind. That is, beliefs are axia whose content makes claims about how the world is, and desires are axia whose content makes claims about how we would like the world to be. That I can describe subclasses of axia in this way obviously implies that their content is rich enough for us to identify patterns within it and talk about categories that match those patterns, but the important shift is in thinking of beliefs, aliefs, desires, etc. not as separate kinds of things but as different expressions of the same thing.
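Here is a toy sketch of “differences of content rather than differences of kind”, assuming (my assumption, not necessarily the proposal’s) that the relevant pattern in the content is something like which way the claim points:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Axia:
    """One kind of thing; 'belief', 'desire', etc. are patterns in its content."""
    claim: str      # what the information is about
    points_at: str  # "how-the-world-is" or "how-we-want-it-to-be"

def looks_like_belief(a: Axia) -> bool:
    # A "belief" is just an axia whose content claims how the world is...
    return a.points_at == "how-the-world-is"

def looks_like_desire(a: Axia) -> bool:
    # ...and a "desire" one whose content claims how we'd like it to be.
    return a.points_at == "how-we-want-it-to-be"

london = Axia("many millions of people live in London", "how-the-world-is")
peace = Axia("fewer people get killed in conflicts", "how-we-want-it-to-be")
print(looks_like_belief(london), looks_like_desire(peace))  # True True
```

The categories here are predicates over content, not separate types; that is the whole shift the paragraph above describes.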
Maybe that seems uninteresting, but it goes a long way toward settling how much we need to assume about the world in order to do philosophical work and say useful things about it: we can be mistaken about what we really mean to point at when we say “belief”, “desire”, etc., but we are less likely to be mistaken when we make the target we point at larger.
Perhaps we can try an actually concrete example? Suppose there is a thing I need to do, which is scary and bothersome and I don’t like thinking about it, so I put it off. I want to understand this situation. I could say “I believe that I need to do this, but I alieve that if I pay attention to it I will get hurt and that if I ignore it it will go away”. What would you have me think instead, and why would it be better? (One possibility: you want me to classify all those things as axia and then attend to more detailed specifics of each. If so, I’d like to understand why that’s better than classifying them as aliefs/beliefs and then attending to the more detailed specifics.)
You seem to have already anticipated what I’m going to say, but I’ll say it anyway for clarity. If all these things are axia, then what you have is not a disagreement between what you believe and what you alieve, but straight-up contradictory axia. The resolution is then not a matter of aligning belief and alief, or of reweighting their importance in how you decide things, but of synthesizing the contradictory axia. Thus I might reflect on why I at once think I need to do this thing, think that attending to it will hurt, and think that the hurt can be avoided by ignoring it. These claims now all stand on equal footing to be understood, each likely contributing something toward a complete understanding and, ultimately, the integration of axia that had previously been left un-unified within you-as-subject.
The advantage is that you remove artificial boundaries in your ontology that may make it implicitly difficult to conceive of these axia being integrated, and you work instead with a general process of axia synthesis that can be trained and reused in many cases, rather than only between axia we can identify as “belief” and “alief”.
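As a purely illustrative sketch of how the synthesis step could be framed uniformly for the procrastination example (nothing here is prescribed by the proposal, and all names are mine):

```python
# The three claims from the procrastination example, represented uniformly
# rather than pre-sorted into "belief" vs "alief".
axia = [
    "I need to do this thing",
    "if I pay attention to this thing I will get hurt",
    "if I ignore this thing it will go away",
]

def synthesis_questions(claims: list[str]) -> list[str]:
    """Frame each pair of claims as one question inviting integration,
    working on the shared subject matter directly instead of first asking
    which calculus (belief or alief) each claim belongs to."""
    return [
        f"How can it be that both '{a}' and '{b}'?"
        for i, a in enumerate(claims)
        for b in claims[i + 1:]
    ]

for q in synthesis_questions(axia):
    print(q)
```

The point of the sketch is only that the same pairing step applies to every claim on equal footing, which is what makes the process reusable beyond the belief/alief case.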