It’s sort of like the difference between a programmable computer vs an arbitrary blob of matter.
This is close to what I meant: my neurons keep doing something like reinforcement learning whether or not I theoretically believe that's valid. "I in fact cannot think outside this" does address the worry about a merely rational constraint.
On the other hand, we do want AI to eventually consider other hardware, and that might even be necessary for ordinary embedded agency, since we don't fully trust our hardware even when we don't want to change it in the normal sense.
To sum up, meaning in this view is broadly more inferentialist and less correspondence-based: the meaning of a thing is more closely tied to the inferences around that thing than to how that thing corresponds to a territory.
I broadly agree with inferentialism, but I don't think that entirely addresses it. The mark of confused, rather than merely wrong, beliefs is that they don't really have a coherent use. For example, there might be a path through possible scenarios leading back to the starting point, where if at every step I adjust my reaction in a way that seems appropriate to me, I end up with a different reaction when I'm back at the start. If you tried to describe my practices here, you would just explicitly account for the framing dependence. But then it wouldn't be confused! That framing-dependent concept you described also exists, but it seems quite different from the confused one. For the confused concept it's essential that I consider it not dependent in this way. But if you try to include that in your description, by also describing the practices around my meta-beliefs about the concept, and the meta-meta-beliefs, and so on, then you'd end up also describing the process by which I recognized it as confused and revised it. And then we're back in the position of already having recognized that it's bullshit.
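As a toy rendering of that loop (notation introduced just for illustration, not anything from the original discussion): write the cycle of scenarios as $s_0, s_1, \dots, s_n = s_0$ and my reaction at step $i$ as $r_i$, where each step applies an adjustment $A_i$ that looks locally appropriate to me:

$$r_i = A_i(r_{i-1}), \qquad r_n = (A_n \circ \dots \circ A_1)(r_0) \neq r_0.$$

The bare path-dependence isn't the confusion; the confusion is that at each step I also take my current $r_i$ to be the framing-independent answer.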
When you were only going up individual meta-levels, the propositions logical induction worked with could be meaningful even if they were wrong, because they were part of processes outside the logical-induction process, and those processes were sufficient to give them truth conditions. Now you want to determine both what to believe and how those beliefs are to be used in one go, and that undermines this, because the "how beliefs are to be used" is what foundationalism kept fixed, and it is what gave the beliefs their truth conditions.
I’m not seeing that implication at all!
Well, this is a bit analogical, but I'll try to explain. I think there's a semantic issue with anthropics (indeed, under inferentialism all confusion can be expressed as a semantic issue). Things like "the probability that I will have existed if I do X now" are unclear. For example, a descriptivist way of understanding conditional probabilities is something like "the token C means conditional probability iff whenever you believe xCy = p, you would believe P(y) = p if you came to believe x". But that assumes not only that you are logically perfect, but that you are there to have beliefs and answer for them. Now most of the time it's not a problem if you're not actually there, because we can just ask what you would believe if you were there (and you somehow got oxygen and glucose despite not touching anything, and you could see without blocking photons, etc., but let's ignore that for now), even though you aren't actually. But this can be a problem in anthropic situations. Normally, when a hypothetical involves you, you can just imagine it from your perspective, and when it doesn't involve you, you can imagine you were there. But if you're trying to imagine a scenario that involves you and you can't imagine it from your perspective, because you come into existence in it, or you have a mental defect in it, or something, then you have to imagine it from the third person. So you're not really thinking about yourself; you're thinking about a copy, which may be in quite a different epistemic situation. So if you can conceptually explain how to have semantics that accounts for my making mistakes, then I think that would probably be able to account for my not being there as well (in both cases, it's just the virtuous epistemic process that's missing). And that would tell us how to have anthropic beliefs, and that would unknot the area.
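To spell out that descriptivist clause slightly more formally (this rendering is my own, introduced just for this comment): writing $\mathrm{Bel}(\cdot)$ for what you believe,

$$C \text{ means conditional probability} \iff \forall x, y, p:\ \mathrm{Bel}(x\,C\,y = p) \rightarrow \big(\text{if you came to } \mathrm{Bel}(x), \text{ you would } \mathrm{Bel}(P(y) = p)\big).$$

The consequent is a counterfactual about you as a believer, which is exactly where the assumption that you are there to have beliefs sneaks in.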