One thing I see as different between your perspective and (my understanding of) teleosemantics, so far:
You make a general case that values underlie beliefs.
Teleosemantics makes a specific claim that the meaning of semantic constructs (such as beliefs and messages) is pinned down by what they are trying to correspond to.
Your picture seems very compatible with, EG, the old LW claim that UDT’s probabilities are really a measure of caring—how much you care about doing well in a variety of scenarios.
Teleosemantics might fail to analyze such probabilities as beliefs at all; certainly not as beliefs about the world. (Perhaps beliefs about how important different scenarios are, where “importance” gets some further analysis...)
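As a concrete (and purely illustrative) sketch of the probabilities-as-caring reading: the scenario names, weights, and payoffs below are made up, and the only point is that the weights enter the expected-value computation exactly where probabilities would, so the computation itself does not distinguish credence from caring.

```python
# Minimal sketch: in a UDT-style evaluation, the scenario weights multiply
# utility exactly the way probabilities would, so nothing in the computation
# itself distinguishes "credence in s" from "how much I care about s".
scenarios = ["world_A", "world_B"]
weights = {"world_A": 0.7, "world_B": 0.3}   # read as P(s) or as a caring weight

def utility(policy, scenario):
    # Toy payoff table; purely illustrative.
    payoffs = {
        ("take_bet", "world_A"): 10, ("take_bet", "world_B"): -5,
        ("pass", "world_A"): 0,      ("pass", "world_B"): 0,
    }
    return payoffs[(policy, scenario)]

def value(policy):
    return sum(weights[s] * utility(policy, s) for s in scenarios)

best = max(["take_bet", "pass"], key=value)
print(best, {p: value(p) for p in ["take_bet", "pass"]})
```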
The teleosemantic picture is that epistemic accuracy is a common, instrumentally convergent subgoal; and “meaning” (in the sense of semantic content) arises precisely where this subgoal is being optimized.
That’s my guess at the biggest difference between our two pictures, anyway.
The teleosemantic picture is that epistemic accuracy is a common, instrumentally convergent subgoal; and “meaning” (in the sense of semantic content) arises precisely where this subgoal is being optimized.
I think this is exactly right. I often say things like “accurate maps are extremely useful to things like survival, so you and every other living thing have strong incentives to draw accurate maps, but this is contingent on the extent to which you care about e.g. survival”.
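As a toy illustration of that point (hypothetical numbers and setup, not anything from the discussion itself): an agent scored only on survival still does better the closer its map is to the truth, so optimizing for survival pushes toward accuracy as a side effect.

```python
# Minimal sketch: the agent is scored only on survival, never on accuracy,
# yet a more accurate map of where the hazard is yields better decisions.
import random

TRUE_HAZARD_PROB = 0.8          # true chance the hazard is on the left path

def survival_rate(believed_hazard_prob, trials=10_000):
    survived = 0
    for _ in range(trials):
        # The agent avoids the path it believes is more dangerous.
        go_left = believed_hazard_prob < 0.5
        hazard_on_left = random.random() < TRUE_HAZARD_PROB
        died = (go_left and hazard_on_left) or (not go_left and not hazard_on_left)
        survived += not died
    return survived / trials

# The agent whose map is badly wrong (0.1) walks into the hazard most of the
# time; nothing ever scores the map directly, but survival favors accuracy.
for belief in [0.1, 0.5, 0.9]:
    print(belief, round(survival_rate(belief), 3))
```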
So, to see if I have this right: the difference is that I’m trying to point at a larger phenomenon, while you mean teleosemantics to point just at the way beliefs get constrained to be useful.
So, to see if I have this right: the difference is that I’m trying to point at a larger phenomenon, while you mean teleosemantics to point just at the way beliefs get constrained to be useful.
This doesn’t sound quite right to me. Teleosemantics is a purported definition of belief. So according to the teleosemantic picture, it isn’t a belief if it’s not trying to accurately reflect something.
The additional statement I prefaced this with, that accuracy is an instrumentally convergent subgoal, was intended as an explanation of why this sort of “belief” is a common phenomenon, not as part of the definition of “belief”.
In principle, there could be a process which only optimizes accuracy and doesn’t serve any larger goal. This would still be creating and maintaining beliefs according to the definition of teleosemantics, although it would be an oddity. (How did it get there? How did a non-agentic process end up creating it?)
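A minimal sketch of what such an accuracy-only process could look like (hypothetical data; nothing here is anyone’s proposed formalism): a fitting procedure whose sole optimization target is predictive error, with the resulting estimate never used for anything further. On the teleosemantic definition, the estimate would still count as a belief, because it is being optimized to reflect something.

```python
# Minimal sketch: pure accuracy optimization with nothing downstream.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.2, 5.8, 8.1]      # roughly y = 2x

w = 0.0
for _ in range(200):
    # Gradient of mean squared prediction error; the only goal is accuracy.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad

print(round(w, 2))                   # ~2.0: the "belief" about the slope
```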