Hm; there may not be a disagreement here. You seemed to be using it in a way that implied it was not determined by (or even was independent of) the prior. Was I mistaken there?
The idea was that some agents update faster than others (or indeed not at all).
If you like you can think of the agents that update relatively slowly as being confident that they are uncertain about the things they are unsure about. That confidence in their own uncertainty could indeed be represented by other priors.
That’s not “other priors”, there’s just one prior. All the probabilities in Bayes’ Rule come from the updated-to-current version of the prior.
Other prior probabilities. There is one prior set of probabilities, which is composed of many prior probabilities and probability distributions.
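For what it’s worth, here is a toy sketch of the relationship I have in mind (the two-proposition setup and the numbers are purely my illustration): a single joint prior determines each individual prior probability as a marginal.

    # One joint prior over two binary propositions, A and B
    # (the numbers are invented purely for illustration).
    joint = {
        (True,  True):  0.4,
        (True,  False): 0.2,
        (False, True):  0.1,
        (False, False): 0.3,
    }

    # The "many prior probabilities" are just marginals of that single prior.
    prior_A = sum(p for (a, _b), p in joint.items() if a)  # P(A) = 0.6
    prior_B = sum(p for (_a, b), p in joint.items() if b)  # P(B) = 0.5
    print(prior_A, prior_B)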
If you want to think about it that way, please don’t say “other priors”. That’s very confusing, because “prior” in this context refers to the whole prior, not to pieces of it (which I’m not sure how you’re detangling from each other, anyway). If we’re talking about something of the universal-prior sort, it has one prior, over its total sensory experience; I’m not clear how you’re decomposing that or what alternative model you are suggesting.
The two types of prior probability I discussed were “beliefs about the world” and “beliefs about the certainty of those beliefs”.
An agent that updates its beliefs about the world rapidly (in response to evidence) would have a low degree of certainty about those beliefs—while an agent that updates them slowly would have a high degree of certainty that those beliefs were already correct, and backed up by lots of existing evidence.
I gave an example of this already when I discussed the magician’s coin.
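For concreteness, here is a minimal sketch of that fast/slow difference, assuming the belief about the coin is modelled with conjugate Beta priors; the model and the numbers are my illustration, nothing established in the earlier discussion.

    from fractions import Fraction

    def beta_mean(alpha, beta):
        # Mean of a Beta(alpha, beta) belief about P(heads).
        return Fraction(alpha, alpha + beta)

    def update(alpha, beta, heads, tails):
        # Conjugate Bayesian update: add observed counts to the pseudo-counts.
        return alpha + heads, beta + tails

    # Two agents with different priors over the coin's bias.
    fast = (1, 1)      # Beta(1, 1): near-ignorant, so evidence moves it a lot
    slow = (100, 100)  # Beta(100, 100): confident the coin is fair, so it barely moves

    evidence = (8, 2)  # both agents see the same 8 heads and 2 tails

    for name, (a, b) in (("fast", fast), ("slow", slow)):
        a2, b2 = update(a, b, *evidence)
        print(name, float(beta_mean(a, b)), "->", float(beta_mean(a2, b2)))
    # fast 0.5 -> 0.75      (large probability delta)
    # slow 0.5 -> ~0.514    (small probability delta)

Note that both agents apply Bayes’ Rule exactly; the difference in update speed is carried entirely by the shape of the prior.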
Except these aren’t separate things. That isn’t how this sort of system works! Its beliefs about the certainty of those beliefs are determined by its beliefs about the world.
Well, everything is about the world, if materialism is true.
You don’t even seem to be trying to perform a sympathetic reading. Leave aside the quibbling about what is or isn’t “about the world”—can you at least see that in the first case updates happen quickly, and in the second case they happen slowly? “Speed” just refers to distance divided by time. Here the distance is the probability delta, and the time is simply time. So updates can happen fast and slow. Some systems update quickly, others update slowly—and others don’t update at all. This all seems fairly simple to me—what is the problem?
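To make “speed” fully concrete under the same hypothetical Beta model as in the sketch above: if time is measured in observations, the distance covered per step is the shift in the believed probability, and it shrinks as the prior’s pseudo-counts grow.

    from fractions import Fraction

    def mean_shift_per_head(alpha, beta):
        # Probability delta caused by one observed head under a Beta(alpha, beta) belief.
        before = Fraction(alpha, alpha + beta)
        after = Fraction(alpha + 1, alpha + beta + 1)
        return after - before  # equals beta / ((alpha+beta) * (alpha+beta+1))

    for n in (2, 20, 200):  # total pseudo-count of a "fair coin" prior Beta(n/2, n/2)
        print(n, float(mean_shift_per_head(n // 2, n // 2)))
    # 2   0.1666...   fast updater
    # 20  0.0238...   slower
    # 200 0.00248...  barely moves: "confident in its prior"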
Well, sure. But that statement is trivial.
Right. I really don’t think that what I am saying is controversial. The way I remember it, I talked about systems with different update speeds—and you jumped on that.
Alternatively, I could say that I went with the assumption that you were attempting to carve the relevant concepts at the joints and getting it wrong, rather than making a true statement which doesn’t even try to accomplish that.
M, sorry then. But you didn’t explain the term anywhere, so I assumed it meant what it sounded like—the original context makes it sound like you mean something separate from the prior, rather than something determined by it. If instead of talking about building agents that were “confident in their priors” and “updated them slowly” you had just spoken of “priors that result in slow updating”, I don’t think there would have been a problem. (I must admit I probably also wasn’t inclined to look for a sympathetic reading, as your other comments about the universal prior seem to be just wrong.)