If you can refer to an individual structure informally, then either there is a language that allows finitely describing it, or the ability to refer to that structure is an illusion and in fact you are only referring to some bigger collection of things (a property) of which the object you talk about is an element. If you can’t refer to a structure, then you don’t need plausibility for it.
Can’t you give an example using a situation where Bayesian updating is non-problematic, and just show how we might use your idea for the prior with standard decision theory?
This is only helpful if something works with tricky mathematical structures, and in all such cases that something seems to be preference. For example, if you’d prefer to make decisions that are (likely!) consistent with a given theory (that make it hold), then it helps if your decision and that theory are expressed in the same setting (language), and you can make decisions under logical uncertainty if you use the universal prior on statements. Normally, decision theories don’t consider such cases, so I’m not sure how to relate the two. Introducing observations will probably be a mistake too.
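(A toy sketch of what “the universal prior on statements” could look like in the simplest possible setting, offered only as an illustration: weight each statement by the length of its description in the chosen language, and use the resulting weights to take an expectation over an unresolved logical question when scoring actions. The statements, description lengths, and utilities below are invented for the example, not anything proposed in the thread.)

```python
# Illustration only: a length-weighted prior over statements in a fixed
# language, used to score actions under logical uncertainty. All names,
# descriptions, and utilities here are made up for the example.

def length_prior(descriptions):
    """Weight each statement by 2**(-description length), then normalize."""
    weights = {s: 2.0 ** (-len(d)) for s, d in descriptions.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Two mutually exclusive statements with hypothetical shortest descriptions.
descriptions = {
    "conjecture_true": "C",        # shorter description, higher prior weight
    "conjecture_false": "not(C)",  # longer description, lower prior weight
}
prior = length_prior(descriptions)

# Hypothetical utilities of each action, depending on which statement holds.
utility = {
    ("act", "conjecture_true"): 10.0,
    ("act", "conjecture_false"): -3.0,
    ("pass", "conjecture_true"): 0.0,
    ("pass", "conjecture_false"): 0.0,
}

def expected_utility(action):
    """Expectation of utility over the logical uncertainty encoded in the prior."""
    return sum(prior[s] * utility[(action, s)] for s in prior)

best_action = max(["act", "pass"], key=expected_utility)
print(best_action, {a: expected_utility(a) for a in ["act", "pass"]})
```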
either there is a language that allows finitely describing it
But if you fix a language L for your universal prior, then there will be a more powerful metalanguage L’ that allows finitely describing some structure, which can’t be finitely described in the base language, right? So don’t we still have the problem of the universal prior not really being universal?
I can’t parse the second part of your response. Will keep trying...
But if you fix a language L for your universal prior, then there will be a more powerful metalanguage L’ that allows finitely describing some structure, which can’t be finitely described in the base language, right? So don’t we still have the problem of the universal prior not really being universal?
The base language can still talk about all structures, but sometimes it won’t be able to point at a specific structure, only at a class containing it. You only need a language expressive enough to describe everything preference refers to, and no more. (This seems to be the correct solution to the ontology problem: describe preference as being about mathematical structures (more generally, concepts/theories), and ignore the question of the nature of reality.)
(Clarified the second part of the previous comment a bit.)
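(A small concrete illustration of the point that a description expressible in a weak language picks out a class of structures rather than an individual one; the two toy structures and the property below are invented for the example.)

```python
# Illustration only: a property ("description") that several distinct
# structures satisfy, so a language that can only state this property
# picks out a class of structures, not a specific one.

# Two directed graphs on vertices {0, 1, 2}, given as edge sets.
three_cycle = {(0, 1), (1, 2), (2, 0)}   # a 3-cycle
self_loops  = {(0, 0), (1, 1), (2, 2)}   # a self-loop at every vertex

def every_vertex_has_out_degree_one(edges, vertices=range(3)):
    """The 'description': each vertex has exactly one outgoing edge."""
    return all(sum(1 for (a, _) in edges if a == v) == 1 for v in vertices)

# Both (non-isomorphic) structures satisfy the description...
assert every_vertex_has_out_degree_one(three_cycle)
assert every_vertex_has_out_degree_one(self_loops)

# ...so the description alone cannot single out either of them.
assert three_cycle != self_loops
```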
You only need a language expressive enough to describe everything preference refers to, and no more.
Why do you think that any logical language (of the sort we’re currently familiar with) is sufficiently expressive for this purpose?
This seems to be the correct solution to the ontology problem: describe preference as being about mathematical structures (more generally, concepts/theories), and ignore the question of the nature of reality.
I’m not sure. One way to think about it is whether the question “what is the right prior?” is more like “what is the right decision theory?” or more like “what is the right utility function?” In “What Are Probabilities, Anyway?” I essentially said that I lean towards the latter, but I’m highly uncertain.
ETA: And sometimes I suspect even “what is the right utility function?” is really more like “what is the right decision theory?” than we currently believe. In other words there is objective morality after all, but we’re currently just too stupid or philosophically incompetent to figure out what it is.
Why do you think that any logical language (of the sort we’re currently familiar with) is sufficiently expressive for this purpose?
The general idea seems right. Even if the existing languages are inadequate in general, they at least seem adequate for a full-featured prototype: figure out decision theory (and hence the notion of preference) in terms of standard logic, then move on as necessary to extend expressive power. This should stop at some point, since this exercise in formality is aimed at the construction of a program.
I’m not sure. One way to think about it is whether the question “what is the right prior?” is more like “what is the right decision theory?” or more like “what is the right utility function?” In “What Are Probabilities, Anyway?” I essentially said that I lean towards the latter, but I’m highly uncertain.
I don’t clearly see the distinction you’re making, so let me describe how I see it. Some design choices in constructing FAI would certainly be specific to our minds (values), but the main assumption of my approach to FAI is exactly that a large portion of the design choices in FAI can be specified as a natural category in human brains, something we can point a simple mirror at and say “there!”, with the mirror doing most of the work in determining what goes into the FAI. I call the automated design choices “preference”, and the mirror (the theory of the mirror) “decision theory”, with the slot “notion of preference” that is to be filled in automatically. So there is no question of which one of “decision theory” and “preference” is “essential”; both play a role. The worry is about the necessary size of the manually designed “decision theory” part, and whether it’s humanly possible to construct it.