I don’t understand what you mean here. I’m not sure what you mean by ‘true’ or ‘useful’, I guess. I’m talking about true claims in this sense.
Which one is that, and what does everybody already know it to mean?
I mean what you mean by “true”, or maybe something very similar.
By “useful” I mean “those claims that could help someone come to a decision about their actions”.
It’s what people say when they say “should” but don’t precede it with “if”. Some people on lesswrong think it means:
[you should do X] = [X maximizes this complicated function that can be computed from my brain state]
Some think it means:
[you should do X] = [X maximizes whatever complicated function is computed from my brain state]
and I think:
[you should do X] = [the statement that, if believed, would cause one to do X]
You can find that there is a bug in your brain that causes you to react to a certain belief, but you’d fix it if you noticed it was there, since you don’t think that belief should cause that action.
I could say
[the statement that, if believed by a rational agent, would cause it to do X]
but that’s circular.
But one of the points I’ve been trying to make is that it’s okay for the definition of something to be, in some sense, circular. As long as you can describe the code for a rational agent that manipulates that kind of statement.
Some things you can’t define exactly; you can only refer to them with some measure of accuracy. Physical facts are like this. Morality is like this. Rational agents don’t define morality, they respond to it: they are imperfect detectors of moral facts who would use their moral expertise to improve their own ability to detect moral facts, or to build other tools capable of that. There is nothing circular here, just a constant aspiration to reference an unreachable ideal through changeable means.
But there aren’t causal arrows pointing from morality to rational agents, are there? Just acausal/timeless arrows.
You do have to define “morality” as meaning “that thing that we’re trying to refer to with some measure of accuracy”, whereas “red” is not defined to refer to the same thing.
If you agree, I think we’re on the same page.
I think the idea of acausal/logical control captures what causality was meant to capture in more detail, and is a proper generalization of it. So I’d say that there are indeed “causal” arrows from morality to decisions of agents, to the extent the idea of “causal” dependence is used correctly and not restricted to the way we define physical laws on a certain level of detail.
Why would I define it so? It’s indeed what we are trying to refer to, but what it is exactly we cannot know.
Lost me here. We know enough about morality to say that it’s not the same thing as “red”, yes.
Sure.
Let me rephrase a bit.
“That thing, over there (which we’re trying to refer to with some measure of accuracy), point point”.
I’m defining it extensionally, except for the fact that it doesn’t physically exist.
There has to be some kind of definition or else we wouldn’t know what we were talking about, even if it’s extensional and hard to put into words.
“red” and “right” have different extensional definitions.
I suspect there is a difference between knowing things and being able to use them, neither generally implying the other.
This is true, but my claim that words have to have a (possibly extensional) definition for us to use them, and that “right” has an extensional definition, stands.
Does “whatever’s written in that book” work as the appropriate kind of “extensional definition” for this purpose? If so, I agree, that’s what I mean by “using without knowing”. (As I understand it, it’s not the right way of using the term “extensional definition”, since you are not giving examples, you are describing a procedure for interacting with the fact in question.)
It’s sort of subtle.
“Whatever’s written in the book at the location given by this formula: ”
defines a word totally in terms of other words, which I would call intensional.
“Whatever’s written in THAT book, point point”
points at the meaning, what I would call extensional.
All definitions should be circular. “The president is the Head of State” is a correct definition. “The president is Obama” is true, but not a definition.
Non-circular definitions can certainly be perfectly fine:
“A bachelor is an unmarried man.”
This style is used in math to define new concepts to simplify communication and thought.
“A bachelor is an unmarried man.”
If that is non-circular, so is [the statement that, if believed by a rational agent, would cause it to do X].
I’m quite confused. By circular do you mean analytical, or recursive? (Example of the latter: a set is something that can contain elements or other sets.)
I’m not sure what I mean.
The definition I am using is in the following category:
It may appear problematically self-referential, but it is in fact self-referential in a non-problematic manner.
Agreed?
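(As an aside: the recursive definition given as an example above, “a set is something that can contain elements or other sets”, is self-referential in exactly this non-problematic way, because every concrete instance bottoms out after finitely many steps. A minimal illustrative sketch, in Python of my own choosing, not anything from the discussion:

```python
# A benignly self-referential definition: a tree is either a
# leaf value or a list of trees. The recursion in the definition
# is harmless because any concrete instance grounds out in
# leaves after finitely many steps.

def depth(tree):
    """Nesting depth: leaves have depth 0, a list is one deeper than its deepest member."""
    if isinstance(tree, list):
        return 1 + max((depth(t) for t in tree), default=0)
    return 0  # a leaf

print(depth(3))                  # a bare element: prints 0
print(depth([1, [2, [3]], 4]))   # prints 3
```

The function is well-defined despite being defined in terms of itself, for the same reason the recursive definition of “set” is.)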
I don’t think your statement was self-referential or problematic,
or rather: [you should do X] = [the statement that, if believed, would cause one to do X if one were an ideal and completely non-akratic agent]
Correct.