I don’t like the idea of the words I use having definitions that I am unaware of and even after long reflection cannot figure out—not just the subtleties and edge cases, but massive central issues.
“Don’t like” in the sense of considering this an annoying standard flaw in human minds, or in the sense of thinking the claim is incorrect? Where possible, the flaw is fixed by introducing more explicit definitions and using those instead of the immensely complicated and often less useful naive concepts. For human motivation, this doesn’t seem to work particularly well.
That’s precisely why I think motivation is not really a kind of definition. There are analogies, sure, but definitions are things you can change whenever you feel like it, while motivations are not.
The definitions that you are free to introduce or change usually latch on to an otherwise motivated thing; you usually have at least some informal reason for choosing a particular definition. When you change a definition, you start talking about something else. If it’s not important (or a priori impossible to evaluate) what you will be talking about, that is, if the motivation for your definition is vague and tolerates enough arbitrariness, then it’s OK to change a definition without a clear reason for the particular change you make.
So I think the situation can be described as follows. There are motivations, which assign (typically vague) priorities to objects of study, and there are definitions, which typically specify more precisely what an object of study will be. These motivations are usually not themselves objects of study, so we don’t try to find more accurate definitions for describing them. But we can also take a particular motivation as an object of study, in which case we may try to find a definition that describes it. The goal is then to find a definition for that particular motivation, so you can’t pick an arbitrary definition that doesn’t actually describe it (i.e., this goal is not that vague).
I expect finding a description of (“definition for”) a motivation is an epistemic task, like finding a description of a particular physical device. You are not free to “change the definition” of that physical device if the goal is to describe it. (You may describe it differently, but it’s the same thing that you’ll be describing.) In this analogy, inventing new definitions corresponds to constructing new physical devices. And if you understand the original device well enough (find a “definition” for a “motivation”), you may be able to make new ones.
Mathematical definitions sure aren’t just some social construct. You pick axioms, you derive theorems. If not, you are lying about maths.
(And which axioms you pick doesn’t matter either, because maths exists independently of the physical systems computing it.)
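A toy illustration of “pick axioms, derive theorems”, as a minimal Lean 4 sketch; `Monoidish` and its fields are invented names for this example, not taken from any library:

```lean
-- Pick axioms: a binary operation with associativity and a two-sided
-- identity (an invented, deliberately informal "Monoidish" structure).
structure Monoidish (α : Type) where
  op : α → α → α
  e  : α
  assoc    : ∀ a b c, op (op a b) c = op a (op b c)
  id_left  : ∀ a, op e a = a
  id_right : ∀ a, op a e = a

-- Derive a theorem: any left identity coincides with e.  The proof
-- uses the axioms alone; nothing else about α matters.
theorem left_id_unique {α : Type} (M : Monoidish α) (e' : α)
    (hLeft : ∀ a, M.op e' a = a) : e' = M.e :=
  (M.id_right e').symm.trans (hLeft M.e)
```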
I mean, didn’t Eliezer cover this? You’re not lying if you call numbers groups and groups numbers. If you switch in the middle of a proof, sure, that’s lying, but that seems irrelevant. The definitions pick out what you’re talking about.
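A minimal Lean 4 sketch of that point; the misleading name `Grp` below is deliberate and invented for illustration:

```lean
-- Attach a perverse name to the natural numbers.  The name only
-- refers; every theorem about ℕ goes through unchanged, as long as
-- the referent isn't swapped mid-proof.
abbrev Grp := Nat

theorem grp_add_comm (a b : Grp) : a + b = b + a :=
  Nat.add_comm a b
```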
When I’m talking about morality, I’m talking about That Thing That Determines What You’re Supposed to Do, You Know, That One.
So am I.
You Know, That Part Of Your Brain That Computes That Thing That Determines What You’re Supposed to Do, Given What You Know, You Know, That One.
I don’t even remember the mindset I had when I wrote this, nor what this is all about.
Referring to a part of your brain doesn’t have the right properties when you change between different universes.
That is true, and so we refer to the medium-independent axiomatic definition.
What’s that?
That Thing That Determines What You’re Supposed to Do, Given What You Know, You Know, That One.