Somehow, he has to populate the objective function whose maximum is what he will rationally try to do. How he ends up assigning those intrinsic values relies on methods of argument that are neither deductive nor observational.
In your opinion, does this relate in any way to the “lack of free will” arguments, like those made by Sam Harris?
The whole: I can ask you what your favourite movie is, and you will think of one. You will even try to justify your choice if asked, but ultimately you had no control over what movies popped into your head.
This is a good example of needing to watch my words: the same sentence, interpreted from the point of view of no-free-will, could mean the complex function of biochemical determinism playing out, resulting in what the human organism actually does.
What I meant was the utility function of consequentialism: for each possible goal x, you have some preference of how good that is f(x), and so what you’re trying to do is to maximize f(x) over x. It’s presupposing that you have some ability to choose x1 instead of x2, although there are some compatibilist views of free will and determinism that blur the line.
My point in that paragraph, though, is that you might have perfectly rational machinery for optimizing f, but you also have to choose f. The way you choose f can’t be by optimizing over f. The reasons you have for choosing f also can’t be directly derived from scientific observations about the physical world, because (paraphrasing David Hume), an “is” does not imply an “ought.” So the way we choose f, whatever that is, requires some kind of argumentation or feeling that is not derivable from the scientific method or Bayes’ theorem.
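To make that structure concrete, here’s a minimal sketch (every goal, name, and number below is made up for illustration): the maximization step is purely mechanical once f is fixed, but nothing in the optimizer tells you which f to hand it.

```python
# Toy sketch: a "rational" chooser that maximizes a utility function f over
# candidate goals x. The optimizer is mechanical; f itself must come from outside.

def choose_goal(candidate_goals, f):
    """Return the goal x that maximizes f(x) -- pure optimization machinery."""
    return max(candidate_goals, key=f)

# Two different value functions over the same goals; nothing in choose_goal
# tells you which one to pass in -- that choice is not itself an optimization.
goals = ["write a novel", "earn more money", "volunteer locally"]
f_hedonic = {"write a novel": 0.4, "earn more money": 0.9, "volunteer locally": 0.3}.get
f_altruistic = {"write a novel": 0.2, "earn more money": 0.5, "volunteer locally": 0.95}.get

print(choose_goal(goals, f_hedonic))     # "earn more money"
print(choose_goal(goals, f_altruistic))  # "volunteer locally"
```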
Yeah, if you use religious or faith-based terminology, it might trigger negative signals (downvotes).
Though whether that is because people disagree with the information you meant to convey, or because the statements themselves are genuinely more ambiguous, would be harder to distinguish.
Some kinds of careful reasoning processes vibe with the community, and IMO yours is that kind: questioning each step separately on its merits, being sufficiently skeptical of premises leading to conclusions.
Anyway, back to the subject of f and inferring its features.
We are definitely having trouble drawing f out of the human brain in a systematic, falsifiable way.
Whether it is physically possible to infer it, or its features, or how it is constructed, i.e. whether it is possible at all, seems a little uninteresting to me.
Humans are perfectly capable of pulling made-up functions out of their ass. I kind of feel like all the gold will go to the first group of people who come up with processes for constructing f in coherent, predictable ways.
Such that different initial conditions, when iterated through the process, produce a predictably similar f.
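Just to pin down what I mean by “predictably similar f,” here’s a throwaway toy model, with every name and number invented and no claim about how brains actually do it: an iterative process that keeps nudging an initial preference vector toward a fixed shared signal, so very different starting points end up at nearly the same f.

```python
# Entirely made-up toy process: represent f as a vector of preference weights
# over a few goals, and repeatedly mix the current weights with a fixed
# "shared experience" signal, then renormalize. Because each step is a
# contraction toward the shared signal, very different initial conditions
# end up with nearly the same f.

import random

GOALS = ["novelty", "security", "status", "care"]
SHARED_SIGNAL = [0.1, 0.4, 0.2, 0.3]  # arbitrary stand-in for common experience

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def value_formation_step(f, mix=0.2):
    """One iteration: pull current preferences a bit toward the shared signal."""
    return normalize([(1 - mix) * fi + mix * si for fi, si in zip(f, SHARED_SIGNAL)])

def run_process(seed, steps=50):
    rng = random.Random(seed)
    f = normalize([rng.random() for _ in GOALS])  # a different initial condition per seed
    for _ in range(steps):
        f = value_formation_step(f)
    return f

# Different starting points, predictably similar end states:
for seed in (0, 1, 2):
    print([round(w, 3) for w in run_process(seed)])
```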
We might then try to observe such a process throughout people’s lifetimes, and sort of guess that a version of the same process is going on in the human brain.
But nothing about how that will develop is readily apparent to me. This is just my own imagination producing what seems like a plausible way forward.