Note the construction of your hypothetical: “If I hold someone morally accountable in a ludicrous way, I end up feeling something ludicrous has happened.” Yes, you do. I think that’s an observation in favor of my general approach: the moral evaluations come first, and the analytic rationalizations come later.
It seems most people would say that you are correct not to hold such people to account, because falling is not a free choice.
No, because their moral intuitions tell them not to hold people accountable in such a situation. In algorithmic terms, that intuition no doubt includes calculations about locus of control, but it’s hardly limited to them. Free will is part of the organic whole of moral evaluation.
(People a little more current on their Korzybski could probably identify the principle involved. Non-elementalism?)
Put another way, free will is a concept that functions in a moral context, and divorcing it from that context and speaking only in terms of power, control, knowledge, and causality will quickly lead to concepts of free will ill-suited to the role they play in morality.
This is a general principle for me. The world does not come labelled. There is no astral tether from labels to the infinitely many concepts they might refer to. You properly define your concepts by properly identifying the problem you’re trying to solve and determining which concepts work to solve it.
This applies well to Jaynes. Axiomatic probability theory defines labels, concepts, and relationships, and then says “maybe these will be useful to you”. Maybe they will, and maybe they won’t. Jaynes started with the problem of building a robot that could learn and represent its degree of belief with a number. Because he started with a problem, the concepts he built were relevant to that problem, and you knew how they were relevant.
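To make the contrast concrete, here’s a minimal sketch of that starting point. The numbers and the function name (update_belief) are my own invention, not Jaynes’s construction; the sketch just illustrates representing a degree of belief as a single number and revising it by Bayes’ rule when evidence arrives:

```python
# A minimal sketch (my illustration, not Jaynes's construction) of his
# starting point: a robot that represents its degree of belief in a
# hypothesis H as one real number and revises it by Bayes' rule.

def update_belief(prior: float, likelihood: float, marginal: float) -> float:
    """Return P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Invented example numbers: the robot is 30% confident it will rain (H).
# Dark clouds (E) appear, with P(E|H) = 0.9 and P(E|not-H) = 0.2.
prior = 0.3
p_e = 0.9 * prior + 0.2 * (1 - prior)       # P(E) by total probability
posterior = update_belief(prior, 0.9, p_e)  # ~0.66
print(f"degree of belief after evidence: {posterior:.2f}")
```

The arithmetic is trivial; the point is that the representation (one number per proposition) and the update rule earn their place by serving the problem the robot was built to solve.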