I… think you are using the word “rational” in a radically different way than how it’s used on Less Wrong. As far as I’m concerned, the term does not even apply in this context; “cake-baking is a rational process” seems like gibberish, akin to “cake-baking is a blue process” or “cake-baking is a triangular process”.
Are you, perhaps, referring to the degree to which an activity or skill draws upon conscious, as opposed to unconscious, knowledge? (Or, relatedly but distinctly, to declarative vs. procedural knowledge?) But if so, then that really has very little to do with rationality (as the term is used on Less Wrong and in related spaces).
I’d like to see a reply that focuses on “arbitrary” instead of “irrational” (from the phrase, “Attraction, humor, joy and love are very often irrational and arbitrary”), or maybe there is a better word still, considering the standup example.
Comedy seems like a fruitful domain to explore this frame, since there is no apparent criterion to optimize for that isn’t fundamentally “arbitrary”: jokes age poorly and don’t translate well to other cultures or contexts. They rely on surprise to be funny, but also on predictability to be legible as jokes. Jokes are necessarily both constrained by their formal properties and escaping them.
“I bet we can do better” can’t be the domain of optimization alone; it has to come equally through indifference/the arbitrary. It’s a dialectic.
Yes, rationality is (mostly) value/preference-agnostic, as I said earlier. Optimization is always optimization with respect to a goal. This is quite a basic idea, and has been discussed many, many, many times on Less Wrong.
I think (but am not sure) that treesurgency is replying about a somewhat different point, wherein jokes exist in an interesting space where properly optimizing them involves understanding them through a lens of arbitrariness. (noting that optimization != rationality)
Where, yeah, you can describe the formula of how to optimize a joke (which includes accounting for both predictability and unpredictability in an anti-inductive fashion). But… there’s something like, to tell a good joke, I can imagine it turning out to be the case that you need to at least have access to modalities of thinking that are (as-implemented-in-humans) rooted in arbitrariness.
(Perhaps more generally – yes, all of this is technically optimization (in the schema Sarah articulates in the OP), but sometimes, to get certain kinds of creativity, human psychology demands indulging in arbitrariness. Which is not the same thing as irrationality.)
(I’m not sure this is true, nor that it’s what treesurgency meant, but it seemed an idea worth considering, and while I’m pretty sure it’s been discussed on LessWrong, I don’t think it’s been addressed through the lens saraconstantin has put forth here)
Are you suggesting that optimization and arbitrariness are somehow at odds? That seems wrong. There can exist multiple optima, such that the choice among them is arbitrary (and if the domain is an anti-inductive one, as humor is, then optimization can be a continuous or iterative process of arbitrarily choosing from among multiple available options, such that any one of them is a “correct”, i.e. optimal, choice).
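To make the “multiple optima” point concrete, here is a toy sketch (purely illustrative; the candidates and scores are made up, not anything from the OP). Optimization selects for the best score, and when several options tie, picking among them is arbitrary without the process being any less an optimization:

```python
import random

# Toy landscape: several candidate "joke setups" with a made-up score.
candidates = {"setup A": 7, "setup B": 7, "setup C": 4, "setup D": 7}

# Optimization step: find the best score and all candidates that achieve it.
best_score = max(candidates.values())
optima = [c for c, score in candidates.items() if score == best_score]

# Any of these is a "correct" (optimal) choice; the selection among them
# is arbitrary, yet the overall process is still optimization.
choice = random.choice(optima)
print(f"Optima: {optima}; arbitrarily chose: {choice}")
```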
I’m (attempting to) respond through the framework that Sarah put forth, not necessarily because I think it makes most sense ultimately but because in this thread I’m entering a state where I consider the lens as fully as I can.
In that framework, as I understand it, you can have freedom to optimize, or freedom to be arbitrary, and (potentially? not specified in the post?) the freedom to have both, but having both is indeed somewhat contradictory. It’s not impossible but it’s harder.
In particular, freedom for arbitrariness is not just “there are multiple optimal things, and you can arbitrarily pick between them.” It’s the freedom to actively make bad choices according to all your criteria.
And while technically you can argue that this is secretly just another form of optimization, in some humans the psychological motions being made are very different.
Ah, I see, thanks. Yes, I do find the framework given in the OP somewhat odd, and I hadn’t realized you were answering from that perspective. My comments do not really apply in that case, I guess.
I guess what I was reacting to were the inklings of a B.F. Skinner-ish behaviourist attack on individual autonomy and free will.
Anybody who adheres to that needs to read Karl Popper, and then throw their gnosticism in the trash where it belongs. Terrified of uncertainty? Too bad. It isn’t going away, no matter how much you or “we” “optimize”.
The more that mistake theorists proclaim themselves as wiser and “criticize democracy...because it gives too much power to the average person”, the more conflict theorists (extremists, Trump voters, etc...) they will nurture.