This parallels a discussion I’ve had numerous times in the field of computer games. I’ve had any number of artists / scripters / managers say that what a computer game needs is not a realistic physics engine, but a cinematic physics engine. They don’t want it to be right, they want it to be pretty.
But you’ll find that “cinematic style” isn’t consistent, and if you start from that basis you won’t be able to make boring, everyday events look realistic; you’ll have to add special-case patch upon patch and never get it right in the end. The cinematic stuff will look right, but nothing else will.
If you start with a rigidly correct physics engine (or at least, correct within the current state of the art), you’ll find it MUCH easier to layer cinematic effects on top when asked for. It’s usually far simpler than the other way around.
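To make the layering idea concrete, here’s a minimal sketch (in Python, with invented names and numbers, not any real engine’s API): run the physically plausible step first, then apply the cinematic exaggeration as a separate, optional pass on top of it.

```python
GRAVITY = -9.81  # m/s^2

def physics_step(pos, vel, dt, restitution=0.8):
    """Physically plausible step: gravity, then an energy-losing bounce at the floor (y = 0)."""
    vel += GRAVITY * dt
    pos += vel * dt
    if pos < 0.0:
        pos = 0.0
        vel = -vel * restitution
    return pos, vel

def cinematic_layer(pos, vel, exaggeration=1.5):
    """Hypothetical post-process: punch up rebounds for drama.

    Because it runs *after* the correct step, ordinary everyday motion is
    untouched, and the effect can be tuned or switched off per shot."""
    if pos == 0.0 and vel > 0.0:   # we just bounced this frame
        vel *= exaggeration        # punchier rebound than physics allows
    return pos, vel

# Usage: everyday motion stays believable; the flourish is a separate knob.
pos, vel = 5.0, 0.0
for _ in range(300):
    pos, vel = physics_step(pos, vel, dt=1 / 60)
    pos, vel = cinematic_layer(pos, vel)
```

Doing it the other way around means the “corrections” live inside the solver, and every ordinary interaction inherits them.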
In an analogous way, I find that rationality makes it far easier for one to achieve one’s goals, EVEN WHEN SAID GOALS ARE NON-RATIONAL. Now, that may mean that the rational thing to do in some cases is to lie to people about your beliefs, or to present yourself in an unnatural way. If you end up being uncomfortable with that, then you need to reassess what, exactly, your goals are, and what you are willing to do to achieve them. This may not be easy, but it’s far simpler than going the route of ignorance and emotionally driven actions and then trying to put your life back together when you don’t end up where you thought you would.
You’ll need to clarify what you mean by “non-rational goals”.
Yes, I suppose I should. By a non-rational goal I meant a goal that was not necessarily to my benefit, or to the benefit of the world: a goal with a negative net sum worth. Things like poisoning a reservoir or marrying someone who will make your life miserable.
You decided to try achieving that “non-rational” goal, so it must be to your benefit (at least, you must believe so).
An example that I usually give at this point is as follows. Is it physically possible that in the next 30 seconds I’ll open the window and jump out? Can I do it? Since I don’t want to do it, I won’t do it, and therefore it cannot happen in reality. The concept of trying to do something you’ll never want to do is not in reality either.
Yes, exactly. The fact that you think it’s to your benefit, but it isn’t, is the very essence of what I mean by a non-rational goal.
That might actually be the main cost of rationality. You may have goals that will hurt you if you actually achieve them, and by not being rational, you manage to not achieve those goals, making your life better. Perhaps, in fact, people avoid rationality because they don’t really want to achieve those goals, they just think they want to.
There’s an Amanda Palmer song where the last line is “I don’t want to be the person that I want to be.”
Of course, if you become rational enough, you may be able to untangle those confused goals and conflicting desires. There’s a dangerous middle ground, though, where you may get just better at hurting yourself.
“Not to my benefit” is ambiguous; I assume you mean working against other goals, like happiness or other people not dying. But since optimizing for one thing means not optimizing for others, every goal has this property relative to every other (for an ideal agent). Still, the concept seems very useful; any thoughts on how to formalize it?
I don’t really have any ideas other than the “negative net sum” worth I mentioned above, but then that just raises the question of what metric one is using to measure worth.
This is a plausible claim, but do you have concrete details, proposed mechanisms, or examples from your own or others’ lives to back it up? “I find that rationality makes it far easier” is a promising-sounding claim, and it’d be nice to know the causes of your belief.
Hmm. This is a simple question that seems difficult to articulate an answer to. I think the heart of my argument is that it is very difficult to achieve any goal without planning, and planning (to be effective) relies upon a true and consistent set of beliefs and logical inferences from them. This is pretty much the definition of rationality.
Now, it’s not the case that the opposite is random activity which one hopes will bring about the correct outcome. To be driven by emotions, seat-of-the-pants decisions and gut instincts is to allow an evolutionarily derived decision-making process to run your life. It’s not a completely faulty process, but it did not evolve for the kinds of situations modern people find themselves in, so in practice it’s not hard to do better by applying rational principles.
As I understand, computer animation (as in Pixar) has built-in capabilities for the physically impossible. For example, there’s no constraint in the software that solid bodies have to have constant volume—when Ratatouille bounces around, he’s changing volume all the time for extra expressiveness and dramatic effect. In that way, “cinematic” reality is simpler than realistic reality—though of course it takes more artistry on the part of the animator to make it look good.
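A rough illustration of what dropping that constraint buys you (Python, purely illustrative, and nothing to do with Pixar’s actual tools): a physically honest squash has to spread the body sideways so volume stays constant, while a cartoon squash just scales one axis and lets the volume change.

```python
def physical_squash(scale_y):
    """Squash vertically but preserve volume: sx * sy * sz == 1."""
    scale_xz = (1.0 / scale_y) ** 0.5   # compensate by spreading sideways
    return scale_xz, scale_y, scale_xz

def cinematic_squash(scale_y):
    """Squash vertically and leave the other axes alone; volume changes freely."""
    return 1.0, scale_y, 1.0

print(physical_squash(0.5))   # (~1.414, 0.5, ~1.414) -- volume unchanged
print(cinematic_squash(0.5))  # (1.0, 0.5, 1.0)       -- volume halved for effect
```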
That isn’t technically impossible. ;)
Ratatouille is not a character, it’s a food. The rat’s name is Remy.
Dang, forgot that.