Okay, my definition of sane is essentially: rational enough to take actions that generally work towards your goals and to create goals that are effective ways to satisfy your terminal values. It’s a rather high bar. Suicide bombers do not achieve their goals; cultists have had their cognitive machinery hijacked to serve someone else’s goals instead of their own. The reason I think this would be okay in aggregate is the psychological unity of mankind: we’re mostly pretty similar, and there are remarkably few evil mutants. Being pretty similar, most people’s goals would be acceptable to me. I disagree with some things China does, for example, but I find their overwhelming competence makes up for it in the aggregate wellbeing of their populace.
gjm gives some good examples of heroic responsibility, but I understand the term slightly differently. Heroic responsibility is to have found a thing that you have decided is important, generally by reasoned cost/benefit analysis, and then to take responsibility for getting it done regardless of what life throws your way. It may be an easy task or a hard task, but it must be an important task. The basic idea is that you don’t stop when you feel like you tried: if your first attempt doesn’t work, you do more research and come up with a new strategy. If your second plan doesn’t work because of unfair forces, you take those unfair forces into account and come up with another plan. If that still doesn’t work you try harder again, and you keep going until you either achieve the goal, it becomes clear that you cannot achieve it, or the amount of effort you would have to put into the problem becomes significantly greater than the size of the benefit you expect (a rough version of this stopping rule is sketched in code below).
For example, the benefit of FAI is humanity’s continued existence; there is essentially no amount of effort one person could put in that could be too much. To use the example of Eliezer in this thread, the benefit of a person being happier and more effective for months each year is also large, much larger than the time it takes to research SAD and come up with some creative solutions.
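Taken literally, that stopping rule has the shape of a simple loop. Here is a minimal sketch in Python; every name in it (pursue_goal, attempt, revise_plan, estimate_remaining_cost, and so on) is a hypothetical placeholder for illustration, not anything proposed in the comments themselves.

```python
# Hypothetical sketch of the stopping rule described above. All names are
# illustrative placeholders; the callbacks are assumed to be supplied by
# the caller.

def pursue_goal(goal, expected_benefit, attempt, revise_plan,
                estimate_remaining_cost, margin=1.5):
    """Keep replanning until the goal is achieved, shown to be impossible,
    or the projected effort significantly exceeds the expected benefit."""
    effort_spent = 0.0
    failures = []
    plan = revise_plan(goal, failures)
    while True:
        result = attempt(plan)              # carry out the current plan
        effort_spent += result.cost
        if result.succeeded:
            return "achieved"
        if result.provably_impossible:
            return "cannot achieve"
        failures.append(result)             # record the unfair forces at work
        projected = effort_spent + estimate_remaining_cost(goal, failures)
        if projected > margin * expected_benefit:
            return "effort outweighs benefit"
        plan = revise_plan(goal, failures)  # new strategy, informed by failure
```

The only thing the sketch is meant to show is that “I tried” never appears as a terminating condition.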
Really? Last time I checked, there is now a Caliphate in what is still nominally Iraq and Syria.

Not quite. A collection of semi-local militias who managed to piss off just about everyone does not a caliphate make.
P.S. Though as a comment on the grandparent post, some suicide bombers certainly achieve their goals (and that’s even ignoring the obvious goal of dying a martyr for the cause).
But not enough for “everyone” to mount an effective campaign to destroy them.

Achieved almost entirely by fighting through normal means, guns and such, so I hardly see the relevance. Suicide bombing kills a vanishingly small number of people. IEDs are an actual threat.
Their original goal as rebels was to remove a central government, and now they’re fighting a war of genocide against other rebel factions. I wonder how they would have responded if you’d told them at the start that a short while later they’d be slaughtering fellow Muslims in direct opposition to their holy book.
rational enough to take actions that generally work towards your goals and to create goals that are effective ways to satisfy your terminal values. It’s a rather high bar.
The definition you give sounds like a pretty low bar to me. The fact that you’re calling the bar high means that there are implied but unstated things around this definition—can you be more explicit? “Generally work towards your goals” looks to me like what 90% of the population is doing...
but I understand the term slightly differently
Is it basically persistence/stubbornness/bloodymindedness, then?
Persistence is a good word for it, plus a sense of making it work even if the world is unfair and the odds are stacked against you. No sense of having fought the good fight and lost: if you failed, and there were things you could possibly have done beforehand, general strategies that would have been effective even if you did not know what was coming, then that is your own responsibility. It is not, I think, a particularly healthy way of looking at most things. It can only really be useful as a mindset for things that really matter.
can you be more explicit?
Ah, sorry, I insufficiently unpacked “effective ways to satisfy terminal values”. The hidden complexity was in “effectively”. By effectively I meant in an efficient and >75% optimal manner. Many people do not know their own terminal values. Most people also don’t know what makes a human happy, which is often different from what a human wants. Of those that do know their values, few have effective plans to satisfy them. Looking back on it now, this is quite a large inferential distance behind the innocuous-looking word ‘sane’. I shall try to improve on that in the future.

Is there an implication that someone or something does know? That strikes me as awfully paternalistic.
It’s a statement of fact, not a political agenda. Neuroscientists know more about people’s brains than normal people do, as a result of spending years and decades studying the subject.

Huh? Neuroscientists know my terminal values better than I do because they studied brains?

Sorry, that’s nonsense.
Not yours specifically, but the general average across humanity. lukeprog wrote up a good summary of the factors correlated with happiness, which you’ve probably read, as well as an attempt to discern the causes. Not that happiness is the be-all and end-all of terminal values, but it certainly shows how little the average person knows about what they would actually be happy with vs what they think they’d be happy with. I believe that small sub-sequence on the science of winning at life contains far more than the average person knows on the subject, or else people wouldn’t give such terrible advice.
Aren’t you making the assumption that the average applies to everyone? It does not. There is a rather wide spread and pretending that a single average value represents it well enough is unwarranted.
There are certainly things biologically hardwired into human brains, but not all of them are terminal values, and for the things that are (e.g. survival) you don’t need a neurobiologist to point that out. Frankly, I am at a loss to see what neurobiologists can say about terminal values. It’s like asking Intel chip engineers what a piece of software really does.
how little the average person knows about what they would actually be happy with
I don’t know about that. Do you have evidence? If a person’s ideas about her happiness diverge from the average ones, I would by default assume that she’s different from the average, not that she is wrong.