I really like this post. It touches on two topics that I am very interested in:
How society shapes our values (domesticates us)
and
What should we value (what is the meaning of life?)
I find the majority of discussions extremely narrow, focusing on details while rarely attempting to provide perspective. It is like doing science without a theory: lots of specific experiments performed without context or purpose.
1 Why are things the way they are and why do we value the things we value?
This is a social and psychological question. Less Wrong touches on these issues but appears focused on specific psychological studies rather than any overall perspective (I suspect this would start to touch on politics and so would not be discussed). I think our understanding of the system we are a part of significantly shapes our sense of meaning and purpose and, as a result, strongly influences our society.
I would go so far as to suggest we are psychologically incapable of pursuing goals that are inconsistent with our understanding of how the universe functions (sorry Clippy). For example, if we are selfish-gene Darwinists we will value winning and reproductive success; if we have a Confucian belief that the universe is a conflict between order and chaos we will pursue social stability and tradition. I have my own take on this for those who are interested (How we obtain our values, the meaning of life).
2 What problems do we want to solve?
It seems much easier to find problems to solve than goals to attain. A recent post about charity mentioned GiveWell. This organisation at least evaluates whether progress is made, but as far as I am aware there is no 'economics of suffering', no utilitarian (or otherwise) analysis of the relative significance of different problems. Is a destructive AI worse than global warming, or cancer, or child abuse, or obesity, or terrorism? Is there a rational means to evaluate this for a given utility function? Has anyone tried? (This is an area I'm looking into, so any links would be greatly appreciated.)
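To make concrete the kind of comparison I mean, here is a minimal sketch: give each problem a rough probability of occurring and a rough disutility if it does, then rank by expected disutility. Every number below is a made-up placeholder, not a real estimate, and the 'suffering units' are whatever your chosen utility function measures.

```python
# Toy expected-disutility comparison across problems.
# All probabilities and impact figures are placeholder assumptions,
# included only to illustrate the shape of the calculation.

problems = {
    # name: (probability this century, disutility if it happens, in arbitrary 'suffering units')
    "destructive AI":      (0.05, 1_000_000),
    "global warming":      (0.80, 50_000),
    "cancer (status quo)": (1.00, 100_000),
    "terrorism":           (0.90, 5_000),
}

def expected_disutility(p, impact):
    """Expected loss under a simple linear (risk-neutral) utility function."""
    return p * impact

ranked = sorted(problems.items(),
                key=lambda kv: expected_disutility(*kv[1]),
                reverse=True)

for name, (p, impact) in ranked:
    print(f"{name:20s} expected disutility = {expected_disutility(p, impact):,.0f}")
```

The hard part is obviously the inputs rather than the arithmetic, but once a utility function is chosen the comparison can at least be made explicit.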
3 What can we do?
Within instrumental rationality and related fields there are a lot of discussions of actions to achieve improvements in capability. Likewise for charity: lots of good causes. However, there seems to be relatively little discussion of what is likely to be achieved as a result of the action, as if any progress were justification enough to focus on it. For example, what will be the difference in quality of life if I pursue a maximally healthy lifestyle versus a typical no-exercise slacker life? In particular, do I want to die of a heart attack, or of cancer and Alzheimer's (which, given my family history, are the two ways I'm likely to go)? If we had a realistic assessment of return on investment, as well as of how psychologically likely we are to achieve things, we could focus our actions rationally.
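The same framing applies to personal decisions: discount the benefit of a lifestyle change by how psychologically likely I am to stick with it. Again, every figure below is a hypothetical placeholder.

```python
# Toy return-on-investment estimate for a lifestyle change,
# discounted by the chance of actually following through.
# Every number here is a made-up assumption.

def expected_benefit(quality_years_gained, p_follow_through, hours_per_week, years):
    """Expected quality-adjusted life years gained, net of the time invested."""
    time_cost_years = hours_per_week * 52 * years / (24 * 365)  # hours spent, converted to years
    return p_follow_through * quality_years_gained - time_cost_years

# Maximally healthy lifestyle vs. doing nothing (placeholder figures).
gain = expected_benefit(quality_years_gained=6.0,   # hoped-for gain if sustained
                        p_follow_through=0.3,       # honest guess at my adherence
                        hours_per_week=7,           # exercise and food preparation
                        years=40)
print(f"Expected net gain: {gain:.1f} quality-adjusted years")
```

The point is not the particular output but that writing down the follow-through probability makes the trade-off explicit.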
I suggest that if we know how things work, what the problems are, and what we can do about them, then we have a pretty good start on the meaning of life. I am frequently frustrated by the lack of perspective on these issues; we seem culturally conditioned to focus on action and specific theoretical points rather than trying to get a handle on it all. Of course that might be more fun, and that might be a sensible utility function. But for my own peace of mind I'd like to check there isn't an alternative.
I’m very sympathetic to your comment. I feel that there’s an emerging community of people interested in answering these questions at places like Less Wrong and GiveWell but that the discussion is very much in its infancy. The questions that you raise are fundamentally very difficult but one can still hope to make some progress on them.
I’ll say that I find the line of thinking in Nick Bostrom’s Astronomical Waste article to be a compelling justification for existential risk reduction in principle. But I’m still left with the extremely difficult question of determining what the most relevant existential risks are and what we can hope to do about them.
My own experience so far has been that it's better to take some tangible action now than to equivocate. See my 'Missed opportunities for doing well by doing good' posting.
Thank you; I also agree with your comments on your posting. I generally prefer a balance of pragmatic action with theory. In fact, I find the 'have a go' approach to theoretical understanding very useful as well. I think just roughly listing one's thoughts on a topic and then categorising them can be very revealing and really help provide perspective. I recently had a go at my priorities (utility function) and came up with the following:
To be loved
To be wise
To create things that I am proud of
To be entertained
To be respected
To be independent (ideally including being safe, relatively healthy and financially secure)
This is probably not perfect but it is something to build on (and a list I wouldn’t mind a friendly AI optimising for either).
Also, as with the positive effects mentioned in your article, I’ve found giving to charity makes it easier for me to feel love (or at least friendship) towards others and to feel more cared for in return (perhaps simply because giving to charity makes me slightly nicer towards everyone I meet).
My current focus is wisdom: I feel uncomfortable that I don't have perspective on problems in society or the structure of the economy (i.e. how my quality of life is maintained). When I mention these ideas to others, their reaction is generally to describe the problems as too hard or impossible. I think this is a very interesting form of rationality failure, because the same people would go to enormous lengths to construct a solution to a technical problem if they were told it was impossible. Why don't creative, intellectual and rational people apply their problem-solving skills to these kinds of issues? Why don't they 'have a go'?
My guess is that the "self-help" genre has such a bad name among creative/intellectual/rational people, because the quality of those who have written in it is so low, that they feel squeamish about even entertaining the thought of doing an analysis of the type you describe.
Basically, when problems are really obviously important, lots of low-quality people are attracted to them, so that when high-quality people work on them they're at risk of signaling that they're of low quality. When high-quality people work on more arcane things that are of subtle importance, there isn't the issue of being confused with hordes of low-quality people.
The dynamic described above has the very unfortunate consequence that many of the most important problems are simply not addressed.
Regarding prediction: I just heard (starts at 9:20) some claims that no method of predicting the economy is doing better than extremely crude models. Unfortunately, I haven't been able to find a citation for the “two young economists” who did the research.
However, I’m not sure that prediction is a matter of wisdom—I think of wisdom as very general principles, and prediction seems to require highly specific knowledge.
It was obvious that real estate prices couldn't go up forever, especially as more and more people were speculating in real estate, but as far as I can tell it was not at all obvious that so much of the economy was entangled in real estate speculation that a bust would have such large side effects.
Solutions to difficult technical problems became much more feasible after science had been around for a while. I'm not dead certain we even have the beginnings of an understanding of complex social systems.
Part of the difficulty of prediction is that it depends both on science and technology that haven't yet been discovered (our current world is shaped by computation having become easy while battery technology is still fairly recalcitrant) and on what people are doing—and people are making guesses about what to do in a highly chaotic situation.
Taleb is interesting for working on how to live well when only modest amounts of prediction are feasible.
Interesting points.
I suspect that predicting the economy with economics is like predicting a person's behaviour from studying their biology. My desire for wisdom is in the form of perspective: I want to know the rough landscape of the economy (like the internal workings of a body).
For example, I have little grasp of the industries contributing most to GDP, or of the taxes within my (or any other) country. In terms of government spending, this site provides a nice overview for the UK, but it is only a start. I would love to know the chain of businesses and systems that provide the products I use each day. In particular, I'm very interested in the potential for technologically supported self-sufficiency as a means of providing a robust underpinning to society. To do this effectively it's necessary to understand the systems that we depend upon.
While such understanding might not enable prediction, I think it does provide perspective on potential opportunities and threats (just as biology does). It also helps to focus on relative importance, similar to how concentrating on cash flow helps prioritise business decisions. Otherwise we risk the social equivalent of worrying about too much paper usage in office printers while entire business units aren't profitable, or of being blind to opportunities that could render many other problems irrelevant (such as easy self-sufficiency reducing the necessity for potentially problematic government infrastructure).