I believe that people like me feel that to fully accept the importance of friendly AI research would deprive us of the things we value and need.
The idea of moral demandingness is really a separate issue. The ability to save thousands of lives through donations to third-world public health poses a similar problem for many people. Risks like nuclear war (plus nuclear winter or irrecoverable social collapse), engineered pandemics, and other non-AI existential risks already raise the stakes by many orders of magnitude (within views that weight future people additively). AI may add a few more orders of magnitude, but in terms of demandingness it is not different in kind.
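To make the orders-of-magnitude comparison concrete, here is a rough back-of-the-envelope sketch; every figure in it is an illustrative assumption rather than a claim about the actual numbers:

```python
# Rough back-of-the-envelope comparison; all figures are illustrative assumptions.
# The point: under views that weight future people additively, the lives at stake
# in an existential catastrophe dwarf what an individual donor can affect through
# global health giving, before AI even enters the picture.

lives_saved_by_a_very_large_health_donation = 1_000   # assumed order of magnitude
people_alive_today = 8 * 10**9
potential_future_people = 10**16                       # assumed; some estimates run far higher

print(people_alive_today / lives_saved_by_a_very_large_health_donation)  # ~8 million: lives at stake vs. lives a donor saves
print(potential_future_people / people_alive_today)                      # roughly another million-fold on top
```

The exact exponents do not matter for the demandingness point; any additive weighting of future people already pushes the stakes far beyond what global health giving poses.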
Most people who care a lot about third-world public health, e.g. Giving What We Can members, are not cutting out their other projects; they come to stable accommodations between their various concerns, including helping others, family, reading books, various forms of self-expression, and so forth.
If you have desires pulling you towards X and Y, just cut a deal between the components of your psychology and do both X and Y well, rather than feeling conflicted and doing neither well. See this Bostrom post, or this Yudkowsky one on the virtues of divvying up specialized effort into separate buckets.
Carl, you hit the biggest nail on the head. But I think there’s another nail there. If not for XiXiDu, then for many others. Working on fooming AI issues makes one a weirdo. Wearing a tinfoil hat would only be slightly less embarrassing. Working on environmental problems is downright normal, at least within some (comfortably large) social circles.
Back to that biggest nail—it needs another whack. AI threatens to dramatically worsen the world within our children’s lifetimes. Robin Hanson, sitting next to his son, will feel significantly less comfortable upon thinking such thoughts. That provides a powerful motive to rationalize the problem away—or to worry at it, I suppose, depending on one’s personality, but I find denial to be more popular than worrywarting.
I agree with these points. I was responding to XiXiDu’s focus in his post on the availability of time and resources for other interests.

The idea of moral demandingness is really a separate issue.
I don’t see what difference it makes whether or not you are selfish when it comes to friendly AI research. I believe that being altruistic is largely instrumental in maximizing egoistic satisfaction.
Thanks for the post by Nick Bostrom, but it only adds to the general confusion.

There seem to be dramatic problems with both probability-utility calculations and moral decision-making. Taking those problems into account makes it feel as if one might as well flip a coin to decide.
Michael Anissimov recently wrote:

For instance, you must have made decisions for your children that were more in alignment with what they would want if they were smarter. If you made judgments in alignment with their actual preferences (like wanting to eat candy all day — I don’t know your kids but I know a lot of kids would do this), they would suffer for it in the longer term.
This sounds good but seems to lead to dramatic problems. In the end it is merely an appeal to intuition without any substance.
If you don’t try to satisfy your actual preferences, what else?
In the example stated by Anissimov, what actually happens is that the parents try to satisfy their own preferences by not allowing their children to die of candy intoxication.
If we were going to disregard our current preferences and postpone having fun in favor of gathering more knowledge, then we would eventually end up as perfectly rational agents in static game-theoretic equilibria.

The problem with the whole utility-maximization heuristic is that it eventually deprives us of our human nature by reducing our complex values to mere game-theoretic models.
Part of human nature, part of what we value, is the way we like to decide. It won’t work to just point at hyperbolic discounting and say that it is time-inconsistent and therefore irrational.
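For concreteness, here is a minimal sketch, with made-up numbers, of the preference reversal that earns hyperbolic discounting its “time-inconsistent” label:

```python
# Toy illustration of hyperbolic discounting; all amounts and delays are made up.
def hyperbolic_value(amount, delay_days, k=1.0):
    """Present value under hyperbolic discounting: V = A / (1 + k * d)."""
    return amount / (1 + k * delay_days)

smaller_sooner = (60, 1)    # 60 candies, available tomorrow
larger_later   = (100, 3)   # 100 candies, available in three days

# Seen from 30 days away, the larger-later option looks better...
far = [hyperbolic_value(a, d + 30) for a, d in (smaller_sooner, larger_later)]
print(far)   # [1.875, ~2.94] -> prefer larger-later

# ...but seen from up close, the preference flips to smaller-sooner.
near = [hyperbolic_value(a, d) for a, d in (smaller_sooner, larger_later)]
print(near)  # [30.0, 25.0] -> prefer smaller-sooner
```

Under exponential discounting, moving both dates closer by the same amount rescales both values by the same factor, so the ranking could never flip; that flip is the entire basis of the “irrational” charge.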
Human preferences are always actual; we do not naturally divide our decisions into instrumental and terminal goals.

I don’t want a paperclip maximizer to burn the cosmic commons. I also don’t want to devote most of my life to mitigating that risk. This is not a binary decision; that is not how human nature seems to work.

If you try to force people into a binary decision between their actual preferences and some idealized far mode, then you cause them to act according to academic considerations rather than the complex human values they are supposed to protect.
Suppose you want to eat candies all day and are told that you can eat a lot more candies after the Singularity, if only you work hard enough right now. The problem is that there is always another Singularity that promises even more candies. At what point do you actually get to eat candies?
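As a minimal sketch of that worry (the numbers are made up), consider someone who always trades candies-now for the promise of more candies later:

```python
# Minimal sketch of the perpetual-deferral worry; all numbers are made up.
# If every promise of more candies later makes you put off eating candies now,
# then at any finite point you have eaten nothing, however large the promise grows.

promised_candies = 10
candies_eaten = 0

for year in range(100):
    promised_candies *= 2   # each year an even bigger future payoff is promised
    # the dedicated deferrer keeps postponing, so nothing is consumed this year

print(candies_eaten)        # 0 -- none of the promised utility is ever realized
print(promised_candies)     # enormous, but still only a promise
```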
But that is a rather academic problem. There is a more important problem concerning human nature, as demonstrated by extreme sports: humans care much more about living their lives according to their urges than about maximizing utility. What does it even mean to “maximize utility”? Many sportsmen and sportswomen are aware of the risks associated with their favorite activity, yet they take those risks anyway.
It seems that humans are able to assign effectively infinite utility to pursuing a certain near-mode activity.

Deliberately risking your life does not look like maximizing experience utility, since you could accumulate far more of the same or similar experiences by less dangerous means. And how would one even “maximize” terminal decision utility?
When I apply your objections to my own perspective, I find that I see the actions of mine that aren’t focused on reducing involuntary death (eating candies, playing video games, sleeping) as necessary for the actual pursuit of my larger goals.
I am a vastly inefficient engine. My productive power goes to the future, but much of it bleeds away—not as heat and friction, but as sleep and candy-eating. Those things are necessary for the engine to run, but they aren’t necessary evils. I need to do them to be happy, because a happy engine is an efficient one.
I recognized two other important points. One is that I must work daily to improve the efficiency of my engine. I stopped playing video games so I could work harder. I stopped partying so often so I could be more productive. Et cetera.
The other point is that it’s crucial to remember why I’m doing this stuff in the first place. I only care about reducing existential risk and signing up for cryonics and destroying death because of the other things I care about: eating candies, sleeping, making friends, traveling, learning, improving, laughing, dancing, drinking, moving, seeing, breathing, thinking… I am trying to satisfy my actual preferences.
The light at the end of the tunnel is utopia. If I want to get there, I need to make sure the engine runs clean. I don’t think working on global warming will do it—but if I did, that’s where I’d be putting in my time.