The idea of moral demandingness is really a separate issue.
I don’t see what difference it makes whether you are selfish or not when it comes to friendly AI research. I believe that being altruistic is largely instrumental in maximizing egoistic satisfaction.
Thanks for the post by Nick Bostrom. But it only adds to the general confusion.
There seem to be dramatic problems with both probability-utility calculations and moral decision making. Taking those problems into account makes it feel like one might as well flip a coin to decide.
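To make that worry concrete, here is a minimal sketch in Python with entirely invented payoffs, costs, and probabilities (none of them are anyone’s actual estimates), showing how an expected-utility comparison can flip depending on a probability guess that is itself uncertain across several orders of magnitude:

```python
# Hypothetical illustration: how fragile an expected-utility ranking is when
# the input probability is little more than a guess.

def expected_utility(p_success, payoff, cost):
    """Expected utility of an action with one uncertain payoff and a fixed cost."""
    return p_success * payoff - cost

# Two made-up options: working on a speculative cause with a huge payoff
# versus working on a mundane cause with a modest, likely payoff.
payoff_speculative, cost_speculative = 10**9, 10**5
payoff_mundane, cost_mundane = 10**6, 10**4

for p in (1e-2, 1e-4, 1e-6):  # guesses spanning four orders of magnitude
    eu_speculative = expected_utility(p, payoff_speculative, cost_speculative)
    eu_mundane = expected_utility(0.5, payoff_mundane, cost_mundane)
    better = "speculative" if eu_speculative > eu_mundane else "mundane"
    print(f"p={p:g}: EU(speculative)={eu_speculative:,.0f} "
          f"EU(mundane)={eu_mundane:,.0f} -> {better}")
```

The ranking reverses as the guessed probability moves within its own error bars, which is the sense in which the calculation can feel no better than a coin flip.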
Michael Anissimov recently wrote:

“For instance, you must have made decisions for your children that were more in alignment with what they would want if they were smarter. If you made judgments in alignment with their actual preferences (like wanting to eat candy all day — I don’t know your kids but I know a lot of kids would do this), they would suffer for it in the longer term.”
This sounds good but seems to lead to dramatic problems. In the end it is merely an appeal to intuition without any substance.
If you don’t try to satisfy your actual preferences, what else is there to satisfy?
In the example stated by Anissimov, what actually happens is that the parents try to satisfy their own preferences by not allowing their children to die of candy intoxication.
If we were to disregard our current preferences and postpone having fun in favor of gathering more knowledge, then we would eventually end up as perfectly rational agents in static game-theoretic equilibria.
The problem with the whole utility-maximization heuristic is that it eventually deprives us of our human nature by reducing our complex values to mere game-theoretic models.
Part of human nature, part of what we value, is the way we like to decide. It won’t work to just point at hyperbolic discounting and say that it is time-inconsistent and therefore irrational.
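For anyone who hasn’t seen the argument being pointed at here: the standard one-parameter hyperbolic discount values a reward A delayed by D as V = A / (1 + k*D), and the time-inconsistency charge comes from the preference reversal below. A minimal sketch, with a made-up impatience parameter k:

```python
# Minimal sketch of why hyperbolic discounting is called "time-inconsistent".
# V = A / (1 + k * delay) is the standard one-parameter hyperbolic discount;
# k = 0.2 is an invented impatience parameter, chosen only for illustration.

def hyperbolic_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

small_soon = 100  # smaller reward, available one day earlier
large_late = 110  # larger reward, one day later

# Viewed from a distance (30 vs. 31 days out), the larger-later reward wins...
print(hyperbolic_value(small_soon, 30), hyperbolic_value(large_late, 31))
# ...but viewed up close (today vs. tomorrow), the preference reverses.
print(hyperbolic_value(small_soon, 0), hyperbolic_value(large_late, 1))
```

The same person endorses the patient choice at a distance and the impatient one up close; that reversal is all the label “time-inconsistent” refers to.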
Human preferences are always actual; we do not naturally divide our decisions into instrumental and terminal goals.
I don’t want a paperclip maximizer to burn the cosmic commons. But neither do I want to devote most of my life to mitigating that risk. This is not a binary decision; that is not how human nature seems to work.
If you try to force people into a binary decision between their actual preferences and some idealistic far mode, then you cause them to act according to academic considerations rather than the complex human values they are supposed to protect.
Suppose you want to eat candies all day and are told that you can eat a lot more candies after the Singularity, if only you work hard enough right now. The problem is that there is always another Singularity that promises even more candies. At what point are you actually going to eat candies? But that is a rather academic problem. There is a more important problem concerning human nature, as demonstrated by extreme sports. Humans care much more about living their lives according to their urges than about maximizing utility.
What does it even mean to “maximize utility”? Many sportsmen and sportswomen are aware of the risks associated with their favorite activity. Yet they take the risk.
It seems that humans are able to assign infinite utility to pursuing a particular near-mode activity. Deliberately risking your life doesn’t seem to maximize experience utility, since you could get much more of the same or a similar experience by safer means. And how could one “maximize” terminal decision utility in the first place?
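A rough illustration of that claim, with numbers that are nothing but placeholders: under almost any assignment of experience values and accident probabilities, the riskier outing loses the expected-utility comparison to a tamer substitute that delivers a similar experience, and yet people keep choosing it.

```python
# Hypothetical numbers only: expected "experience utility" of a risky activity
# versus a tamer substitute that delivers a broadly similar experience.

p_fatal = 0.005         # assumed chance of a fatal accident per outing
thrill_risky = 100      # assumed experience value of one risky outing
thrill_safe = 80        # assumed value of the tamer substitute
future_value = 50_000   # assumed value of the experiences a fatality forecloses

eu_risky = (1 - p_fatal) * thrill_risky - p_fatal * future_value
eu_safe = thrill_safe

print(f"risky: {eu_risky:.1f}   safe: {eu_safe:.1f}")
```

On any numbers remotely like these the safe option dominates, which is why the choice looks less like utility maximization and more like following an urge.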
When I apply your objections to my own perspective, I find that I see the actions that aren’t focused on reducing involuntary death (eating candies, playing video games, sleeping) as necessary for the actual pursuit of my larger goals.
I am a vastly inefficient engine. My productive power goes to the future, but much of it bleeds away—not as heat and friction, but as sleep and candy-eating. Those things are necessary for the engine to run, but they aren’t necessary evils. I need to do them to be happy, because a happy engine is an efficient one.
I recognized two other important points. One is that I must work daily to improve the efficiency of my engine. I stopped playing video games so I could work harder. I stopped partying so often so I could be more productive. Et cetera.
The other point is that it’s crucial to remember why I’m doing this stuff in the first place. I only care about reducing existential risk and signing up for cryonics and destroying death because of the other things I care about: eating candies, sleeping, making friends, traveling, learning, improving, laughing, dancing, drinking, moving, seeing, breathing, thinking… I am trying to satisfy my actual preferences.
The light at the end of the tunnel is utopia. If I want to get there, I need to make sure the engine runs clean. I don’t think working on global warming will do it—but if I did, that’s where I’d be putting in my time.