What ethical principles can we use to decide between “Shut Up and Multiply” and “Shut Up and Divide”?
Why do we have to decide between them? Long before I ever heard of “Shut Up and Multiply,” I used a test that produced the same results, but worked equally well for “Shut Up and Divide.” My general statement was, “Be consistent.” I would put things in the appropriate context and apply similar value functions regardless of size or scope, or, to phrase it better, I would make sure my consistently applied value function actually took size and scope into account.
Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
From where should we derive our values? Well, we can use what’s already there (the value function implemented in the human brain), we can appeal to something else, or we can apply our reason and alter the function as needed. It seems to me that we don’t really have access to that “something else,” so I doubt we have much choice here: we’re left working with the function we already have and revising it. Our natural empathic hardwiring will shoot off all kinds of flares when we see suffering up close and personal, and will fail to activate when it should on the larger scale. We can still place arbitrary hacks into the value function to try to correct the scope insensitivity. The function was arbitrary in the first place, so there’s no conflict other than ease of application.
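(As a purely illustrative toy model of what I mean by “correcting” scope insensitivity, not anything from the original post: the saturating curve standing in for hardwired empathy and the linear “hack” below are my own assumptions for illustration.)

```python
import math

# Toy sketch: a scope-insensitive "native" response that saturates as the
# number of sufferers grows, next to a "hacked" valuation forced to scale
# linearly with scope. Function names and curve shapes are assumptions.

def native_response(n_sufferers: int) -> float:
    """Empathic flare that barely distinguishes 1,000 victims from 1,000,000."""
    return math.log1p(n_sufferers)  # saturating curve stands in for hardwired empathy

def corrected_response(n_sufferers: int, per_person_weight: float = 1.0) -> float:
    """Arbitrary 'hack': insist the valuation consider size and scope directly."""
    return per_person_weight * n_sufferers  # linear in scope

for n in (1, 1_000, 1_000_000):
    print(n, round(native_response(n), 2), corrected_response(n))
```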
And an interesting meta-question arises here as well: how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist?
How much of our values comes from hardwiring as opposed to reasoned thought? Well, probably however much we haven’t put thought into. For most people, I expect this to be a large portion. However, once we’ve thought about it, and applied our function to our functions, we can label them good or bad, and work at adding more arbitrary hacks to the arbitrary, evolution-designed, hardwired values. I see it this way: a piece of the function, an item on the list of human morality, is “this list may change or update as needed,” or, “this function is subject to revision based on its output when run against itself.” Again, the ease of doing this is the more interesting debate, in my opinion.
And if many of our values are just the result of cognitive errors or limitations, have we lived with them long enough that they’ve become an essential part of us?
If by “essential” you mean, “someone without it would not be human,” then I grant that it’s possible. But if you mean, “we can’t change it,” then I would disagree. We can change our values, now and certainly in the future as we begin rewiring things on a more fundamental level. I see it as another question of definitions: if we change ourselves “for the better,” are we “extincting the human race,” or “continuing as human and more”? It seems that practical reality won’t care either way.