Updated; I left the original wording as intact as possible. I think the ‘emptiness’ of the personal anecdote is important because it demonstrates the messaging challenge faced by someone in this position. If the torches and pitchforks are out in ‘this’ community, imagine how the general public would react.
“I have an idea that makes the world a worse place. I could potentially profit somewhat personally by bringing it to life, but this would be unethical. How badly do I need the money?” is, in my opinion, probably a fairly common thought in many fields. Ethics can sometimes be expensive, and the prevailing morality, at least in the USA, is ‘just make the money’. Fortunately, in my own case, I do not have visions of large sums of money or prestige on the other side of disclosure, so I am not being tempted very strongly.
Farmers are regularly paid not to grow certain crops, and this makes economic sense somehow. How could someone in my position be incentivized to avoid disclosure of harmful ideas, without requiring that disclosure?
Arguably, an alternative to dealing with the social opprobrium of making a pitch like mine would be to rationalize disclosure: argue that the idea is not harmful but in some way helpful, say that people who claim otherwise have flawed arguments, and attempt to maximize profit while minimizing the harms to myself and my own community.
Like an award-winning pornographer who makes a strenuous effort to keep his children and family away from his work.
There are plenty of ideas which can be used for good or ill. (Disrupting the messaging system that viruses or bacteria use to coordinate attacks on the host once they’ve built up a sufficiently large population sounds obviously good, until you ask: ‘once the population gets high enough, won’t they manage to coordinate a larger-scale attack even if you’re trying to disrupt their signals?’)
The sense in which something can be used only for ill is harder to pin down. (Using machinery for creating viruses to create a deadly virus, and then launching a bio-attack on people with it, qualifies as only evil.) Perhaps specificity is the key: is there a way the idea can be generalized to some good use (especially one which outweighs the risk)?
Unfortunately, you really nailed the issue. Out of an abundance of caution, I won’t use your violent analogy of a bio-weapon here, as that could be construed as furthering the ‘blackmail’ misinterpretation of my writing.
To use the analogy I added to the OP, there may in theory be good reasons to market things to vulnerable populations (like children), and there may in theory be good reasons to study nicotine marketing (market less harmful products to existing users), but someone with knowledge of both fields who realizes something like ‘by synthesizing existing work on nicotine marketing with existing work on marketing things to children, I have identified a magic formula that will double the number of smokers in the next generation’ has discovered a dangerous idea.
If, for example, this person is employed at a marketing agency that took work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness (‘so, what have you been working on?’).
As altruists, we would like that idea to remain unknown. How do we appeal to that person’s selfishness without demanding disclosure to some entity that promises not to actually do anything with the idea?
The Unabomber had a proposed solution to this problem: people he judged to be producing ideas harmful to whatever it was he cared about received bombs in the mail, thus appealing to engineers’ desire not to get hurt in bombings. I understand that there is a country in the Middle East which has historically taken the same approach.
Perhaps I should view the ‘delete this’ command, and the suggestion that I was violating a social norm that is often punished by violent men (‘posting a threat in a public forum, bad decision, wut wut’), in the most upvoted comment on this thread, as an endorsement of that ‘negative reinforcement’ strategy by this community?
Only socially, I imagine: downvotes, yes; bombs, no.
I’d guess it’s mostly about the belief that blackmail was involved, but there’s only one way to test that.
If, for example, this person is employed at a marketing agency that took work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness (‘so, what have you been working on?’).
I imagine people react differently to “my work has bad incentives in place; it’s a shame I’m not paid for not doing X” than to “I’m looking for a job which doesn’t encourage or involve doing bad things.” (Yes, people demand ‘altruism’ of others.)
‘…a magic formula that will double the number of smokers in the next generation’ … [is] a dangerous idea.
The question is, can this be reversed? Can a formula for reducing the number of smokers be devised instead? Or is the thing you describe just the reverse of this (work on how to reduce harm turned into work on how to increase harm)?
To use the zombie-words example I raised in a previous comment:
Imagine a “human shellcode compiler”: a system that requires a large amount of processing power and can generate a phrase that a human who hears it will instantly obey, with no countermeasures available other than ‘not hearing the phrase’. Theoretically, this could have good applications if very carefully controlled (“stop using heroin!”).
Imagine someone runs this to make a command like ‘devour all the living human flesh you can find’. The compiler is salvageable, this particular compiled command is not.
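The distinction between the tool and its outputs can be sketched in code, in a purely illustrative way. Everything here is hypothetical (no such compiler exists); the point is only that gating belongs on the general-purpose tool, because a harmful artifact, once produced and released, cannot be recalled.

```python
# Illustrative sketch only: all names and behavior are hypothetical.
HARMFUL_GOALS = {"devour all the living human flesh you can find"}

def compile_phrase(goal: str) -> str:
    """The 'compiler': a neutral, general-purpose tool that can be gated."""
    if goal in HARMFUL_GOALS:
        # The tool itself is salvageable because misuse can be refused here;
        # a compiled command that leaks out cannot be un-published.
        raise PermissionError("refusing to compile a harmful command")
    return f"<compiled phrase for: {goal}>"

print(compile_phrase("stop using heroin!"))
# -> <compiled phrase for: stop using heroin!>
```

The gate sits inside the tool, not on the artifact: once a harmful phrase exists, the only remaining defense is ‘not hearing the phrase’, which is exactly the asymmetry the analogy describes.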
I believe my idea to be closer to the second example than the first, though not nearly to the same level of harm. Based on the qualia computing post linked elsewhere, my most ethical option is ‘be quiet about this one and hope I find a better idea to sell’.