I am holding a lot of dangerous knowledge and am encumbered by a variety of laws and non-disclosure agreements. This is not actually uncommon. So arguably, I am already being paid to keep my mouth shut about a variety of things, but these are mostly not original thoughts. This specific idea is, in my best judgement, both dangerous, and unencumbered by those laws and NDAs.
The assertion that my default position is ‘altruistic silence’ means that this is not ‘posting a threat on a public forum’. It would be a real shame if a large variety of things that are currently not generally known were to become public. While I would indeed like to be paid not to make them public (and, as previously stated, in some cases already am), this should not be taken as an assertion to the reader that, if they fail to provide me with some tangible benefit, I will do something harmful.
This is, in a broader sense, a question: ‘If there exists an idea which is simply harmful, for example a phrase which, when spoken aloud, turns a human into a raging cannibal, such that there is no value whatsoever to increasing the number of people aware of the idea, how can people who generate such ideas be incentivized not to spread them?’
Maybe the best thing to do is to look for originators of new ideas perceived as dangerous, and encourage them to drink hemlock tea before they can hurt anyone else. https://en.m.wikipedia.org/wiki/Trial_of_Socrates
Perhaps your post would have been received differently if the title were “How can dangerous ideas be disposed of?” or “How can society incentivize people not to unleash terrible ideas on the world?” and the post proceeded accordingly. (The dangerous and empty* personal anecdote could be replaced with a more mundane musing: ‘[This technology] can obviously be used in bad ways**’, or ‘Given how nukes impacted history, and might in the future, how can things be altered, or what institutions or incentives can be created or implemented, so that problems like that don’t happen or are less likely in the future?’)
*People are probably unhappy about that.
**A well known example would do.
this should not be taken as an assertion to the reader that, if they fail to provide me with some tangible benefit, I will do something harmful.
I recommend updating the post to make that slightly more clear.
Updated; I left the original wording as intact as possible. I think the ‘emptiness’ of the personal anecdote is important because it demonstrates the messaging challenge faced by someone in this position. If the torches and pitchforks are out in ‘this’ community, imagine how the general public would react.
“I have an idea that makes the world a worse place. I could potentially profit somewhat personally by bringing it to life, but this would be unethical. How badly do I need the money?” is, in my opinion, probably a fairly common thought in many fields. Ethics can sometimes be expensive, and the prevailing morality, at least in the USA, is ‘just make the money’. Fortunately, in my own case, I do not have visions of large sums of money or prestige on the other side of disclosure, so I am not being tempted very strongly.
Farmers are regularly paid not to grow certain crops, and this makes economic sense somehow. How could someone in my position be incentivized to avoid disclosure of harmful ideas, without requiring that disclosure?
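A toy way to make that incentive question concrete (all of the numbers and the payoff structure below are hypothetical, just a minimal sketch of the comparison, not a claim about any real case): a payment for silence only works if it matches or beats what the person expects to gain, net of social costs, from disclosing.

```python
# Hypothetical payoff sketch: when does staying silent beat disclosing?
# Every number here is made up purely for illustration.

def best_choice(disclosure_gain: float, social_cost: float, silence_payment: float) -> str:
    """Compare the private payoff of disclosing vs. staying silent."""
    disclose_payoff = disclosure_gain - social_cost   # profit minus opprobrium
    silent_payoff = silence_payment                   # the 'paid not to grow crops' analogue
    return "stay silent" if silent_payoff >= disclose_payoff else "disclose"

# Tempted by a $50k payday, facing a $10k social cost, offered nothing for silence:
print(best_choice(50_000, 10_000, 0))        # -> disclose
# The same person, offered a $45k subsidy for keeping quiet:
print(best_choice(50_000, 10_000, 45_000))   # -> stay silent
```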
Arguably, an alternative to dealing with the social opprobrium of making a pitch like mine would be to rationalize disclosure: argue that the idea is not harmful but is in some way helpful, say that people who say otherwise have flawed arguments, and attempt to maximize profit while minimizing the harms to myself and my own community.
Like an award-winning pornographer who makes a strenuous effort to keep his children and family away from his work.
There are plenty of ideas which can be used for good or ill. (Disrupting the messaging system that viruses or bacteria use to coordinate attacks on the host once they’ve built up a sufficiently large population sounds obviously good, until you ask: once the population gets high enough, won’t they manage to coordinate a larger-scale attack even if you’re trying to disrupt their signals?)
The sense in which something can only be used for ill is harder to pin down. (Using machinery for creating viruses to create a deadly virus, and then launching a bio-attack on people with said virus, qualifies as only evil.) Perhaps specificity is the key. Is there a way the idea can be generalized for some good use (especially one which outweighs the risk)?
Unfortunately, you really nailed the issue. Out of an abundance of caution, I won’t use your violent analogy of a bio-weapon here, as that could be construed as furthering the ‘blackmail’ misinterpretation of my writing.
To use the analogy I added to the OP: there may in theory be good reasons to market things to vulnerable populations (like children), and there may in theory be good reasons to study nicotine marketing (marketing less harmful products to existing users), but someone with knowledge of both fields who realizes something like ‘by synthesizing existing work on nicotine marketing with existing work on marketing things to children, I have identified a magic formula that will double the number of smokers in the next generation’ has discovered a dangerous idea.
If, for example, this person is employed at a marketing agency that has taken work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness (‘so what have you been working on?’).
As altruists, we would like that idea to remain unknown. How do we appeal to that person’s selfishness without demanding disclosure to some entity that promises not to actually do anything with the idea?
The Unabomber had a proposed solution to this problem: people he judged to be producing ideas that were harmful to whatever it was that he cared about received bombs in the mail, thus appealing to engineers’ desire not to get hurt in bombings. I understand that there is a country in the Middle East which has historically taken the same approach.
Perhaps I should view the ‘delete this’ command, and the suggestion that I was violating a social norm that is often punished by violent men (‘posting a threat in a public forum, bad decision, wut wut’), in the most upvoted comment on this thread as an endorsement of that ‘negative reinforcement’ strategy by this community?
an endorsement of that ‘negative reinforcement’ strategy by this community?
Only socially, I imagine—via downvotes, yes; bombs, no.
I’d guess it’s mostly about the belief that blackmail was involved, but there’s only one way to test that.
If, for example, this person is employed at a marketing agency that has taken work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness (‘so what have you been working on?’).
I imagine people react differently to “my work has bad incentives in place; it’s a shame I’m not paid for not doing X” than to “I’m looking for a job which doesn’t encourage/involve doing bad things.” (Yes, people demand ‘altruism’ of others.)
a magic formula that will double the number of smokers in the next generation’
… [is] a dangerous idea.
The question is, can this be reversed? Can a formula for reducing the number of smokers be devised instead? Or is the thing you describe just the reverse of this (work on how to reduce harm turned into work on how to increase harm)?
To use the zombie-words example I raised in a previous comment:
Imagine a “human shellcode compiler”, which requires a large amount of processing power and can generate a phrase that a human who hears it will instantly obey, and against which no countermeasures are available other than ‘not hearing the phrase’. Theoretically, this could have good applications if very carefully controlled (“stop using heroin!”).
Imagine someone runs this to produce a command like ‘devour all the living human flesh you can find’. The compiler is salvageable; this particular compiled command is not.
I believe my idea to be closer to the second example than the first, though not nearly to the same level of harm. Based on the qualia computing post linked elsewhere, my most ethical option is ‘be quiet about this one and hope I find a better idea to sell’.