Mass-murdering neuroscience Ph.D. student
A Ph.D. student in neuroscience shot at least 50 people at a showing of the new Batman movie. He also appears to have released some kind of gas from a canister. Because of his educational background, this person almost certainly knows a lot about molecular biology. How long will it be (if ever) before a typical bio-science Ph.D. will have the capacity to kill, say, a million people?
Edit: I’m not claiming that this event should cause a fully informed person to update on anything. Rather, I was hoping that readers of this blog with strong life-science backgrounds could provide information that would help me and other interested readers assess the probability of future risks. Since this blog often deals with catastrophic risks and the social harms of irrationality, and given that the events I described will likely dominate the U.S. news media for a few days, I thought my question worth asking. Given the post’s Karma rating (currently −4), however, I will update my beliefs about what constitutes an appropriate discussion post.
In other news, over 91,000 people have died since midnight EST.
Most everyone dies sooner or later; artificially and knowingly making it sooner is where the ethical and legal issues start.
It’s where the legal issues start, certainly. But I would argue that ethically, what matters is how easily any of those 91,012 lives could have been saved. And many could have been saved very easily with malaria nets.
Donate page for anyone suddenly moved to do so
Just did, thanks for the reminder. Maybe we should put together an LW donations page on there to link to and encourage donation via peer pressure?
I’m pretty sure that it’s a hell of a lot easier to avoid shooting at people in a cinema than to earn enough money for the AMF to save a dozen lives. I do the former all the time—in fact, I’m doing that right now as I’m typing.
Yes, but it’s a lot harder for us as a society to prevent people from committing random acts of violence like that.
It’s much easier to directly save one person from malaria, than to save one person from a mad gunman. Not just on the societal level, as RobertLumley stated, but as simple individual actions.
I guess I misunderstood his point, then. I took “ethically, what matters” to mean ‘what matters to the question how bad a guy the gunman was, compared to how bad a guy or gal the typical person is’. There was an action X the gunman could have done such that, if counterfactually that day the gunman had done X instead of what he actually did, at the end of the day there would have been 12 fewer dead people—namely, staying out of the cinema. There was no such obvious action in my case—at least, none which wouldn’t have left me in several thousand dollars of debt.
How do you know it wasn’t 91,011, or 91,013? :-)
I was just adding the 91,000 figure I used and then the 12. It was rhetorical.
Of course! [slaps forehead]
Seems like you have an ax to grind and so are getting completely off-topic. Time for me to disengage.
I’m curious as to why this has been downvoted. Isn’t tapping out generally considered the polite thing to do around here?
Hmm, I’m guessing that it is because yours and EY’s stance (if I understand it right) is along the lines of “every life is sacred, every life is great” and is a common sentiment on LW; and that’s why your comment was upvoted and mine downvoted (probably misunderstood as “he has no valid argument to offer and so disguised this fact by tapping out”). Again, this is only a guess.
I certainly don’t think consciously, or act as if, “every life is sacred, every life is great”.
Nevertheless the people I care about personally, and myself, are still far more likely to die from some disease that is curable but is not eradicated due to lack of funds—including most or all causes of natural death—than due to the actions of madmen, gunmen, evil biology professors, or their tiny intersection.
Which is why when I read news like in this post, I think: “why am I wasting my time thinking about this?”
Hm. Well, FWIW, I don’t think it should have been.
None of them can be saved. Death can only be delayed.
By that logic, the death of the people who were shot by the student was only advanced.
Death coming sooner is, in itself, no more or less a moral issue than a train leaving the station early, before all the ticketed passengers have boarded.
The immoral act was shooting them, not failing to give them mosquito nets.
In case you’re wondering why everyone is downvoting you, it’s because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don’t think there’s much of a difference between killing someone and letting them die. See this fantastic essay on the topic.
(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you’ll get a more nuanced view.)
I just read some of your comment history, and it looks like I wrote that a bit below your level. No offense intended. I’ll leave what I wrote above there for reference of people who don’t know.
No problem. You clearly communicated what you intended to, which is never a problem.
From the link, though:
‘Die trying’, is one moral answer. ‘Gain permission from the child’ is another. ‘Perform an immoral act for the greater good’ is a third answer. I choose not to make the claim “In some cases you should non-consensually kick a small child in the face because hurting people is bad.”
‘Die trying’ doesn’t save the 101 people. If anything, I’d think about the TDT-related benefits of having precommitted to not giving in to blackmail, but in this particular example it’s far from clear that the king wouldn’t have offered you the deal in the first place had he been sure you were going to refuse it—though it is in most similar situations I’m actually likely to face in real life.
Two of the three actions I suggested saved 102 people (assuming that you aren’t one of the 100 innocent people). Two of them are possible in the least convenient universe. Two of them are moral. Those three tradeoffs are the only ones I considered; or do you consider kicking a child in the face to be a moral act?
There is no benefit to committing to not give in to blackmail in this case, except that it might reduce the chances of the scenario happening. One of the advantages of noncompliance is that it reduces the chance of the scenario recurring: can you be blackmailed into kicking any number of children with the same 100 hostages?
I’m aware that my position is unpopular.
What proportion of consequentialist LWers have donated a kidney?
Why do you care? Do you plan to say that whatever fraction it is, it is too small, and this somehow discredits consequentialism itself?
I’m observing that most of them consider themselves immoral hypocrites. Exceptions apply to people medically unsuitable for kidney donation.
The same observation applies to most consequentialists with two typical lungs, and to those who have not agreed to donate their organs postmortem.
From a strict consequentialist viewpoint, it is a moral imperative to have kidneys forcibly harvested (or harvested under threat of imprisonment) from people who are suitable and provided to people without functioning kidneys. (Assuming that the harm of having a kidney forcibly harvested, or harvested under the threat of imprisonment, isn’t on the same scale as the benefit of having a functional kidney.)
The fact that the conclusions which are necessarily drawn from the premises of consequentialism are absurd is what discredits consequentialism as the primary driver of moral decisions. Anyone who considers themselves moral, but has all of their original organs in good working order, agrees that the primary driver of morality is something other than general consequentialism.
No, they don’t.
I like my kidney. I value my kidney more than I value someone else having my kidney unless they are a relative or close friend. I have impolite words to say to anyone who dares suggest that me keeping my kidney is immoral.
You don’t understand consequentialism. The straw man you criticize more closely resembles some form of utilitarianism—which I do reject as abhorrent.
So, you think that the inconvenience of surgery is more significant than the inconvenience of requiring dialysis, because the inconvenience of surgery will be borne by you but the inconvenience of dialysis will be borne by a stranger.
I don’t see anything wrong with that morality, but it isn’t mainstream consequentialism to value oneself that much more highly than others. You also consider it moral to steal from strangers, if there were no chance of getting caught, or to perform any other action where the ratio of benefit-to-you to damage-to-strangers was at least as good as the ratio involved in the kidney calculation, right?
I am fairly confident that you are mistaken about what mainstream consequentialism asserts, see wikipedia for instance.
I also think the original downvoting occurred not due to non-consequentialist thinking but due to the probably false claim that death is inevitable.
I think that I have struck precisely at the flaw in mainstream consequentialism that I was aiming at: it is an inconsistent position for somebody in good overall health to not donate a kidney and a lung, but to correct the cashier when they have received too much change.
Has there been a physics breakthrough of which I am unaware? Is there a way to reduce entropy in an isolated system?
Because once there isn’t enough delta-T left for any electron to change state, everything even remotely analogous to being alive will have stopped.
This depends on your preferences and, as such, is not generally true of all consequentialist systems.
If you generalize consequentialism to mean ‘whatever supports your preferences’, then you’ve expanded it beyond an ethical system to include most decision-making systems. We’re not discussing consequentialism in the general sense, either.
I’m rejecting the cases where what is ‘good’ or ‘morally right’ is defined as being whatever one prefers. That form of morality is exactly what would be used by Hostile AI, with a justification similar to “I wish to create as many replicating nanomachines as possible, therefore any action which produces fewer, like failing to consume refined materials such as ‘structural supports’, is immoral.” A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.
I suppose it is technically possible to believe that donating one out of two functioning kidneys is a worse consequence than living with no functioning kidneys. Of course, since the major component of donating a kidney is the surgery, and a similar surgery is needed to receive a kidney, there is either a substantial weighting towards oneself, or one would not accept a donated kidney if suffering from total renal failure. (Any significant weighting towards oneself results in the act of returning excess change to be immoral in strict consequentialism, assuming that the benefit to oneself is precisely equal to the loss to a stranger).
I’m with you here.
You’ve removed a set of consequentialist theories—consequentialist theories dependent on preferences fit the definition you give above. So you can’t say that consequentialism implies an inconsistency in the example you gave. You can say that this restricted subset of consequentialism implies such an inconsistency.
On a side note:
This suggests to me that you don’t understand the preference based consequentialist moral theory that is somewhat popular around here. I’m just warning you before you get into what might be fruitless debates.
I’ll bite: what benefit is provided by any moral system that defines ‘morally right’ to be ‘that which furthers my goals’, and ‘morally wrong’ to be ‘that which opposes my goals’, over the absence of a moral system, in which instead of describing those actions in moral terms I describe those actions in terms of personal preference?
If you prefer, you can substitute ‘the goals of the actor’ for ‘my goals’, but then you must concede that it is impossible for any actor to want to take an immoral action, only for an actor to be confused about what their goals are or mistaken about what the results of an action will be.
A moral system that is based on preferences is not equivalent to those preferences. Specifically, a moral system is what you need when preferences contradict, either with other entities (assuming you want your moral system to be societal) or with each other. From my point of view, a moral system should not change from moment to moment, though preferences may and often do. As an example: The rule “Do not Murder” is an attempt to resolve either a societal preference vs individual desires or to impose a more reflective decision-making on the kind of decisions you may make in the heat of the moment (or both). Assuming my desire to live by a moral code is strong, then having a code that prohibits murder will stop me from murdering people in a rage, even though my preferences at that moment are to do so, because my preference over the long term is not to.
Another purpose of a moral system is to off-load thinking to clear moments. You can reflectively and with foresight make general moral precepts that lead to better outcomes that you may not be able to decide on a case by case basis at anything approaching enough speed.
It’s late at night and I’m not sure how clear this is.
First of all, if you desire to follow a moral code which prohibits murder more than you desire to murder, then you do not want to murder, any more than you would buy a candy bar for $1 if you want the $1 more than you want the candy bar.
Now, consider the class of rules that require maximizing a weighted average or sum of everyone’s preferences. Within that class, ‘do not murder’ is a valid rule, considering that people wish to avoid being murdered and also to live in a world which is in general free from murder. ‘Do not seize kidneys’ is marginally valid. The choice ‘I choose not to donate my kidney’ is valid only if one’s own preference is weighted more highly than the preference of a stranger. The choice ‘I will try to find the person who dropped this, even though I would rather keep it.’ is moral only if the preferences of a stranger are weighted equally or greater to one’s own.
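For concreteness, here is a minimal sketch of the rule class being described (a weighted sum of everyone’s preferences). All weights and utilities are made-up numbers of my own for illustration, not anything asserted in the thread:

```python
# Hypothetical sketch of a weighted-preference consequentialist rule:
# an action is "moral" iff it maximizes the weighted sum of everyone's
# utilities. All weights and utilities below are invented for illustration.

def weighted_value(utilities, weights):
    """Weighted sum of each person's utility for one action."""
    return sum(w * u for w, u in zip(weights, utilities))

# Person 0 is the potential donor; person 1 is a stranger in renal failure.
actions = {
    "keep kidney":   [0, -10],  # no surgery for me; the stranger faces dialysis
    "donate kidney": [-3, 0],   # I bear the surgery; the stranger is cured
}

for self_weight in (1, 5):  # equal weighting vs. strong self-weighting
    weights = [self_weight, 1]
    best = max(actions, key=lambda a: weighted_value(actions[a], weights))
    print(f"self-weight {self_weight}: the rule picks '{best}'")

# With equal weights the rule demands donation; weigh yourself more than
# about 3x the stranger and keeping the kidney becomes the "moral" choice.
```

The same threshold logic covers the excess-change example: with any self-weight above 1 and a benefit exactly equal to the stranger’s loss, keeping the change maximizes the weighted sum.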
Personally, I would be suspicious of any ethical system in which perfection was so easy that a nontrivial fraction of adherents were perfect.
Are you suspicious of all ethical systems on general principle, or is it only the ones that can be easily followed that you suspect, or some other possibility?
The easily followed ones.
What makes you think that any system is easily followed in all common circumstances?
What makes you think, then, that any discussion of donation rates or ‘hypocrisy’ is of any interest or relevance?
Because donating a kidney IS fairly easy to do. So easy, in fact, that when I realized that I really, really, don’t want to, I had to come to terms with the fact that I needed to reevaluate either morality or my character.
We must have different standards of what easy is, if donating a kidney strikes you as an easy way to help people as compared to, say, donating a thousand bucks to GiveWell’s top charity.
Which doesn’t answer the point. If ease/low standards doesn’t matter to evaluating a theory of ethics, then your questions about kidney are just irrelevant and rhetoric; if ease does matter in deciding whether a theory of ethics is correct or not, why do you implicitly seem to think that easiness is the default and high standards (like in utilitarianism) need to be justified?
I’m not measuring a standard of ethics by looking at the people who support it. I’m saying that if the people who claim to support an ethical principle violate it without considering themselves either immoral or hypocrites, then they believe something different from what they think they believe.
And donating to charity until I become a charity case is unreasonable- if donating to charity is a moral obligation, at what point does it stop being a moral obligation?
Is ‘immoral’ the best word to use in this context? If you asked them, ‘do you think you are as moral as possible or are doing the very most optimal things to do?‘, I suspect most of them would answer ‘no’. Problem solved, apparently, if that was what you really meant all along...
You already explained at what point donating stops. As for ‘unreasonable’, I think that’s more rhetoric on your part since I’m not sure where exactly in reason we can find the one true ethics which tells us to eat, drink, be merry, and stop donating well before that point. If it’s really unreasonable, you’re going to be picking fights with an awful lot of religions, I’d also point out, who didn’t seem to find it unreasonable behavior on the parts of ethical paragons like saints and monks.
“Are you currently violating the moral principles you believe in?” would be the best phrasing.
From one standpoint, it becomes unreasonable when there is something else that I would rather do with that money. Coincidentally, that happens to be exactly the principle I use to decide how much I donate to charity.
There recently was a post on LW (to which I’ll provide a link as soon as I get behind a ‘proper’ computer rather than a smartphone) making the point that the expected number of lives you save is much higher if you donate $400 than if you donate a kidney, so if you’re indifferent between losing $400 and losing a kidney (and given what that post said about the inconvenience of the surgery for kidney explantation, I’d say $400 is even a conservative estimate) you’d better donate the former. (FWIW, I have agreed to donate my organs after death—more precisely, it’s opt-out in my country, but I know how to opt out and haven’t done so.)
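A back-of-the-envelope version of that comparison, with placeholder numbers of my own (the linked post’s actual figures may differ):

```python
# Rough expected-value comparison, using invented placeholder numbers.
# COST_PER_LIFE stands in for whatever cost-per-life-saved estimate you
# trust for a top charity; P_KIDNEY for the expected lives saved by one
# living kidney donation (well below 1, since dialysis keeps many
# patients alive).

COST_PER_LIFE = 2000.0   # hypothetical dollars per life saved
P_KIDNEY = 0.1           # hypothetical expected lives per donated kidney

donation = 400.0
lives_from_cash = donation / COST_PER_LIFE

print(f"${donation:.0f} donated -> ~{lives_from_cash:.2f} expected lives")
print(f"one kidney    -> ~{P_KIDNEY:.2f} expected lives")
# If you are indifferent between losing $400 and losing a kidney, and the
# cash buys more expected lives, donate the cash: that is the post's point.
```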
Oh, so I suppose you have neither $400 nor any redundant organs, then? I was ignoring the hypocrisy of not being impoverished, because not having any significant amount of money has larger long-term effects than not having an extra kidney.
So you wouldn’t mind dying tomorrow rather than in forty years, would you?
I think this is only a very small update to our picture of spree killer or terrorist demographics. We already know about the enrichment of terrorism with engineers, and Aum Shinrikyo had access to some smart generalists (neuroscience is not a directly deadly field). We also know that such folk are much more likely to succeed at super-simple plans, like this one, than at doing complex technological endeavours for the first time in the face of countermeasures.
That doesn’t have to ever happen. Ubiquitous DNA sensors that identify deadly agents, DNA vaccines, ubiquitous surveillance at the planning stage, tripwires in synthesis machines, and so forth can proliferate. The typical bio-science PhD is pretty crappy (Sturgeon’s Law), and likely to fail at something like making the smallpox virus unless there is a cookie-cutter, script-kiddie-style approach pre-packaged. That might happen, but we also might see varied government interference that makes failure much more likely, just as interference at every stage of potential nuclear terrorism makes it impracticable (although that is eased by the rare materials).
It doesn’t have to happen. Certainly, if you take defensive measures, it’s less likely to happen.
But warfare has always been an offensive/defensive battle. Offense has seemed to have the advantage for a while, and random violence in a free society has a huge advantage as well. 1k seems easy. 10k takes some skill. 1,000k seems unlikely without a very contagious bug with just the right incubation time.
“Ubiquitous surveillance at the planning stage”—that sounds rather ominous there, Big Brother.
What’s more amazing to me is how little damage crazies do, when it just doesn’t seem like it would be that hard.
http://www.gwern.net/Terrorism%20is%20not%20Effective
1k seems easy? That is, killing 1k people? That seems possible, but not easy. At very least it deserves a “takes some skill and a huge amount of dedicated planning”.
I started replying, but I just don’t think it’s worth it. I have no particularly brilliant idea here, and likely you’d still just disagree, but I don’t see mileage in giving people ideas about this on the internet just so we can chat.
Sensors and vaccines all rely on the pathogen being known, even sequenced. You could have a register of virulence genes so that detection would be effective against hybrids, but even then it would be very hard to say whether something is dangerous or not.
It is much harder to make and test a new deadly pathogen than to create smallpox from the genome, and avoiding even known dangerous genes forces evildoers to do real science.
The whole premise of the original question was that the evildoers are capable of doing real science, within the limits of one or a few people, years, and personal budgets and tools.
“The genome” wouldn’t really be of much use: in order to create pox from a genome you need to transplant that genome into a similar organism, which is no small feat. It would be much easier just to get your hands on the pathogen somehow, and you don’t need to know anything about bio-science to do that, just have a source. In this matter, advancing technology is not likely to increase risk.
I know, it’s hard to restore smallpox without a sample, harder still to create a new and unrecognizable disease, harder still to make one without using identifiable dangerous already-studied genes.
Nouri and Chyba (2008, “Biotechnology and biosecurity”) reviewed the risks of hostile uses of biotechnology in Global Catastrophic Risks (eds. Bostrom and Ćirković).
An excerpt from section 20.7 (“Catastrophic biological attacks”):
To look at it in another way, it is surprising that someone with that level of knowledge intent on killing people didn’t kill far more people. There are lots of simple ways someone with decent chemical knowledge and access to lab equipment could kill a lot of people in a confined space, but instead he chose to use guns primarily.
My leading hypothesis would be that people in this sort of mental state are not motivated by maximising the number of people they kill but by fitting into the mold of a gunman or fulfilling some other psychological desire. So if that is the case we should be comforted that even if the access of people to dangerous chemicals increases they won’t use them. What would be really dangerous is if someone psychologically normal decided to kill a lot of people.
Since most of the damage seems (thus far) to have been by gun rather than by gas, this particular event, while tragic, does not seem to be evidence for a particular timeline on this sort of risk question.
What’s relevant is that someone capable of getting into a Ph.D. bio-science program had such a strong preference for mass murder that he was willing to spend the rest of his life in jail to achieve this end.
My best guess is that this individual has an organic mental disorder, e.g., schizophrenia with paranoid delusions.
What possible experiences would you anticipate if the first statement was true that you wouldn’t also expect if the second statement was true?
I would expect to find people with very skewed senses of reality (as seen in schizophrenia). I don’t consider that the same as a preference. What’s currently called antisocial personality disorder, also known as psychopathy or sociopathy, I might consider more a preference (in that it deals with how they value other people’s wellbeing, not from their perception of reality).
I wouldn’t be surprised to hear that someone who attacked strangers for no apparent reason was experiencing delusions or hallucinations. I would be surprised to hear that someone with sociopathy did so, because they normally hurt people only for personal gain, and there’s nothing to gain from opening fire on a crowd.
Hearing voices is not a preference.
People with schizophrenia usually do not attack people for no reason either. The independent association of schizophrenia with violent behavior is low, and most of the difference in rates of violence between schizophrenics and non-schizophrenics seems to be attributable to the higher rate of substance abuse among schizophrenics.
If you know that someone is a mass murderer, it should give you a high posterior probability assessment for mental illness, but not a high probability assessment for schizophrenia.
Most sociopaths are not mass murderers or serial killers, but as best I can determine (I’ve found articles that allude to it, but none that give an actual percentage, and Wikipedia pages for individual mass murderers seem to support it) most mass murderers and serial killers are sociopaths. However, most mass murderers and serial killers are not schizophrenics, although it seems that a significantly greater proportion of serial killers and mass murderers are schizophrenics than the proportion of schizophrenics in the general population.
Sorry, you’re right. I spent last year working on a psych ward, and I agree that most people with schizophrenia are unlikely to hurt others.
My guess is that mass murderers with some ideological or practical reason for choosing the people they murder are more likely to be sociopaths. I can’t think of a reason to target people at a movie theater, which makes me put a higher prior on delusions or hallucinations in this case.
Well, given the specific evidence that the culprit dressed up and identified himself as the Joker and informed the police that he had booby trapped his apartment, I’d assign a high probability of schizophrenia, but I wouldn’t write off preference based reasons for a sociopath to gun people down at a movie theater in general. It could be motivated by a fantasy rather than a delusion of the perpetrator. Winston Moseley, for instance, sexually assaulted and killed Kitty Genovese and two other women because he had violent sexual fantasies. Alyssa Bustamante (who I read about recently while looking up information on young female murderers,) committed premeditated murder by her own testimony because she wanted to know what it felt like. Personally, I sometimes find myself frustrated at the ineptitude of both terrorists and the Department of Homeland Security, and have thoughts along the lines of “that’s pathetic, I could show them how it’s done,” (I consider this to be one of the manifestations of my imp of the perverse,) but I’m not inclined to do it because I don’t want to terrorize the population. If I were a sociopath, on the other hand, I might actually be tempted to do it.
Other people’s preferences are not necessarily going to be relatable. Even if there’s no potential for profit or cause for vengeance, for the right sort of person, murder could be a thrill activity, like skydiving (another example of something people do for pleasure that other people can’t imagine why one would ever want to do it.)
Thanks for the serious response. I see what you mean. My unease about this kind of explanation is the asymmetry between the kinds of terms used to explain why someone would choose to commit a horrible act and the kinds of terms used to explain why someone wouldn’t choose to commit a horrible act. After rethinking it, though, I agree that making reference to beliefs in addition to preferences does add explanatory power, but I’m not very certain of this. After all, we make reference to the same physical laws to explain why a bridge stays up as when it falls down; but when we explain “normal” behavior, we talk about preferences and constraints, without saying anything about having non-schizophrenia.
And yet, fame is a very common preference among humans and this guy is now world famous (he even has his own thread on Less Wrong!). Depending on how strong his fame-preference is, we can’t rule out that these tactics weren’t instrumentally rational in his particular case.
Maybe, IDK. Do you never hear voices? I hear a voice (it sounds very similar to my normal speaking voice, but not exactly). As far as I can tell, this is just what linguistic thinking feels like from the inside. I can even make it go away by using my visual imagination, doing strenuous physical exercise, or practicing mindfulness meditation. The extent that I do make it go away seems to be determined by my personal preferences and the constraints imposed by my environment.
I don’t understand the analogy. Explanations why a bridge stays up usually point out different physical laws than explanations why it had fallen down. You rarely hear “the bridge stays up because the construction hasn’t corroded and the engineers made no mistake”, unless there is a reason to suspect something wrong with the bridge.
This is a good question for one to ask one’s self. That said, juliawise nailed it. If I had seen your reply before juliawise posted, I would have said, “‘Disorder’ implies the individual’s map doesn’t match the territory (well ok, consensus reality) in conspicuous ways, e.g., schizophrenia with paranoid delusions.”
The killer apparently identified himself to police as “the Joker” (with hair dyed red or orange instead of the comic book character’s green). He also rigged his apartment with booby traps but then told police about them. I’m not getting a strong sense of coherent goal-seeking here, in spite of the reported fact that the venue of the attacks was apparently chosen for maximum killing.
Yeah, I see it now. I’m pretty sure I would anticipate them having a significantly more faulty web of belief than non-serial killers.
If you think that the venue was chosen for maximum killing, you haven’t considered what someone who was optimizing for killing would do.
Another possibility is that Cyan and James Holmes are not as creative or intelligent as Decius.
http://www.amazon.com/Schizophrenia-A-Very-Short-Introduction/dp/0192802216/
I’m not sure if that’s surprising. I’ve seen somewhere that psychopaths have higher than average IQ, and there were some serial killers who certainly could have managed a PhD program… Wait, Kaczynski?
Actually I’d be more scared of a Kaczynski than of Holmes. Holmes seems to enjoy his mass murder up close and personal. Kaczynski was driven by ideology; that type seems much more dangerous.
Kaczynski killed 3 people in 20 years. Holmes killed 12 people in a few minutes. I can’t easily get away from an estimate that I am around 4X as likely to suffer from someone like Holmes as from someone like Kaczynski, and so if I had my fear under any sort of rational control and thought it was helpful in lowering my chances of harm, I would fear Holmes more, myself.
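As I read it, the arithmetic behind that 4X figure is just the ratio of victims per killer (my reconstruction, not the commenter’s own working):

```python
# Victims per killer, per the figures quoted above.
kaczynski_deaths = 3    # over roughly 20 years
holmes_deaths = 12      # in a few minutes

print(f"~{holmes_deaths / kaczynski_deaths:.0f}x as many victims per killer")
# This implicitly treats the two killer types as equally common; a fuller
# estimate would also weight by the base rate of each type.
```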
I suppose it depends on whether you expect to be the target of someone’s ideology.
Kaczynski targeted mostly people in technical fields (science professors, grad students, and people in the computer business). Anders Breivik, who copied parts of Kaczynski’s manifesto in his own, targeted mostly members of a particular political party.
Why is the category of ‘capable of getting into a Ph.D. bio-science program’ relevant? I can kind of see where you are going with this; it’s easy to say that someone who can design something that could hurt a lot of people has better access and is therefore more likely to do so.
But while this is evidence that someone of the category you’ve named can want to engage in mass murder, it isn’t evidence that he had better access or was more likely to do so given that he may have had better access. In actual fact, since the harm here largely involved a firearm and we don’t know if the canister was even related to his bio-science program, you should be updating downward on someone in that category relying on a biochemical agent rather than just using a gun. You have evidence that, even with a (perhaps strong) bio-science background, people who engage in mass murder for what appear to be their own preferences just use a damn gun like everyone else with the same preference.
People who kill with guns aren’t trying to kill lots of people. Mass murderers prefer armies, or if armies aren’t available, bombs. A backpack-sized IED has killed more people than a gun.
Generally I agree, for certain values of “a lot of people” and with the restrictions of a given person at a given time. Fortunately, not all mass murderers are optimal mass murderers.
Very fortunately, there have been very, very few optimized murderers. The most effective ones use armies and countries or religions, while the ones who use bombs are many orders of magnitude lower in effectiveness.
Part of the lesson is that guns in the hands of citizens help prevent mass murderers, even though they facilitate group murders.
The technology already exists for hundreds or thousands of deaths, you will grant; but they are not obviously being used, and the instances where unusual methods are being used have low body counts (the 2001 anthrax attacks). Given that spree killers are not already using them, why would we expect this to change?
Are you arguing either that even a small probability of a spree killer using them is too much when the damage could run into the hundreds of thousands, or that the increasing capabilities themselves will increase the probability?
I’m wondering how much damage a guy like this could do in the future if he decided to kill as many people as possible. I figured that some readers would have a strong enough life science background to be able to make a reasonable estimate.
This is assuming that his goal is just to kill as many people as possible. He could much more easily have set up some kind of bomb in the movie theater and killed as many, if not more, people. My impression is that he wanted the visceral rush of murdering all of those people first-person.
I certainly think there is a considerable risk, as bio-engineering becomes routine even in smaller labs. One could rely on brute-force methods: create an array of random compositions of influenza viruses, for example, attempt to infect a few people, and hope something sticks (selection). And then several different strains with non-overlapping surface antigens—you get no overlapping immunity—emerge from your batch, instead of one super-pathogen.
How hard is it to get one’s hands on some of the select agents?
The second one on the list can be created accidentally in the kitchen, using ingredients already present in most kitchens. I’m not sure if there is a safeguard against slipping it into some portion of the food pipeline, such that it struck as many as a million prior to a recall being issued.
Maybe he did release some biological agent, and the shooting was a cover-up for the release of a gas with viruses.
I don’t think a doctor in bio-science would have too much trouble figuring out how to build a nuke. I’ve read that a lot of nuclear testing facilities aren’t nearly as guarded as we’d hope. If what they were implying was correct, then it’s already possible.
Shouldn’t Iran already have nukes then? Surely they have many employees that are equivalent in skill to a “doctor in bio-science”.
You know, figuring out how to build a nuke isn’t that hard, but getting enough enriched uranium to build one is.
Iran isn’t going to resort to stealing the uranium.
Why not? They probably just blew up a bus full of Israelis in Bulgaria.
It won’t make the Iranians happy. They believe their country has as much a right to nukes as any other, and it would be wrong for America to stop them when America has nukes. I doubt they feel the same about stealing nukes.
Also, they’re denying blowing up the bus. Nukes aren’t very useful if you deny having them.
Depends how believable your denial is. Israel does it all the time.
Unless you intend to use them as actual weapons, rather than strategic deterrence. Then they’re most useful if no one else believes you have them.
That would be worse than not having nukes at all. If anyone finds out who used the nukes, they will end you. If not, you will horribly damage the economy, which will hurt you a lot.
Sometimes states (or rulers or generals) are deluded about their chances in war; or accept high risks of being destroyed in exchange for a high chance of destroying someone else first; or don’t think in terms of rational cost/benefit or risk/prize analysis at all.
I don’t understand. If you nuke an enemy and nobody knows it was you, then presumably you damage their economy, not your own, which would not hurt you. What did you mean?
There’s a world economy. If you damage a country that much, they won’t be able to trade with you.
That’s not an argument against nukes, but against all war and indeed all large scale hostile actions. The argument itself aside, I observe that in actual practice this does not deter nations from waging ruinous war. Hell, it doesn’t even stop them from waging ruinous civil war, ethnic cleansing, etc. which damage their own economy. People may not be rational economic agents, but more importantly, they aren’t agents who care about the economy over other things.
Yeah. I guess I was thinking more along the lines of nuclear annihilation, but you can’t really do that without being overt. The best you could do is a few suitcase nukes.
There is a rationality link here, but it’s not the one you were thinking of.
We’ve Seen This Movie Before—By ROGER EBERT
Colorado Gun Laws Remain Lax, Despite Changes After Columbine
I think that the numbers for responsible gun owners still speak for themselves: no legally owned fully-automatic firearm in private hands in the US has been linked to a crime.
That is a fascinating piece of data, if true. Is the comment downvoted heavily because it is false, or because of some political concern? Can someone sufficiently interested in US gun laws to be aware of the statistics confirm for us?
My country doesn’t allow random citizens legal ownership of fully automatic weapons, and I don’t particularly object to that. But even so, it’d be rather amusing if even in the US, where fully automatic weapons are legal, it is still only black-market fully automatic weapons that have ever been linked to a crime. If true, I assume the factors of price and of being ridiculously easy to trace back to you due to registration are the deterrent. (Possibly combined with a sufficiently free-flowing black market.)
Mostly price. Since fully automatic weapons are legal, but producing or importing them for civilian use in the USA has been illegal for many years, the few remaining command quite a premium. They are not used in stickups for the same reason Rolls-Royces are not used as getaway cars.
I wouldn’t use a gun registered in my name and of a type that is relatively rare for the same reason I wouldn’t use a Rolls-Royce with my number plate right there on the back as a getaway car. I don’t want the authorities to have a reason to privilege me as a hypothesis just because Mortimer and I are the only two people registered with that kind of weapon in the entire city.
Black market FA firearms are:

* Cheaper
* Available to people who cannot pass a background check
* Less traceable (if a black market weapon is found at the scene of a crime, it is harder to determine who owned it last)
* Fairly easy to create through easily researched and fairly simple modifications to grey market or legal semi-automatic firearms
Compare the Waco Branch Davidians, who were served with warrants because they purchased the materials under circumstances which indicated probable cause to believe that they were being used for illegal purposes.
Try an extra newline before the bulleted list there.
Try asterisks for your bulleted list there.
For balance:
A report I heard on the evening of the 21st of June said that according to then-current information all his firearms had been bought legally. It was later confirmed separately. However, as Decius notes below, it is unlikely that he used a fully automatic firearm, as the difficulty in acquiring one makes it more likely that if the man used a fully automatic weapon, it was an illegally modified semi-automatic.
Disinformation is NOT noise. I included 4 adjective phrases in my claim, and I know that the claim is false considering any combination of only three of them. The sources you linked choose explicitly to not state if the firearms in question are fully-automatic, but the shops listed typically don’t make five-figure sales, like a fully-auto AR-15.
For reference, the least expensive firearms matching the ‘fully-automatic’ and ‘legal in the US’ qualifiers are around $2500 each if the seller needs to sell them within hours, and $4000 if you want to buy one in a reasonable amount of time.
Thanks for looking into it more thoroughly. If there are any other updates besides the sources I linked to (which were the only ones available at the time) please inform me—I normally do not follow those organizations’ news coverage either. I’m not sure why you mention the costs of the firearms, but it might be worthwhile to know that I heard on the evening of the 21st of June on All Things Considered that he bought the firearms in May and June.
Why is it disinformation? Those were the only sources available at the time, and they both quoted Oates, who I believe is a sheriff who initially worked the case before the FBI took over. If this is still disinformation, then please correct me so I may better find my news.
It’s disinformation because I made a specific claim, and you responded by refuting a claim different from the claim I made.
The costs are relevant because they are one of the reasons why legally acquired FA firearms are not used in crimes. They typically cost about 10 times as much as illegally modified semi-auto firearms of the same model, and involve a more significant background check to acquire than firearms in general.
If there was a legally acquired FA firearm used, the serial number and ownership history would be available within minutes of the time the name and DOB (or DOB and zipcode) of the owner was known.