I got some results I didn’t quite expect while thinking about this.
I assumed that if it was good to press the button once, it would be good to consider the effects of pressing the button a lot, so I tried to come up with a scenario that was equivalent on a larger scale:
You’re being given an opportunity to break a regulatory tie to approve a massive new mosquito net factory in China.
The mosquito net factory is equivalent to pressing the button 100,000 times a year. Assume you personally couldn’t do that without getting carpal tunnel.
So it will prevent 500,000 malaria deaths a year (there are actually more annual malaria deaths than that: http://www.who.int/mediacentre/factsheets/fs094/en/), and not only that, but if you approve it, you’ll receive a lucrative stock gift, which will pay you 600 MILLION dollars a year. You’ll have a golden parachute for the rest of your life! For the purposes of this scenario, that’s legal under new stock regulations.
Okay, yes, it is VERY polluting. But China doesn’t have any air quality standards for these pollutants, so again, it’s all legal, and the projected deaths are only 100,000 people a year, mostly in China.
But still, we’re saving 400,000 people’s lives every year! And we know it’s going to be at least a year before the circumstances change.
And hey, if you want to use your money to invest in better air pollution scrubbers for the plant, go right ahead! I don’t have the cost-benefit ratios on that right now, since I just had all of my analytics people generate the numbers for the factory.
It seems like the right answer to the scaled-up problem is “Shouldn’t I either run, or have someone run for me, the cost-benefit analysis on the air pollution scrubbers BEFORE making a decision which costs at least 100,000 deaths a year?” It also seems likely that the result would be that I could install scrubbers at some price below 600,000,000 dollars a year, and then I go home happy with no deaths and the remaining money.
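As a minimal sketch of that arithmetic (the scrubber cost below is a made-up placeholder, since the scenario doesn’t give one):

```python
# Minimal sketch of the scaled-up cost-benefit comparison.
# SCRUBBER_COST is a made-up placeholder; the scenario doesn't specify one.
MALARIA_DEATHS = 500_000     # deaths per year if the factory isn't built
POLLUTION_DEATHS = 100_000   # deaths per year from the factory, without scrubbers
STOCK_GIFT = 600_000_000     # dollars per year
SCRUBBER_COST = 200_000_000  # hypothetical; any figure below STOCK_GIFT makes the point

options = {
    "don't approve":          {"deaths": MALARIA_DEATHS,   "money": 0},
    "approve, no scrubbers":  {"deaths": POLLUTION_DEATHS, "money": STOCK_GIFT},
    "approve with scrubbers": {"deaths": 0,                "money": STOCK_GIFT - SCRUBBER_COST},
}

for name, outcome in options.items():
    print(f"{name}: {outcome['deaths']:,} deaths/year, ${outcome['money']:,}/year kept")
```

As long as scrubbers cost anything less than the stock gift, the third option dominates the other two.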
But if I attempt to run the cost-benefit analysis on the magic box, it occurs to me that the default assumption is that there IS no cost-benefit ratio which applies to saving those people. It feels like the implied result is that they’re just magically executed, with no protection possible. Even if one of them is a billionaire who has invested in cryonics, is in perfect health, and is waiting in a hospital with doctors and cryonicists, they’re just permanently dead and can’t be saved.
However, that assumption isn’t necessarily TRUE. There’s no evidence that those people are unsavable in the small scale; it’s just that they likely are savable in an isomorphic large scale where the deaths actually have a non-magical cause.
So the first thing I would probably have to do is attempt to figure out: “What is the cause of the mystery deaths from the box, is it mitigable, and at what price, other than not pressing the button?”
Assuming the likely answer is “The world is inconvenient; Omega will execute those people, and Omega can’t be stopped,” then I have a feeling I would end up paralyzed by “But isn’t there a way to save everyone?” Except I’m not paralyzed by that, because if I were, I would already be paralyzed: I’m faced with decisions like this already, and I don’t feel paralyzed by them. But then again, I don’t usually face Omega either.
So on the large scale: build the mosquito net factory, build the air scrubbers with my personal money, save everyone, make money. On the small scale, “building the scrubbers” is essentially “kill Omega, and use his box without him going around executing people.” But killing Omega is assumed impossible (a potential further complication: Omega may need to be alive for the box to work).
This makes it seem like my actual answer is “Shove your box, Omega; I’m going to make money off of an environmentally safe, for-profit mosquito net factory using your technology and save everyone.” I’m not sure whether I should change that answer, or whether it even makes sense. But it appears to be my current answer.
I’ll try to think about this and see if I come up with a better answer.