I’m game!
Fredrik
What if I were to try to create such a web app? Should I take 5 minutes every lunch break asking friends and colleagues to brainstorm questions? Maybe write a LW post asking for questions? Maybe there could be a section of the site dedicated to collecting and curating good questions (crowdsourced or centrally moderated).
No matter. Just received word!
I guess I wasn’t selected if I haven’t received an email by now? Or are you staying up late sorting applications? Will you email just the selectees or all applicants?
I had the same experience.
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers—possibly very efficiently due to our superior intelligence—rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?
Well, the AI would “presume to know” what’s in everyone’s best interests. How is that different? It’s smarter than us, that’s it. Self-governance isn’t holy.
Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the will, expressed beforehand, of a lot of people, who would rather watch The Simpsons and have sex than have their lives radically transformed by some oversized toaster.
I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone’s best interests, wouldn’t it be rational to forgo autonomous choice? Can we agree that it would be?
Well, the attention of those capable of solving FAI should be undivided. Those who aren’t equipped to work on FAI but who could potentially make progress on intelligence-enhancing therapies should do so.
Culture has also produced radical Islam. Just look at http://www.youtube.com/watch?v=xuAAK032kCA to get a bit more pessimistic about the natural moral zeitgeist evolution in culture.
So individual autonomy is more important? I just don’t get that. It’s what’s behind the wheel of the autonomous individuals that matters. It’s a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to “way too fracking high”.
It’s everyone’s happiness and progress that matters. If you can raise the floor for everyone, so that we’re all just better, what’s not to like about giving everybody that treatment?
You don’t have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.
But yes, I sympathize with you; I’m actually just like that myself. Some people wouldn’t be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it’s safe, helpful, and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or just weren’t capable of grasping it, would oppose it strongly—possibly enough to wage war on the rest of the world over it.
It would also take time to reach the whole population with a governmentally mandated treatment. There isn’t even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.
Assuming an efficient treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), then the right thing to do would be to deliver it in the manner causing the least social upheaval and opposition. That would be a covert dispersal, most definitely. A globally coordinated release of a weaponized retrovirus, for example.
We still have some time before even that can be accomplished, though. And once that tech gets here, we face the hugely increasing risk of bioterrorism, or just accidental catastrophes at the hands of some clumsy research assistant, before we have a chance to even properly prototype & test our perfect smart drug.
Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection—perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.
I’m starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don’t quote me on that. Or whatever, go ahead.
X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I’d judge against putting all our eggs in the AI basket.
I wonder how many Swedish readers there are. A meetup in Stockholm or Gothenburg would be kind of nice.
So you haven’t read his Sweet Dreams: Philosophical Obstacles to a Science of Consciousness?
I am trying to build a collaborative argumentation analysis platform. It sounds like we want almost exactly the same thing. Who are you working with? What is your detailed vision?
Please join our FB group at https://www.facebook.com/groups/arguable or contact me at branstrom at gmail.com.