GiveWell is a pretty small organization, and they haven’t yet devoted any resources to evaluating research-based charities—they’re looking for charities that can prove that they’re providing benefits today, and lots of research ends up leading nowhere. How many increments of $1,000 (the amount it takes to cure an otherwise fatal case of tuberculosis) have been spent on medical research that amounted to nothing?
For the record, I agree that SIAI is doing important work that must be done someday, but I don’t expect to see AGI in my lifetime; there’s no particular urgency involved. If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971? I’d tell them that the first thing they need to do is to go “discover” that DDT (first synthesized in 1874) kills insects and show the world how it can be used to kill disease vectors such as mosquitoes; DDT is probably the single man-made chemical that, to date, has saved more human lives than any other.
In 1890, the most important thing to do would still be FAI research. The best-case scenario is one in which the math for FAI is worked out before the first vacuum tube, let alone the first microchip. Existential risk reduction is the single highest-utility thing around. Sure, trying to ensure that nukes are never made, or are made only by someone capable of creating an effective singleton, is important, but FAI is far more so.
Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?
He would then spend the rest of his life ensuring that it is preserved. If necessary he would go around hunting for obscure caves with a chisel in hand. Depending, of course, on how much he cares about influencing the future of the universe as opposed to other less abstract goals.
Yes, who today cares what any Greek mathematician had to say...
Now you’re just moving the goal posts.
Sorry. :(
Anyway, I have much more confidence that Eliezer and future generations of Friendly AI researchers will succeed in making sure that nobody turns on an AGI that isn’t Friendly than that Eliezer and his disciples will solve both the AGI and Friendly AI problems within his own lifetime. Friendly AI is a problem that needs to be solved in the future, but, barring something like a Peak Oil-induced collapse of civilization to pre-1920 levels, the future will be a lot better at solving these problems than the present is—and we can leave it to them to worry about. After all, the present is certainly better positioned to solve problems like epidemic disease and global warming than the past was.
Would you consider SENS a viable alternative to SIAI? Or do you think ending aging is also impossible/something to be put off?
Actually, I would; I’ve donated a small amount of money already. Investing in anti-aging research won’t pay off for at least thirty years—that’s the turnaround time of medical research from breakthrough to usable treatment—but it’s a lot less of a pie-in-the-sky concern. (Although as long as people are dying for want of $1,000 TB medication, it still might be more cost effective to save those lives than to extend the lives of relatively rich people in developed countries.)
My guess is that SENS is more cost effective, but I haven’t done the calculating. Does anyone have access to those sorts of figures?
Ballparking it:
$1,000 buys you 45 extra person-years.
$10 billion buys you 30 extra person-years for a billion people.
Of course that depends on how much you agree with the figures given by de Grey.
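For what it’s worth, here’s a minimal back-of-envelope sketch of what those ballpark figures imply, taking the numbers above at face value (the $1,000 TB cure, the 45 person-years per cure, and de Grey’s $10 billion / 30 years / billion people estimate are the only inputs; everything else is just division):

```python
# Back-of-envelope comparison using only the ballpark numbers quoted above.
# These inputs are the parent comments' assumptions, not vetted data.

tb_cost = 1_000              # dollars to cure one otherwise fatal TB case
tb_years_gained = 45         # extra person-years per cure (ballpark above)

sens_cost = 10e9             # dollars (de Grey's ballpark for SENS)
sens_years_per_person = 30   # extra person-years per beneficiary
sens_people = 1e9            # number of beneficiaries

tb_cost_per_year = tb_cost / tb_years_gained                             # ~ $22
sens_cost_per_year = sens_cost / (sens_years_per_person * sens_people)   # ~ $0.33

print(f"TB cure:        ~${tb_cost_per_year:.2f} per person-year")
print(f"SENS (de Grey): ~${sens_cost_per_year:.2f} per person-year")
```

On those inputs SENS comes out well over an order of magnitude cheaper per person-year, but, as noted, that hinges entirely on accepting de Grey’s figures.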
I don’t think he is, if the point is to establish that “lack of FAI could at some point lead to Earth’s destruction” isn’t an unconditionally applicable argument.
That’s an easy prediction for you to make. ;)
Well, I don’t expect that my brother will see AGI in his lifetime, either.
I am curious: are you very old or suffering from a fatal disease? I am 25 and healthy, so “lifetime” probably means something different to me…
It’s an ironic remark about my depression. I’m 27 and physically healthy.
That would make my winking outright cruel! No, I’m referring to the general problem of betting against the success of the person with whom you are making the bet. In CronoDAS’s case the threshold for a self-sabotage outcome is somewhat reduced by his expressed suicidal inclinations.
See my reply to Lucas.
Edit: Also, I’m sympathetic to your skepticism re: SIAI as the best charity.