If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971?
In 1890, the most important thing to do would still be FAI research. The best-case scenario is that we invent the math for FAI before the first vacuum tube, let alone the first microchip. Existential risk reduction is the single highest-utility thing around. Sure, trying to ensure that nukes are never made, or are made only by someone capable of creating an effective singleton, is important, but FAI is far more so.
Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?
He would then spend the rest of his life ensuring that it is preserved. If necessary he would go around hunting for obscure caves with a chisel in hand. Depending, of course, on how much he cares about influencing the future of the universe as opposed to other less abstract goals.
Yes, who today cares what any Greek mathematician had to say...
Now you’re just moving the goal posts.
Sorry. :(
Anyway, I have much more confidence that Eliezer and future generations of Friendly AI researchers will succeed in making sure that nobody turns on an AGI that isn’t Friendly than in Eliezer and his disciples solving both the AGI and Friendly AI problems in his own lifetime. Friendly AI is a problem that needs to be solved in the future, but, barring something like a Peak Oil-induced collapse of civilization to pre-1920 levels, the future will be a lot better at solving these problems than the present is—and we can leave it to them to worry about. After all, the present is certainly better positioned to solve problems like epidemic disease and global warming than the past was.
Would you consider SENS a viable alternative to SIAI? Or do you think ending aging is also impossible/something to be put off?
Actually, I would; I’ve donated a small amount of money already. Investing in anti-aging research won’t pay off for at least thirty years—that’s the turnaround time of medical research from breakthrough to usable treatment—but it’s a lot less of a pie-in-the-sky concern. (Although as long as people are dying for want of $1,000 TB medication, it still might be more cost-effective to save those lives than to extend the lives of relatively rich people in developed countries.)
My guess is that SENS is more cost-effective, but I haven’t run the numbers. Does anyone have access to those sorts of figures?
Ballparking:
$1000 buys you 45 extra person-years.
$10 billion buys you 30 extra person-years for a billion people.
Of course that depends on how much you agree with the figures given by de Grey.
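Taking those two ballpark figures at face value (they are rough thread estimates, not vetted data), a quick per-person-year comparison looks like this:

```python
# Sanity check of the ballpark figures above. Both inputs are the thread's
# rough estimates: $1,000 of TB treatment buying 45 person-years, and a
# $10 billion SENS-style program buying 30 person-years for a billion people.

tb_cost_per_year = 1_000 / 45            # dollars per person-year (TB treatment)
sens_cost_per_year = 10e9 / (30 * 1e9)   # dollars per person-year (SENS estimate)

print(f"TB treatment: ${tb_cost_per_year:.2f} per person-year")
print(f"SENS estimate: ${sens_cost_per_year:.2f} per person-year")
print(f"SENS is ~{tb_cost_per_year / sens_cost_per_year:.0f}x cheaper per person-year")
```

By these numbers SENS comes out roughly 67 times more cost-effective per person-year, but the conclusion is only as good as de Grey’s inputs.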
I don’t think he is, if the point is to establish that “lack of FAI could at some point lead to Earth’s destruction” isn’t an unconditionally applicable argument.