It’s worth pointing out that Eliezer’s views on the relative hopelessness of the situation do not reflect those of the rest of the field. Nearly everyone else outside of MIRI is more optimistic than he is (though that is of course no guarantee he is wrong).
As an interested observer who has followed the field from a distance for about 6 years at this point, I don’t think there has ever been a more interesting time with more things going on than now. When I talk to some of my friends who work in the field, many of their agendas sound kind of obvious to me, which is IMO an indication that there’s a lot of low-hanging fruit in the field. I don’t think you have to be a supergenius to make progress (unless perhaps you’re working on agent foundations).
• The probability of doom given the development of AGI, + the probability of solving aging given AGI, nearly equals 1.
I’m not sure I understand what this means. Do you mean “and” instead of “+”? Otherwise this statement is a little vague.
If you consider solving aging a high priority and are concerned that delaying AI might delay such a solution, here are a few things to consider:
Probably over a hundred billion people have died building the civilization we live in today. It would be pretty disrespectful to their legacy if we threw all that away at the last minute just because we couldn’t wait 20 more years to build a machine god we could actually control. Not to mention all the people who will live in the future if we get this thing right. In the grand scheme of the cosmos, one or two generations is nothing.
If you care deeply about this, you might consider working on cryonics both to make it cheaper for everyone and to increase the odds of personality and memory recovery following the revival process.
I live in Scandinavia and see no major political movements addressing these issues (except maybe EA dk?). I’m eager to make an impact but feel unsure how to do so effectively without dedicating my entire life to AI risk.
One potential answer here is “earn to give”. If you have a chance to enter a lucrative career, you can use your earnings from that career to help fund work done by others.
If that’s not an option or doesn’t sound like something you’d enjoy, perhaps you could move? There are programs like SERI MATS you could attempt to enroll in if you’re a newcomer to the field of AI safety but have a relevant background in math or computer science (or are willing to teach yourself before the program begins).
Thanks for the advice @GeneSmith!
Regarding the ‘probability assertions’ I made, the following (probably) sums it up best:
P(solving aging ∩ doomᶜ ∣ AGI) + P(doom ∣ AGI) ≈ 1.
I understand the ethical qualms. The point I was trying to make was more along the lines of ‘if I can affect the system in a positive direction, could this maximise my/humanity’s mean utility function?’. I acknowledge this is a weird way to put it (since it assumes a utility function for myself/humanity), but I’d hoped it would provide insight into my thought process.
Note: in the post I didn’t specify the ∩ doomᶜ part. I’d hoped it was implicit, as I don’t care much for the scenario where aging is solved and AI enacts doom right afterwards. I’m aware this is still an incomplete model (and is quite non-rigorous).
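To make the assertion concrete, here is a minimal numerical sketch in Python. The value of P(doom ∣ AGI) is entirely made up for illustration; it is not an estimate from this thread. The sketch just shows what the relation implies: conditional on AGI, almost all of the “no doom” probability mass is assumed to include solving aging.

```python
# Illustrative only: a made-up value, not an estimate from the discussion.
p_doom_given_agi = 0.3  # hypothetical P(doom | AGI)

# The asserted relation: P(solving aging ∩ doomᶜ | AGI) + P(doom | AGI) ≈ 1.
# Under it, the "solved aging and no doom" probability is the complement:
p_aging_and_no_doom = 1 - p_doom_given_agi

# The leftover mass is the neglected scenario: no doom, but aging unsolved.
# The assertion says this is negligible.
p_neither = 1 - p_doom_given_agi - p_aging_and_no_doom

print(p_aging_and_no_doom)  # → 0.7
print(p_neither)            # → 0.0
```

In other words, the model treats “aligned AGI but no cure for aging” as a probability-zero outcome, which is exactly the simplification the note above flags as non-rigorous.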
Again, I appreciate the response and the advice;)