In response to Eliezer’s comment on Video #5 that smart people should be working on AI, not String Theory.
I tend to agree, as those are fields that are unlikely to give us any new technologies that will make the world a safer place… and
any work that speeds the arrival of AI will also speed the solution of problems in sciences such as String Theory, since a recursively improving intelligence will aid discovery far more rapidly than the addition of five or ten really smart people to the field would.
Shouldn’t we hedge our bets a little? I don’t know what the probability is that the Singularity Institute succeeds in building an FAI in time to prevent any existential disasters that would otherwise occur, but it isn’t 1. Any work done to reduce existential risk in the meantime (and in possible futures where no Friendly AI exists) seems to me worthwhile.
Am I wrong?