The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications further diminishes my confidence in his odds of success.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers’ opinions about AGI? Or that he hasn’t thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I’m not saying that Eliezer has given my two points no consideration. I’m saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way—I would be happy to be proven wrong.