Besides, can you think of any technology whose development people foresaw, and for which specialists managed to successfully plan a governing framework before it was implemented?
That’s part of the reason why Eliezer Yudkowsky thinks we’re doomed and Robin Hanson thinks we shouldn’t try to do much now. The difference between the two is take-off speed: for EY, we either solve alignment before the arrival of superintelligence (which is unlikely) or we are doomed; RH thinks we have time to work out alignment as superintelligence arrives.
Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism before evidence of AI severely acting against humanity’s best interest pops up.
Are you able to steelman the argument that AI poses an existential risk to humanity?