Totally unedited. Please give feedback. If it’s good, I can spend a couple more hours on it. If you’re not going to use it, please don’t tell me it’s good, because I have lots of other work to do.
The connection between AI and rationality could be made stronger.
Indeed, that’s been my impression for a little while. I’m unconvinced that AI is the #1 existential risk. The set of problems stemming from the fact that all known life resides in a single biosphere (radical climate change, asteroid collisions, engineered pathogens) seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks (and maybe even on pathogen research risks!) than on AI risks.
But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.
I shall not complain. :)
OK, here’s my crack: http://techhouse.org/~lincoln/singinst-copy.txt
It’s good enough that if we use it, we will do the editing. Thanks!