I had a look at your Pattern Survival Agrees with Universal Darwinism as well.

It finishes with some fighting talk:

A promise of friendly yet superior AI in the long-term is therefore snake-oil.

...and contains this:

I am all for AGI… but not religiously
Build all-powerful friendly superintelligent AGI
It will take care of our needs!
It will make us happy!
It will give us mind uploading!
Religious AGI—all religious transhumanism—diverts valuable thought and resources.

Very briefly: to my eyes, the scene here looks as though neuroscience is fighting a complicated and difficult-to-understand foe—one which has been identified as the enemy, but which is difficult to know how to attack.

For me, this document didn’t make its case. At the end, I was no more convinced that the designated “bad” approach was bad—or that the designated “good” approach was good—than I was when I started.

It is kind of interesting to see FUD being directed at the self-proclaimed “friendly” folk, though. Usually they are the ones dishing it out.