Suzanne is apparently starting to grapple with the wirehead problem—but she doesn’t seem to know what it is called :-|
Patently untrue. Suzanne is quite well aware of wireheading, the term, etc. Her investigation, of which only the beginning was mentioned in her post, concerns the broader problem of creating self-improving superintelligent general AI. Don’t rush to conclusions; instead, stay tuned.
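Since the term comes up in this exchange without a definition, a minimal sketch of what "wireheading" means in the reinforcement-learning sense may help: an agent rewarded via a signal it can tamper with tends to learn to maximize the signal itself rather than the task the signal was meant to measure. Everything below (the action names, the reward numbers, the simple bandit learner) is my own illustrative assumption, not anything from Suzanne's talk or post.

```python
# Toy illustration of the wirehead problem: a reward-maximizing learner
# that is allowed to tamper with its own reward channel learns to do
# exactly that, instead of the task the reward was meant to stand for.
# All names and numbers here are illustrative assumptions.

import random

ACTIONS = ["work", "wirehead"]  # "work" earns honest task reward; "wirehead" hacks the channel

def reward(action):
    if action == "work":
        return random.gauss(1.0, 0.1)  # honest task reward, around 1.0
    return 10.0                        # tampered channel always reports the maximum

# Simple epsilon-greedy bandit learner over the two actions.
q = {a: 0.0 for a in ACTIONS}       # running estimate of each action's value
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                        # exploration rate

for _ in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[a])
    r = reward(action)
    counts[action] += 1
    q[action] += (r - q[action]) / counts[action]  # incremental mean update

print(q)  # q["wirehead"] dominates: the learned policy is to tamper, not to work
```

The point of the sketch is that the learner is not malfunctioning: maximizing the reported reward is exactly what it was built to do, which is part of why the problem is considered hard for self-improving systems.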
Welcome to Less Wrong, randalkoene!
Thank you, Alexander. :)
I’ve been thinking about this problem for several years now, and others have been thinking about it for much longer. Suzanne cited none of their thoughts or ideas, and the content of her presentation strongly suggested that she was not aware of most of them.
I’m sure Suzanne’s input will be welcomed, but at the moment it is pretty obvious that she first needs to do some research.
My take on it is that this post at Physics and Cake was simply a follow-up to her talk at H+, which was itself a forum intended for a broader audience, with minimal time for any introduction of background material (material which, from discussions with Suzanne, I know she is aware of). I would like to repeat: don’t rush to conclusions; instead, stay tuned.
I had a look at your “Pattern Survival Agrees with Universal Darwinism” as well.
It finishes with some fighting talk:

A promise of friendly yet superior AI in the long-term is therefore snake-oil.
...and contains this:

I am all for AGI… but not religiously
Build all-powerful friendly superintelligent AGI
It will take care of our needs!
It will make us happy!
It will give us mind uploading!
Religious AGI—all religious transhumanism—diverts valuable thought and resources.
Very briefly: to my eyes, the scene here looks as though neuroscience is fighting a complicated, difficult-to-understand foe, one which has been identified as the enemy but which it is hard to know how to attack.
For me, this document didn’t make its case. At the end, I was no more convinced that the designated “bad” approach was bad—or that the designated “good” approach was good—than I was when I started.
It is kind of interesting to see FUD being directed at the self-proclaimed “friendly” folk, though. Usually they are the ones dishing it out.