Thanks! I’d already read the first link, and remember thinking that it needed to be argued better.
Mainly, I still think people conflate tools with agents in a box. It seems obvious (in principle) that you could build an AI that doesn’t do anything but Math/CS/Physics, and doesn’t even know humans exist.
I’m planning on writing up my disagreements more formally. But first, I’m waiting on getting a copy of Nick Bostrom’s new book, so that I can be certain that I’m critiquing the strongest arguments.
I hadn’t seen the second link. I’ll definitely have to give it a more thorough read-through later.
Your first point was discussed in detail here. Your second point was discussed in many places on LW, most recently here, I think.