In terms of suggestions for SIAI, I’d like to see SIAI folks write up their thinking on the controversial AI topics that SIAI has taken a stand on, such as this, this, and the likelihood of hard takeoff. When I talk to Eliezer, there’s a lot he seems to take for granted for which I haven’t seen any published explanation of his thinking. I get the impression he’s had a lot of unproductive AI discussions with people who aren’t terribly rational, but AI seems like an important enough topic that he should try to identify and prevent the failure modes those discussions fall into, and do a write-up of his thinking on the topic. (A good solution might be to discuss the topics on the internet only and rarely in real life—I rate internet discussion as more productive than meatspace discussion.) I always feel like I’m bothering Eliezer when I talk to him about this stuff, but it seems like the sort of stuff where, if SIAI gets it wrong, they could be wasting their time—which is why I’d like to make sure they’ve gotten it right before donating really substantial amounts.
Just FYI I’m available to answer any IRL questions.
But not virtual ones? ;-)
The answer to your question is dozens of pages long, and I’ve done a ton of writing on it already; I just don’t want to spread it around anywhere unless it’s part of a complete project. But if you talk to me in person I can share some of it and we can discuss it. I am writing a book, but I will need more funding to complete it.
Is this at all related to the Peter Platzer Popular Book Project?
Also, have you considered giving us a preview of at least some of your ideas in blog post form so we can see arguments and counterarguments hashed out?
Updating SIAI’s website, at least, wouldn’t hurt.
No, I haven’t; releasing them prematurely would ruin their potential impact.
What on the website did you want updated?