For the record, the reason I didn’t speak up was less “MIRI would have been crushed” and more “I had some hope”.
I had in fact had a convo with Elon and one or two convos with Sam while they were kicking the OpenAI idea around (and where I made various suggestions that they ultimately didn’t take). There were in fact internal forces at OpenAI trying to cause it to be a force for good—forces that ultimately led them to write their 2018 charter, so, forces that were not entirely fictitious. At the launch date, I didn’t know to what degree those internal forces would succeed, and I didn’t want to be openly publicly hostile in a way that might undermine those efforts.
To be clear, my mainline guess was that OpenAI was going to be a force for ill, and I now think that my post on the topic was a mistake, and I now think it would have been significantly better for me to just bluntly say that I thought this was a bad development (barring some turnaround). (I also think that I was optimistically overestimating the potential of the internal forces for trying to make the whole operation net-good, in a way that probably wouldn’t have withstood careful consideration—consideration that I didn’t give.) But the intent in my communication was to extend an olive branch and leave room for the forces of change to produce such a turnaround, not to avoid retribution.
(And, to be explicit: I consider myself to have been taught a lesson about how it’s pretty important to just straightforwardly speak your mind, and I’ve been trying to do that since, and I think I’d do better next time, and I appreciate the feedback that helped me learn that lesson.)
I can confirm that Nate is not backdating memories—he and Eliezer were pretty clear within MIRI at the time that they thought Sam and Elon were making a tremendous mistake and that they were trying to figure out how to use MIRI’s small influence within a worsened strategic landscape.