As a general point, these notes should not be used to infer anything about what Sam Altman thought was important enough to talk a lot about, or what his general tone/attitude was. This is because:
The notes are filtered through what the note-takers thought was important. There’s a lot of stuff that’s missing.
What Sam spoke about was mostly a function of what he was asked about (it was a Q&A after all). If you were there live you could maybe get some idea of how he was inclined to interpret questions, what he said in response to more open questions, etc. But here, the information about what questions were asked is entirely missing.
General attitude/tone is almost completely destroyed by the compression of answers into notes.
For example, IIRC, the remark about GPT being empathic was in response to some question like “How can we make AI empathic?” (i.e., it was not his own idea to bring up empathy). The actual answer was much longer than the summary in the notes (and so came across as less dismissive). And directionally, it is already the case that GPT-3 will act more empathic if you tell it to do so.
Did he really speak that little about AI Alignment/Safety? Does anyone have additional recollections on this topic?
He did make some general claims: that it was one of his top few concerns, that he felt OpenAI had done some promising alignment work over the last year, that it was still an important goal for OpenAI’s safety work to catch up with its capabilities work, that it was good for more people to go into safety work, etc. Not very many specifics, as far as I can remember.