They weren’t the only non-employee board members, though—that’s what I meant by the part about not being concerned about safety: I took it to rule out both Toner and McCauley.
(Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is _less_ likely to be out of the loop on GPT-4’s capabilities.)
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still have been on the board at this point, but neither he nor D’Angelo would be described or recommended by Labenz’s acquaintances as researching AI safety; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety, but Superalignment was a year away); so by process of elimination, over 2023, the only board members he could plausibly have been contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does.
(The description of them not having used the model unfortunately does not distinguish either one—none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one—I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just ‘tools’, mere ‘stochastic parrots’, with no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, ‘sampling can show the presence of knowledge but not the absence’, so if you don’t think there’s anything interesting there, your prompts are a mirror which reflects only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath—but it doesn’t. This is why descriptions of Sydney or GPT-4-base are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise being more pro-safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said.
“And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.””
(https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/?utm_campaign=podcast__nathan-labenz&utm_source=80000+Hours+Podcast&utm_medium=podcast#excerpt-from-the-cognitive-revolution-nathans-narrative-001513)
I was not referring to the podcast (which I haven’t actually read yet, because from the intro it seems wildly out of date and from a long time ago), but to Labenz’s original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: “they’re committed” could just as easily refer to the “serious people on the board” who have “been chosen” for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
I consulted with a few friends in AI safety research…The Board, everyone agreed, included multiple serious people who were committed to safe development of AI and would definitely hear me out, look into the state of safety practice at the company, and take action as needed.
What happened next shocked me. The Board member I spoke to was largely in the dark about GPT-4. They had seen a demo and had heard that it was strong, but had not used it personally. They said they were confident they could get access if they wanted to. I couldn’t believe it. I got access via a “Customer Preview” 2+ months ago, and you as a Board member haven’t even tried it‽ This thing is human-level, for crying out loud (though not human-like!).
This quote doesn’t say anything about the board member(s) being people who are researching AI safety, though—it’s Nathan’s friends who are in AI safety research, not the board members.
I agree that, based on this quote, it could very well have been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it was Tasha he spoke to and the “multiple people” were just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean, maybe that is closer to what happened and it’s just being obfuscated; either way it’s confusing to me tbh.
Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the Substack post, and the podcast were all published the same day—Nov 22nd, 2023. The link I provided is just 80k excerpting the original podcast.