Reading through these notes I was alarmed by how much they misrepresented what Sam Altman said. I feel bad for the guy that he came on and so thoughtfully answered a ton of questions and then it gets posted online as “Sam Altman claims _____!”
An example:
GPT-5 might be able to pass the Turing test. But probably not worth the effort.
A question was asked about how far out he thought we were from being able to pass the Turing Test. Sam thought that this was technically feasible in the near term but would take a lot of effort that was better spent elsewhere, so they were quite unlikely to work on it. So “GPT-5 might be able to pass the Turing test.” is technically true because “might” makes the whole sentence almost meaningless, but to the extent that it does have meaning, that meaning is giving you directionally false information.
I didn’t take notes, and I don’t want to try to correct the record from memory and plausibly make things worse. But just, take these all with a huge grain of salt. There’s a lot where these notes say “X” but what I remember him saying was along the lines of “that’s a good question, it’s tricky, I’m currently leaning towards X over Y”. And some things that are flat wrong.
Nowhere did I write “Sam Altman claims … !”
Instead I wrote: “These notes are not verbatim [...] While note-taking I also tended to miss parts of further answers, so this is far from complete and might also not be 100% accurate. Corrections welcome.”
Talk about badly misrepresenting …
I fail to see how “A question was asked about how far out he thought we were from being able to pass the Turing Test. Sam thought that this was technically feasible in the near term but would take a lot of effort that was better spent elsewhere, so they were quite unlikely to work on it.” is misrepresented by “GPT-5 might be able to pass the Turing test. But probably not worth the effort.”
Sam Altman literally said that passing the Turing test might be feasible with GPT-5, but not worth the effort. Where is the “directionally false information”? To me, your longer version is pretty much exactly what I expressed here.
My notes are 30+ bullet-point-style sentences that catch the gist of what Sam Altman said in answers that were often several minutes long. But this is the “misrepresentation” you want to give as an example? Seriously?
If some things are flat out wrong, say which and offer a more precise version.
You’ve improved the summary, thank you.
The main issue is still missing context. For example, if someone asks “is x possible” and he answers that it is, summarizing that as “x is possible” is misleading. Simply because there is a difference between calling out a thing unprompted, and answering a question about it. The former is what I meant by “Sam claims”.
His answer about the Turing test was that they were planning not to do it, though if they tried, they thought they could build that with a lot of effort. You summarized it as GPT-5 might be able to pass it. I don’t know what else to say about that; they seem pretty different to me.
Other people have mentioned some wrong things.
For example, if someone asks “is x possible” and he answers that it is, summarizing that as “x is possible” is misleading. Simply because there is a difference between calling out a thing unprompted, and answering a question about it. The former is what I meant by “Sam claims”.
At the very top it says “Q&A”, i.e. all of these are answers to questions, none are Sam Altman shouting from the rooftop.
His answer about the Turing test was that they were planning not to do it, though if they tried, they thought they could build that with a lot of effort. You summarized it as GPT-5 might be able to pass it. I don’t know what else to say about that; they seem pretty different to me.
I did not summarize it as “GPT-5 might be able to pass it”. I said “GPT-5 might be able to pass it. But probably not worth the effort.” Which to my mind clearly shows that a) there would be an engineering effort involved, b) this would be a big effort, and c) therefore they are not going to do it. He specifically mentioned GPT-5 as a model where this might become feasible.
Also: In one breath you complain that in Sam Altman’s answers there was a lot of hedging that is missing here, and in the next you say “‘might’ makes the whole sentence almost meaningless”. Like, what do you want? I can’t simultaneously hedge more and less.
Yeah, it’s a bit of a blind men/elephant thing. Like the Turing test thing was all of those, because he said something along the lines of “we don’t want to aim for passing the Turing test (because that’s pointless/useless and OA can only do a few things at a time) but we could if we put a few years into it, and a hypothetical GPT-5* alone could probably do it”. All 3 claims (“we could solve Turing test”, “a GPT-5 would probably solve Turing”, “we don’t plan to solve Turing”) are true and logically connected, but different people will be interested in different parts.
* undefined but presumably like a GPT-4 or GPT-3 in being another 2 OOM or so beyond the previous GPT
I didn’t take notes, and I don’t want to try to correct the record from memory and plausibly make things worse.
Maybe some attendees could make a private Google Doc that tries to be a little more precise about the original claims/context/vibe, then share it after enough attendees have glanced over it to make you confident in the summary?
I don’t expect this would be a huge night-and-day difference from the OP, but it may matter a fair bit for a few of the points. And part of what bothers me right now is that I don’t know which parts to be more skeptical of.