Wow. Marc Andreessen says he had meetings in DC where he was told to stop funding AI startups because the field was going to be closed up in a similar way to defence tech: a small number of organisations with close government ties. He said to them, ‘you can’t restrict access to math, it’s already out there’, and he says they said “during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn’t proceed, and if we decide we need to, we’re going to do the same thing to the math underneath AI”.
So, 1: This confirms my suspicion that OpenAI leadership have also been told this. If they’re telling Andreessen, they will have told Altman.
And for me that makes a lot of sense of OpenAI’s behavior: the de-emphasizing of the realities of getting to human-level AI, the closing of the dialog, the comically long timelines, the shrugging off of responsibilities, and a number of leaders giving up and moving on. There are a whole lot of obvious reasons they wouldn’t want to tell the public that this is a thing, and I’d agree with some of those reasons.
2: Vanishing areas of physics? A Perplexity search suggests that may be referring to nuclear science, radar, lasers, and some semiconductors. But they said “entire areas of physics”. Does any of that sound like entire areas of physics? To me that phrase is strongly reminiscent of certain stories I’ve heard (possibly overexcited ones): physics that, let’s say, could be used to make much faster missiles, missiles so fast that it’s not obvious they could be intercepted even by missiles of the same kind. A technology that we’d prefer to consign to secrecy rather than use it and then later have to defend against it once our adversaries develop their own. A black ball. If it is that, if that secret exists, that’s very interesting for many reasons, primarily due to the success of the secrecy and the extent to which it could very conceivably stay secret basically forever. And that makes me wonder about what might happen with some other things.
This basically sounds like there are people in DC who listen to the AI safety community and told Andreessen that they plan to follow at least some of the AI safety folks’ demands.
OpenAI likely lobbied for it.
The military people who know that some physics was classified likely don’t know exactly which physics was classified. While I would like more information, I would not take this as evidence for much.
He also said interpretability has been solved, so he’s not the most calibrated when it comes to truthseeking. Similarly, his story here could be wildly exaggerated and not the full truth.
I’m sure his account is running through a lot of interpretation, but it has to be: he’s dealing with people who don’t know, or aren’t open about (unclear which), the consequences of their own policies.
According to Wikipedia, the Biefeld–Brown effect was just ion wind: https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind
I’m not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it’ll probably turn out to be more of the same.