I expect Sam to open up a new AI company.
Yeah… On one hand, I am excited about Sam and Greg hopefully trying more interesting things than just scaling Transformer LLMs, especially considering Sam’s answer to the last question on Nov. 1 at the Cambridge Union, 1:01:45 in https://www.youtube.com/watch?v=NjpNG0CJRMM, where he seems to think that more than Transformer-based LLMs are needed for AGI/ASI (in particular, he correctly says that “true AI” must be able to discover new physics, and he doubts LLMs are good enough for that).
On the other hand, I was hoping for a single clear leader in the AI race, and I thought that Ilya Sutskever was one of the best possible leaders for an AI safety project. And now Ilya on one side and Sam and Greg Brockman on the other are enemies, https://twitter.com/gdb/status/1725736242137182594, and if Sam and Greg do find a way to beat OpenAI, will they be sufficiently mindful of safety?
Hmmm. The way Sam behaves, I can’t see a path where he leads an AI company toward safety. The way I interpreted his world tour (22 countries?) talking about OpenAI and AI in general is that he was trying to occupy the mindspace of those countries. The CEO I wish OpenAI had is someone who stays at the office, ensuring that we are on track to safely steer arguably the most revolutionary tech ever created, not someone promoting the company or the tech. I think a world tour is unnecessary if one is doing AI development and deployment safely.
(But I could be wrong too. Well, let’s all see what’s going to happen next.)
Interesting how sharply people disagree...
It would be good to be able to attribute this disagreement to a particular part of the comment. Is it about me agreeing with Sam that “True AI” needs to be able to do novel physics? Or about me implicitly supporting the statement that LLMs would not be good enough (I am not really sure; I think LLMs would probably be able to create non-LLM-based AIs, so even if they are not good enough to achieve the level of “True AI” directly, they might be able to get there by creating differently-architected AIs)?
Or about having a single clear leader being good for safety? Or about Ilya being one of the best safety project leaders, based on the history of his thinking and his qualifications? Or about Sam and Greg having a fighting chance against OpenAI? Or about me being unsure whether they would be able to do adequate safety work at the level Ilya is likely to provide?
I am curious which of these seem to cause disagreement...
I did not press the disagreement button, but here is where I disagree:
Do you mean this in the sense that this would be particularly bad safety-wise, or in the sense that they are likely to just build huge LLMs like everyone else is doing, including even xAI?
I’m still figuring out Elon’s xAI.
But with regard to how Sam behaves: if he doesn’t improve his framing[1] of what AI could be for the future of humanity, I expect the same results.
(I think he frames it as him being the main person steering the tech, rather than an organisation or humanity steering it; at least, that’s how it feels to me from the way he behaves.)
They released a big LLM, the “Grok”. With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component).
Yeah… I thought he deferred to Ilya and to the new “superalignment team” Ilya has been co-leading safety-wise...
But perhaps he was not doing that consistently enough...
I haven’t played around with Grok so I’m not sure how capable or safe it is. But I hope Elon and his team of experts get the safety problem right, as he has created companies with extraordinary achievements. At least, Elon has demonstrated his aspirations to better humanity in other fields (internet/satellites, space exploration, and EVs), and I hope that translates to xAI and Twitter.
I felt differently about Ilya co-leading; to me it suggested that something was happening inside OpenAI. The fact that Ilya needed to co-lead the new safety direction felt like a sign that something was off internally. So maybe today’s announcement is related to that too.
Pretty sure there will be new info from OpenAI in the next week or two. I’m hoping it favors more safety-oriented directions in the long term.
I expect the safety of that to be at zero (they don’t think GPT-3.5-level LLMs are a problem in this sense; besides, they market it almost as an “anything goes, anti-censorship LLM”).
But that’s not really the issue; when a system starts being capable of writing code reasonably well, one starts having a real problem… I hope that when they get to that point, to approaching AIs which can create better AIs, they’ll start taking safety seriously… Otherwise, we’ll be in trouble...
I thought he was the appropriately competent person (he was probably the #1 AI scientist in the world). The right person for the most important task in the world...
And the “superalignment” team at OpenAI was… not very strong. The original official “superalignment” approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a, and it was obvious that his thinking was different from the previous OpenAI “superalignment” approach and much better (as in, “actually had a chance to succeed”)...
Of course, now that it looks like the “coup” has mostly been his doing, I am less sure that this is the leadership OpenAI and OpenAI safety need. The manner of it has certainly been too erratic. Safety efforts should not evoke the feeling of a “last-minute emergency”...
At least it refuses to give you instructions for making cocaine.
Well. If nothing else, the sass is refreshing after the sycophancy of all the other LLMs.
That’s good! So, at least a bit of safety fine-tuning is there...
Good to know...
Yeah, let’s see where they will steer Grok.
Yeah, I agree with your analysis of the superalignment agenda; I don’t think it’s a good use of the 20% of compute resources that they have. I even think a 20% allocation to AI safety doesn’t go deep enough into the problem, as I think a 100% allocation[1] is necessary.
I haven’t spent much time studying Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let’s see.
I think the safest AI will be the most profitable technology, as everyone will want to promote and build on top of it.