But that’s not really the issue; once a system becomes capable of writing code reasonably well, that’s when the real problem starts… I hope that when they get to that point, to approaching AIs which can create better AIs, they’ll start taking safety seriously… Otherwise, we’ll be in trouble...
Yeah, let’s see where they will steer Grok.
And the “superalignment” team at OpenAI was… not very strong. The original official “superalignment” approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a, and it was obvious that his thinking was different from the previous OpenAI “superalignment” approach and much better (as in, “actually had a chance to succeed”)...
Yeah, I agree with your analysis of the superalignment agenda; I think it’s not a good use of the 20% of compute resources that they have. I’d go further: allocating 20% to AI safety doesn’t go deep enough into the problem, as I think a 100% allocation[1] is necessary.
I haven’t had much time to study Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let’s see.
I think the safest AI will be the most profitable technology, as everyone will want to promote it and build on top of it.