Would you like to make a case for why you believe, say, DeepMind would not produce an AI that poses an x-risk, but a smaller lab would? It's not intuitive to me why this would be the default case. Is it because we expect smaller labs to have fewer or no guardrails in place?