Feel free to disagree vociferously, but if Elon launches his own LLM, I am not nearly as worried as if he launches "SpaceX for AutoGPT." My view is that LLMs, while potentially dangerous on their own, are not nearly as intrinsically dangerous as LLMs that are hooked up to robotics and the internet and given liberty to act autonomously. I agree with Zvi that the people claiming the LLM is the fundamental bottleneck to better AutoGPT performance are calling it way too early. Put a billion dollars and the best engineers in the world behind enhancing AutoGPT capabilities, and Things Will Happen. Making destructive stuff will be the easy part.