“here is how to make LLMs more capable but less humanlike, it will be adopted because it makes LLMs more capable”.
Thankfully, this is a class of problems that humanity has experience dealing with. The solution boils down to regulating out of existence all the ways of making LLMs less human-like.
You mean, “ban superintelligence”? Because superintelligences are not human-like.
That’s the problem with your proposal of an “ethics module”. Suppose we have a system consisting of an “ethics module” and a “nanotech design module”, and the nanotech design module outputs a 3D model of some supramolecular unholy abomination. What exactly should the ethics module do to ensure that this abomination doesn’t kill everyone? Tell the nanotech module “pls don’t kill people”? You are going to have a hard time translating that into the nanotech designer’s internal language. Make the ethics module smart enough to analyse the behavior of complex molecular structures across a wide range of environments? Then you have all the problems of aligning a superintelligence.
> You mean, “ban superintelligence”? Because superintelligences are not human-like.
The kind of superintelligence that doesn’t possess the human-likeness that we want it to possess.
> That’s the problem with your proposal of an “ethics module”. Suppose we have a system consisting of an “ethics module” and a “nanotech design module”, and the nanotech design module outputs a 3D model of some supramolecular unholy abomination. What exactly should the ethics module do to ensure that this abomination doesn’t kill everyone?
The nanotech design module has to be evaluatable by the ethics module. For that, it too has to be built from multiple sequential LLM calls operating in explicit natural language. Other types of modules should be banned.
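A minimal sketch of the architecture being proposed here, under stated assumptions: both modules are stub functions standing in for real LLM calls, and every name (`design_step`, `ethics_check`, `run_pipeline`) is illustrative, not from any real library. The point is only the shape of the pipeline: every intermediate product is plain natural-language text, so the ethics module can inspect each step before the next one runs.

```python
# Sketch: a "nanotech design module" as a chain of sequential LLM calls whose
# intermediate states are natural-language text, with an "ethics module"
# vetting every step. All functions are hypothetical stubs for real LLM calls.

from typing import Callable, List


def design_step(state: str) -> str:
    """Stub for one LLM call in the design chain (hypothetical)."""
    return state + " -> refined design (described in natural language)"


def ethics_check(text: str) -> bool:
    """Stub ethics module: approves a step only if its natural-language
    description passes review. Here, a trivial keyword filter stands in
    for a real LLM-based evaluator."""
    return "kill" not in text.lower()


def run_pipeline(task: str, steps: List[Callable[[str], str]]) -> str:
    """Run the design steps in sequence, halting the moment any
    intermediate natural-language state fails the ethics check."""
    state = task
    for step in steps:
        state = step(state)
        if not ethics_check(state):
            raise RuntimeError(f"Ethics module vetoed step output: {state!r}")
    return state


result = run_pipeline("design a benign molecular assembler",
                      [design_step, design_step])
print(result)
```

The design choice this illustrates: because every module's state is legible text rather than an opaque internal representation, the ethics module evaluates the same artifact the design module produces, with no translation problem.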