A language model (LM) is a great example, because it is missing several features an AI would need in order to be dangerous. (1) It is trained to perform a narrow task (predict the next word in a sequence), for which it has zero “agency”, or decision-making authority. A human would have to connect a language model to some other piece of software (e.g. a web-hosted chatbot) to make it dangerous. (2) It cannot control its own inputs (e.g. browsing the web for more data) or outputs (e.g. writing e-mails with generated text). (3) It has no long-term memory, and thus cannot plan or strategize in any way. (4) It runs a fixed-function data pipeline, with no way to alter its own programming or even expand its computational footprint.
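To make (1)–(4) concrete, here is a minimal sketch (in Python, with a hypothetical `predict_next_token` stub standing in for the trained model, not any real library’s API). The point is that a bare LM is a stateless text-to-text function: the human-written wrapper, not the model, decides what goes in and where the output goes.

```python
# Minimal sketch: a bare LM is a stateless function from a token sequence
# to the next token. Everything else -- where input comes from, where output
# goes, whether anything is remembered between calls -- is wrapper code.

def predict_next_token(tokens: list[str]) -> str:
    """Hypothetical stand-in for a trained model's forward pass."""
    return "<next-token>"  # a real model returns its most likely continuation

def generate(prompt: str, max_tokens: int = 20) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        tokens.append(predict_next_token(tokens))  # fixed-function pipeline: text in, text out
    # No browsing, no e-mail, no memory of previous calls, no self-modification:
    # the model only ever sees the tokens this wrapper chooses to pass in.
    return " ".join(tokens)
```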
I feel fairly confident that, no matter how powerful, current LMs cannot “go rogue” because of these limitations. However, there is no technical obstacle preventing an AI research lab from removing these limitations, and there are many incentives to do so. Chatbots are an obvious money-making application of LMs. Allowing an LM to look up data on its own to improve itself (or even just to answer user questions in a chatbot) is an obvious way to make a better LM. Researchers are currently equipping LMs with long-term memory (I am a co-author on this work). AutoML is an entire sub-field of AI research devoted to equipping models with the ability to change and grow over time.
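And here is an equally minimal sketch of what removing those limitations looks like: the same stateless model (reusing the `generate` stub from the sketch above), wrapped in a loop that hands it tools and memory. The tool names and the “SEARCH:”/“EMAIL:” action format are illustrative assumptions, not any real system’s API.

```python
# The same fixed-function LM, wrapped so that it gains the missing features.

def web_search(query: str) -> str:
    """Hypothetical tool: in a real system this would fetch live data."""
    return f"<results for {query!r}>"

def send_email(body: str) -> str:
    """Hypothetical tool: in a real system this would send generated text outward."""
    return f"<sent: {body!r}>"

memory: list[str] = []  # (3) state that persists across calls

def agent_step(goal: str) -> str:
    prompt = "\n".join(memory + [goal])        # past observations are fed back in
    output = generate(prompt)                  # same stateless LM as before
    if output.startswith("SEARCH:"):           # (2) the model now chooses its own inputs...
        observation = web_search(output[len("SEARCH:"):])
    elif output.startswith("EMAIL:"):          # ...and its own outputs
        observation = send_email(output[len("EMAIL:"):])
    else:
        observation = output
    memory.append(observation)                 # (3) and accumulates long-term memory
    return observation                         # (1) run this in a loop and you have an agent
```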
The term you’re looking for is “intelligent agent”, and the answer to your question, “why don’t we just not build these things?”, is essentially the same as the answer to “why don’t we stop research into AI?” How do you propose to stop the research?