If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.
I thought this was interesting. Wouldn’t an AI solving problems in biology pick up Darwinian habits and be just as dangerous as one trained on text? Why is training on text from the internet necessarily more dangerous? Also, what would “complicating the issue” look like in this context? If, for example, an AI modeling brain cells showed signs of autonomy and/or the ability to multiply in virtual space, would that be a complication? Or a breakthrough?
The other legal proscriptions mentioned also have interesting implications. Prohibiting large GPU clusters or shutting down large training runs might have the unintended consequence of spurring faster innovation, as developers are forced to find ways around legal hurdles.
The question of whether legal prohibitions are even effective in this arena has also been raised. Perhaps we should instead place stricter controls on the raw materials that go into chips, circuit boards, semiconductors, etc.
I think that Eliezer meant biological problems like “given data about various omics in 10,000 samples, build a causal network including genes, transcription factors, transcripts, etc., so we could use this model to cure cancer and enhance human intelligence.”
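To make that kind of task concrete, here is a minimal sketch of what “build a network from omics data” can look like in practice. This is not Eliezer’s proposal or any particular tool’s method, just a hypothetical illustration: it thresholds partial correlations from a sample-by-feature matrix to get an undirected association graph, whereas real causal discovery would need dedicated structure-learning methods and interventional data.

```python
# Hypothetical sketch: association network from an (n_samples, n_features)
# omics matrix via thresholded partial correlations. Illustration only; not
# a real causal-discovery pipeline.
import numpy as np

def partial_correlation_network(X, threshold=0.3):
    """Return a boolean adjacency matrix over the columns of X."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)              # z-score features
    precision = np.linalg.pinv(np.cov(X, rowvar=False))   # inverse covariance
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)                   # partial correlations
    np.fill_diagonal(pcorr, 0.0)
    return np.abs(pcorr) > threshold

# Toy data standing in for 10,000 samples of genes / transcription factors /
# transcripts (all lumped together as 50 features for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))
adjacency = partial_correlation_network(X)
print(f"edges found: {adjacency.sum() // 2}")
```

The point of the sketch is only that such a system consumes numerical measurements and outputs a graph; it is never trained on internet text and never produces language or plans, which seems to be the distinction the quoted exception is drawing.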
It is not a well-thought-out exception. If this proposal were meant to be taken seriously, it would make enforcement exponentially harder and set up an overhang situation in which AI capabilities keep increasing within a limited domain while becoming less likely to be interpretable.
If I had infinite freedom to write laws, I don’t know what I would do; I’m torn between caution and progress. Regulations often stifle innovation, and the regulated product or technology just ends up dominated by a select few. If you assume a high probability of risk from AI development, then maybe this is a good thing.
Rather than individual laws, perhaps there should be a regulatory body that focuses on AI safety, like a Better Business Bureau for AI, one that can grow in size and complexity over time in parallel with AI growth.