Potentially, people could make the cost function of an AI’s model include its ease of interpretation by humans as a factor. Having people manually check every change in a model for its effect on interpretability would be too slow, but an AI could still periodically check its current best model with humans and learn a different one if it’s too hard to interpret.
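A minimal sketch of what folding interpretability into the cost function might look like, assuming an L1 sparsity term as a crude stand-in for “ease of interpretation by humans” (the names `interpretability_penalty` and `lam` are illustrative, not an established API):

```python
import torch
import torch.nn as nn

def interpretability_penalty(model: nn.Module) -> torch.Tensor:
    # Crude proxy: L1 norm of all weights. Sparser weight matrices are
    # often easier for humans to inspect; a real system would substitute
    # whatever interpretability measure humans actually trust.
    return sum(p.abs().sum() for p in model.parameters())

def total_loss(task_loss: torch.Tensor, model: nn.Module,
               lam: float = 1e-4) -> torch.Tensor:
    # lam trades off task performance against the interpretability proxy.
    return task_loss + lam * interpretability_penalty(model)

# Usage: penalize a small regression model during training.
model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = total_loss(nn.functional.mse_loss(model(x), y), model)
loss.backward()
```

The periodic human check would then sit outside this training loop: every so often, humans score the current best model, and the penalty term (or `lam`) gets adjusted if that model is too hard to interpret.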
I’ve seen a lot of mention of the importance of safe AI being competitive with non-safe AI. And I’m wondering what would happen if the government simply outlawed or heavily taxed the use of unsafe AI techniques. Then even with significant capability increases, it wouldn’t be worthwhile to use them.
Is there something very doubtful about governments creating such a regulation? I mean, I’ve already heard some people high in government express concern about AI safety. And the Future of Life Institute got the California legislature to unanimously adopt the Asilomar AI Principles, which include points about AI safety, like subjecting any AI designed to recursively self-improve to strict safety and control measures.
It sounds extremely dangerous to have widespread use of powerful, unaligned AI. So simply to protect themselves and their families, lawmakers could potentially benefit a lot from implementing such regulations.