That’s a bit like saying: “What are all those AI safety people talking about? Can you please give me three specific examples of how they propose safety mechanisms should work?”
I haven’t seen easy answers or a good link for them. At the same time, the project is one answer to the question in the OP.
I actually have been wondering about the safety mechanism stuff; if anyone wants to give examples of concrete things actually produced in AI alignment, I’d be interested in hearing about them.