Is the “aligning incentives” tag you are interested in something AI specific or should it apply to general human institutions / social systems? I could see a case for either, but that impacts what tag names we should use.
I was thinking of an AI specific tag, it seems a bit too broad otherwise.