@Charbel-Raphaël, since you mentioned the European AI Act:
Did you know that it actually mentions “alignment with human intent” as a key factor for regulation of systemic risks?
I do not know of any other law that frames alignment this way and makes it a key impact area.
It also mentions alignment as part of the technical documentation that AI developers must make available.
I feel this already merits acknowledgment by this community. If cited properly, it could enable research (and funding) for universities and non-profits in Europe.