The post feels very salesy to me, was written by an org account, and also makes statements that seem false to me, like:
(Of those, maybe Far.AI would be deserving of that title, but also, I feel like there is something bad about trying to award that title in the first place.)
There also is no disambiguation of whether this program is focused on existential risk efforts or on near-term bias/filter-bubble/censorship/etc. AI efforts, the latter of which I think is usually bad for the world, or at the very least a lot less valuable.