I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?
Hi,
It was quite strange to see it downvoted, and I'm not sure what the issue was. My guess is that the initial username may have played a role; since this is my first post on LessWrong, it might have raised some concern.
As for the credibility, you can see that this fellowship has been shared by individuals from the organizations themselves on Twitter, as seen here, here and here.
If you’d like, I’m happy to discuss this further on the call to help alleviate any concerns you may have.
The post feels very salesy to me, was written by an org account, and also made statements that seemed false to me like:
(Of those, maybe Far.AI would be deserving of that title, but also, I feel like there is something bad about trying to award that title in the first place).
There is also no clarification of whether this program is focused on existential risk or on near-term AI efforts (bias, filter bubbles, censorship, etc.); I think the latter is usually bad for the world, and at the very least a lot less valuable.
Thanks for the feedback! Quite helpful to get more context.
Quick responses:
1) Yes, we did intend for the hook to be eye-grabbing and mildly salesy, since it is part of our promotional material shared across different platforms, and we hoped it would be effective at drawing the interest of talented individuals and encouraging them to work on AIS. We didn't think it was dishonest or false; rather, we designed it to be short but effective.
2) It was a genuine mistake that the post was made from an org account.
3) We failed to anticipate issues with the phrase 'leading AI Safety organisations'; however, I think you are right, and we could have been more careful in how we framed it.
4) We have taken note of the need to state the scope of our efforts clearly, and we will factor this into our next round of outreach framing.
Thanks for your inputs!
Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.