See also this Twitter thread for some additional insights into this story. Given the politically sensitive nature of the topic, it may not be a great idea to discuss it much further on this platform, as that could further antagonize various camps interested in AI, among other potential negative consequences.
I appreciate the thread as context for a different perspective, but it seems to me that it loses track of verifiable facts partway through (around here), though I don’t mean to say it’s wrong after that.
I think that in terms of implementing frameworks around AI, how influence and responsibility are handled still seems very meaningful to me. I don’t think a federal agency specifically would do a good job handling an alignment plan, but I also don’t think Yann LeCun could handle it by setting things up on his own, without a dedicated team.
I would want to see a strong justification before deciding not to discuss something that is directly relevant to the purpose of the site.