I appreciate the thread as context for a different perspective, but it seems to me that it loses track of verifiable facts partway through (around here) — though I don’t mean to say everything after that point is wrong.
When it comes to implementing frameworks around AI, how influence and responsibility are allocated still seems very meaningful to me. I don’t think a federal agency specifically would do a good job handling an alignment plan, but I also don’t think Yann LeCun setting things up on his own, without a dedicated team, could handle it.