A research team’s ability to design a robust corporate structure doesn’t necessarily predict its ability to solve a hard technical problem. Maybe there’s some overlap, but machine learning and philosophy are different fields from business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (though I might be wrong about that).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse here are higher than in most other places on the internet, so quips usually aren’t well-tolerated (even if they contain some element of truth).