A wise man does not cut the ‘get the AI to do what you want it to do’ department when it is working on AIs it will soon have trouble controlling. When I put myself in ‘amoral investor’ mode, I notice this is not great, a concern that most of the actual amoral investors have not noticed.
My actual expectation is that for raising capital and doing business generally this makes very little difference. There are effects in both directions, but there was overwhelming demand for OpenAI equity already, and there will be so long as their technology continues to impress.
No one ever got fired buying ~~IBM~~ OpenAI. ML is flashy and investors seem to care less about gears-level understanding of why something is potentially profitable than whether they can justify it. It seems to work out well enough for them.
What about employee relations and ability to hire? Would you want to work for a company that is known to have done this? I know that I would not. What else might they be doing? What is the company culture like?
Here’s a sad story of a plausible possible present: OAI fires a lot of people who care more-than-average about AI safety/NKE/x-risk. They (maybe unrelatedly) also have a terrible internal culture such that anyone who can leave, does. People changing careers to AI/ML work are likely leaving careers that were even worse, for one reason or another—getting mistreated as postdocs or adjuncts in academia has gotta be one example, and I can’t speak to it but it seems like repeated immediate moral injury in defense or finance might be another. So… those people do not, actually, care, or at least they can be modelled as not caring because anyone who does care doesn’t make it through interviews.
What else might they be doing? Can’t be worse than callously making the guidance systems for the bombs for blowing up schools or hospitals or apartment blocks. How bad is the culture? Can’t possibly be worse than getting told to move cross-country for a one-year position and then getting talked down to and ignored by the department when you get there.
It pays well if you have the skills, and it looks stable so long as you don’t step out of line. I think their hiring managers are going to be doing brisk business.