And on top of that, my not-very-informed-impression-from-a-distance is that [Sam]’s more a smile-and-rub-elbows guy than an actual technical manager
I agree, but I’m not sure that’s insufficient to carve out a productive niche at Microsoft. He appears to be a good negotiator, so if he goes all-in spending his political capital to ensure his subsidiary isn’t crippled by bureaucracy, he has a good chance of achieving it.
The questions are (1) whether he’d realize he needs to do that, and (2) whether he’d care to do that, versus just negotiating for more personal power and trying to climb to Microsoft CEO or whatever.
(1) depends on whether he’s actually generally competent (as in, “he’s able to quickly generalize his competence to domains he’s never navigated before”), as opposed to competent-at-making-himself-appear-competent.
I’ve never paid much attention to him before, so no idea on him specifically. On priors, though, people with his profile are usually the latter type, not the former.
(2) depends on how much he’s actually an AGI believer vs. a standard power-maximizer who’d, up to now, just been in a position where appearing to be an AGI believer was aligned with maximizing power.
The current events seem to down-weight “he’s actually an AGI believer”, so that’s good at least.
… Alright, having written this out, I’ve now somewhat updated towards “Microsoft will strangle OpenAI”. Cool.
I’ve seen/heard a bunch of people in the LW-o-sphere saying that the OpenAI corporate drama this past weekend was clearly bad. And I’m not really sure why people think that?
In addition to what’s been discussed, I think there’s some amount of people conflating the updates they made based on what happened with their updates based on what the events revealed.
E.g., prior to the current thing, there was some uncertainty regarding “does Sam Altman actually take AI risk seriously, even if he has a galaxy-brained take on it, as opposed to being motivated by profit and being pretty good at paying lip service to safety?” and “would OpenAI’s governance structure actually work to rein in profit motives?” and such. I didn’t have much probability allocated to optimism here, and I expect a lot of people didn’t either, but there likely was a fair amount of hopefulness about such things.
Now it’s all been dashed. Turns out, why, profit motives and realpolitik don’t give up in the face of a cleverly-designed local governance structure, and Sam appears to be a competent power-maximizer, not a competent power-maximizer who’s also secretly a good guy.
All of that was true a week ago, we only learned about it now, but the two types of updates are pretty easy to conflate if you’re not being careful.