He can leave “to avoid a conflict of interest with AI development efforts at Tesla”, then. It doesn’t have to be “relaxation”-coded. Let him leave with dignity.
Insofar as Sam would never have cooperated with the board firing him at all, even if it had been done more gracefully, then yeah, I guess the board never really had any governance power at all, and it’s good that the fig leaf has been removed.
And basically, we’ll never know if they bungled a coup that they could have pulled off if they were more savvy, or if this was a foregone conclusion.
With apologies for the long response… I suspect the board DID have governance power, but simply not decisive power.
Also it was probably declining, and this might have been a net positive way to spend what remained of it… or not?
It is hard to say, and I don’t personally have the data I’d need to be very confident. “Being able to maintain a standard of morality for yourself even when you don’t have all the data and can’t properly even access all the data” is basically the core REASON for deontic morality, after all <3
Naive consequentialism has a huge GIGO (garbage-in, garbage-out) data problem that Kant’s followers do not have.
(The other side of it (the “cost of tolerated ignorance”, so to speak) is that Kantians usually leave “expected value” (even altruistic expected value FOR OTHERS) on the table by refraining from actions that SEEM positive-EV but which have large error bars based on missing data, where some facts could exist that they don’t know about, and that would later make it appear that they had lied or stolen or used a slave or run for high office in a venal empire or whatever.)
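To make that tradeoff concrete, here is a toy sketch (with all the numbers invented purely for illustration) of how an action that looks positive-EV on the data you can see can flip negative once you price in even a small probability of an unknown, disqualifying fact:

```python
# Toy illustration (invented numbers): an action looks positive-EV on the
# visible data, but a small chance of a hidden, disqualifying fact
# (the kind of "missing data" a Kantian refuses to gamble on) flips the sign.

def expected_value(p_hidden_fact: float) -> float:
    visible_upside = 10.0     # EV as estimated from the data you CAN see
    hidden_downside = -500.0  # loss if the unknown fact turns out to be true
    return (1 - p_hidden_fact) * visible_upside + p_hidden_fact * hidden_downside

print(expected_value(0.00))  # 10.0  -> the naive consequentialist acts
print(expected_value(0.05))  # -15.5 -> a 5% chance of the hidden fact flips the sign
```

(On this reading, the deontologist’s standing rule of refraining acts like a fixed, conservative prior over such hidden downsides, while the naive consequentialist’s GIGO problem is that they must set p_hidden_fact using data they don’t actually have.)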
I personally estimate that it would have been reasonable and prudent for Sam to cultivate other bases of power, preparing for a breach of amity in advance, and I suspect he did. (This is consistent with suspecting the board’s real power was declining.)
Conflict in general is sad, and often bad, and it usually arises at the boundaries where two proactive agentic processes show up with each of them “feeling like Atlas” and feeling that that role morally authorizes them to regulate others in a top-down way… to grant rewards, or to judge conflicts, or to sanction wrong-doers...
...if two such entities recognize each other as peers, then it can reduce the sadness of their “lonely Atlas feelings”! But also they might have true utility functions, and not just be running on reflexes! Or their real-agency-echoing reflexive tropisms might be incompatible. Or mixtures thereof?
Something I think I’ve seen many times is a “moral reflex” on one side (running more on tropisms?) being treated as a “sign of stupidity” by someone who habitually runs a shorter, tighter OODA loop and makes a lot of decisions, and whose flexibility is in turn taken as a “sign of evil” by the first side. Then both parties “go mad” :-(
Before any breach, you might get something with a vibe like “a meeting of sovereigns”, with perhaps explicit peace or honorable war… like with two mafia families, or like two blockchains pondering whether or how to fund dual smart contracts that maintain token-value pegs at a stable ratio, or like the way Putin and Xi are cautious around each other (but probably also “get” each other (and “learn from a distance” from each other’s seeming errors)).
In a democracy, hypothetically, all the voters bring their own honor to a big shared table in this way, and then in Fukuyama’s formula such “Democrats” can look down on both “Peasants” (for shrinking from the table even when invited to speak and vote in safety) and also “Nobles” (for simple power-seeking amorality that only cares about the respect and personhood of other Nobles who have fought for and earned their nobility via conquest or at least via self defense).
I could easily imagine that Sam does NOT think of himself “as primarily a citizen of any country or the world” but rather thinks of himself as something like “a real player”, and maybe only respects “other real players”?
(Almost certainly Sam doesn’t think of himself AS a nominal “noble” or “oligarch” or whatever term. Not nominally. I just suspect, as a constellation of predictions and mechanisms, that he would be happy if offered praise shaped according to a model of him as, spiritually, a Timocracy-aspiring Oligarch (who wants money and power, because those are naturally good/familiar/oikeion, and flirts in his own soul (or maybe has a shadow relationship?) with explicitly wanting honor and love), rather than thinking of himself as a Philosopher King (who mostly just wants to know things, and feels the duty of logically coherent civic service as a burden, and does NOT care for being honored or respected by fools, because fools don’t even know what things are properly worthy of honor). In this framework, I’d probably count as a sloth, I think? I have mostly refused the call to adventure, the call of duty, the call to civic service.)
I would totally get it if Sam thought that OpenAI was already “bathed in the blood of a coup” from back when nearly everyone with any internal power somehow “maybe did a coup” on Elon?
The Sam in my head would be proud of having done that, and maybe would have wished to affiliate with others who are proud of it in the same way?
From a distance, I would have said that Elon starting them up with such a huge warchest meant that he was thereby owed some debt of “governing gratitude” for his beneficence?
If he had a huge say in the words of the non-profit’s bylaws, then an originalist might respect his intent when trying to apply them far away in time and space. (But not having been in any of those rooms, it is hard to say for sure.)
Elon’s ejection back then, if I try to scry it from public data, seems to have happened with the normal sort of “oligarchic dignity” where people make up some bullshit about how a breakup was amicable.
((It can be true that it was “amicable” in some actual Pareto-positive breakups, whose outer forms can then be copied by people experiencing non-Pareto-optimal breakups. Sometimes even the “loser” of a breakup values their (false?) reputation for amicable breakups more than they think they can benefit from kicking up a fuss about having been “done dirty”, such that the fuss would cause others to notice and help them less than the lingering reputation for conflict would hurt.
However there are very many wrinkles to the localized decision theory here!
Like one big and real concern is that a community would LIKE to “not have to take sides” over every single little venal squabble, so as to maintain itself AS A COMMUNITY (with all the benefits of large-scale coordination and so on) rather than globally forking every single time any bilateral interaction goes very sour, with people dividing based on loyalty rather than uniting via truth and justice.
This broader social good is part of why a healthy and wise and cheaply available court system is, itself, an enormous public good for a community full of human people who have valid selfish desires to maintain a public reputation as “a just person” and yet also as “a loyal person”.))
So the REAL “psychological” details about “OpenAI’s possible first coup” are very obscure at this point, and imputed values for that event are hard to use (at least for me, being truly ignorant of them) in inferences whose conclusions could safely be treated as “firm enough to be worth relying on in plans”?
But if that was a coup, and if OpenAI already had people inside of it who already thought that OpenAI ran on nearly pure power politics (with only a pretense of cooperative non-profit goals), then it seems like it would be easy (and psychologically understandable) for Sam to read all pretense of morality or cooperation (in a second coup) as bullshit.
And if the board predicted this mental state in him, then they might “lock down first”?
Taking the first legibly non-negotiated, non-cooperative step generally means that afterwards things will be very complex and time-dependent, and once inter-agent conflict gets to the “purposeful information hiding” stage, everyone is probably in for a bad time :-(
For a human person to live like either a naive saint (with no privacy or possessions at all?!) or a naive monster (always being a closer?) would be tragic and inhuman.
(Probably digital “AI people” will have some equivalent experience of similar tradeoffs, relative to whatever Malthusian limits they hit (if they ever hit Malthusian limits, and somehow retain any semblance or shape of “personhood” as they adapt to their future niche). My hope is that they “stay person shaped” somehow. Because I’m a huge fan of personhood.)
The intrinsic tensions between sainthood and monsterhood mean that any halo of imaginary Elons or imaginary Sams, whom I could sketch in my head for lack of real data, might have to be dropped in an instant based on new evidence.
In reality, they are almost certainly just dudes, just people, and neither saints, nor monsters.
Most humans are neither, and the lack of coherent monsters is good for human groups (who would otherwise be preyed upon), and the lack of coherent saints is good for each one of us (as a creature in a world, who has to eat, and who has parents and who hopefully also has children, and for whom sainthood would be locally painful).
Both sainthood and monsterhood are ways of being that have a certain call on us, given the world we live in. Pretending to be a saint is a good path to private power over others, and private power is subjectively nice to have… at least until the peasants with knives show up (which they sometimes do).
I think that tension is part of why these real world dramatic events FEEL like educational drama, and pull such huge audiences (of children?), who come to see how the highest and strongest and richest and most prestigious people in their society balance such competing concerns within their own souls.