If you’ve entered an agreement with someone, and later learn that they intend (and perhaps have always intended) to exploit your acting in accordance with it to screw you over, it seems both common-sensically and game-theoretically sound to consider the contract null and void, since it was agreed to on false premises.
If you make a trade agreement, and the other side does not actually pay up, then I do not think you are bound to provide the good anyway. It was a trade.
If you make a commitment, and then later come to realize that in requesting that commitment the other party was actually taking advantage of you, I think there are a host of different strategies one could pick. I think my current ideal solution is “nonetheless follow through on your commitment, but make them pay for it in some other way”, but I acknowledge that there are times when it’s correct to pick other strategies like “just don’t do it, and when anyone asks you why, give them a straight answer”, and more.
Your strategy in a given domain will also depend on all sorts of factors: how costly the commitment is, how badly they’re taking advantage of you, and what recourse you have outside of the commitment (e.g. if they’ve broken the law they can be prosecuted, but in other cases it is harder to punish them).
The thing I currently believe and want to say here is that reneging on commitments is not good even when you have reason to, and that it is better to keep them while setting the incentives right by other means. Reneging can be the right choice in order to set the incentives right, but even when it’s the right call, I want to acknowledge that it comes at a cost to our ability to trust in people’s commitments.
Sure, I agree with all of the local points you’ve stated here (“local” as in “not taking into account the particulars of the recent OpenAI drama”). For clarity, my previous disagreement was with the following local claim:
When I make an agreement to work closely with you on a crucial project, if I think you’re deceiving me, I will let you know
In my view, “knowingly executing your part of the agreement in a way misaligned with my understanding of how that part is to be executed” counts as “not paying up in a trade agreement”, and is therefore grounds for ceasing to act in accordance with the agreement on my end too. From this latest comment, it sounds like you’d agree with that?
Reading the other branch of this thread, you seem to disagree that this was the situation the OpenAI board was in. Sure, I’m hardly certain of this myself. However, if it was, and if they were highly certain of being in that position, I think their actions were fairly justified.
My understanding is that OpenAI’s foundational conceit was prioritizing safety over profit/power-pursuit, and that their non-standard governance structure was explicitly designed to allow the board to take draconian measures if it concluded the company had gone astray. Indeed, going off this sort of disclaimer, or their recent actions, they were hardly subtle or apologetic about such matters.
“Don’t make us think that you’d diverge from our foundational conceit if given the power to, or else” was part of the deal Sam Altman effectively signed by taking the CEO role. And if the board had concluded that this term was violated, then taking drastic and discourteous measures to remove him from power seems entirely fine to me.
Paraphrasing: while in the general case of deal-making a mere “I lost trust in you” is not reasonable grounds for terminating the deal, my understanding is that “we have continued trust in you” was part of this specific deal, meaning losing trust was reasonable grounds for terminating this specific deal.
I acknowledge, though, that it’s possible I’m extrapolating incorrectly from the “high-risk investment” disclaimer plus their recent actions, and that the board had actually failed to communicate this term to Sam when hiring him. Do we have cause to believe that, though?