I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)
There are many cases (arguably stupid or counter-productive cases, but cases) that come up more and more as deals and laws and contracts become highly entangling.
It’s illegal to “simply” ask people for money in exchange for a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC sometimes commands silence and will put you in a cage if you don’t comply.
You get elected to local office and suddenly the Brown Act (which I’d repeal as part of my reboot of the California Constitution, had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party.
A Confessor is forbidden certain kinds of information leakage.
Fixing all of this (gesturing at nearly all of human civilization) isn’t something we have the time or power to do before we’d need to USE the “fixed world” to handle AGI sanely or reasonably, because AGI is coming so fast, and the world is so broken.
That so much silence is associated with unsavory actors is a valid and concerning contrast, but if you look into it, you’ll probably find that every single OpenAI employee already has an NDA.
OpenAI’s “business arm”, locking its employees down with NDAs, is already defecting on the “let all the info come out” game.
If the legal system continues to often be a pay-to-win game, full of fucked-up compromises with evil, then silences will probably continue to be common: (1) among the Machiavellians, (2) among the cowards, and (3) among the people who were willing to promise reasonable silences as part of hanging around nearby doing harm reduction. (This last is what I was doing as a “professional ethicist”.)
And IT IS REALLY SCARY to try to stand up for what you think you know is true about what you think is right when lots of people (who have a profit motive for believing otherwise) loudly insist otherwise.
People used to talk a lot about how someone would “go mad”, and when I was younger it always confused me slightly that “crazy” and “angry” were conflated. Now it makes a lot of sense to me.
I’ve seen a lot of selfish people call good people “stupid”, and once the non-selfish person realizes just how venal and selfish and blind the person calling them stupid is, it isn’t hard to call that person “evil”, and then you get a classic “evil vs stupid” (or “selfish vs altruistic”) fight. As they fight they become more “mindblind” to each other? Or something? (I’m working on an essay on this, but it might not be ready for a week or a month or a decade. It’s a really knotty subject on several levels.)
Good people know they are sometimes fallible, and often use peer validation to check their observations, or check their proofs, or check their emotional calibration, and when those “validation services” get withdrawn for (hidden?) venal reasons, it can be emotionally and mentally disorienting.
(And of course, in issues like this one, a lot of people will automatically have a profit motive when a decision arises about whether to build a public good. By definition, the maker of a public good can’t easily charge money for it; if they COULD charge money for it, then it’d be a private good, or maybe a club good.)
The Board of OpenAI might be personally sued by a bunch of Machiavellian billionaires, or their allies, and if that happens, everything the board was recorded as saying will be gone over with a fine-toothed comb, looking for tiny little errors.
Every potential quibble is potentially more lawyer time. Every bit of lawyer time is a cost that functions as a financial reason to settle instead of keep fighting for what is right. Making your attack surface larger is much easier than making an existing attack surface smaller.
If the board doesn’t already have insurance for that eventuality, then I hereby commit to donate at least $100 to their legal defense fund, if they start one, which I hope they never need to do.
And in the meantime, I don’t think they owe me much of anything, except for doing their damned best to ensure that artificial general intelligence benefits all humanity.