Follow-up, three years later: I bought a Hi-Tec-C Coleto pen for my brother, who is in a profession where he has to write a lot, color-code forms, etc. He likes it a lot. Thanks for the recommendation.
On the other hand, if plaintiff has already elicited testimony from the engineer to the effect that the conversation happened, could defendant try to imply that it didn’t happen by asking the manager whether he recalled the meeting? I mean, yes, but it’s probably a really bad strategy. Try to think about how you would exploit that as plaintiff: either so many people are mentioning potentially life-threatening risks of your product that you can’t recall them all, in which case the company is negligent, or your memory is so bad it was negligent for you to have your regularly-delete-records policy. It’s like saying I didn’t commit sexual harassment because we would never hire a woman in the first place. Sure, it casts doubt on the opposition’s evidence, but at what cost?
If it’s a criminal trial, where facts have to be proven beyond a reasonable doubt, it’s a common strategy. If the whistleblower doesn’t have evidence of the meeting taking place, and no memos, reports or e-mails documenting that they passed their concerns up the chain, it’s perfectly reasonable for a representative of the corporation to reply, “I don’t recall hearing about this concern.” And that’s that. It’s the engineer’s word against not just one witness, but a whole slew of witnesses, each of whom is going to say, “No, I don’t recall hearing about this concern.”
Indeed, this outcome is so predictable that lawyers won’t even take on these sorts of cases unless the whistleblower can produce written evidence that management was informed of a risk, and made a conscious decision to ignore it and proceed.
Also keep in mind that if we’re going to assume the company will lie on the stand about complex technical points
I’m not assuming anything of the sort. I’m merely saying that, if the whistleblower doesn’t have written evidence that they warned their superiors about a given risk, those superiors will be coached by the company’s lawyers to say, “I don’t recall,” or, “I did not receive any written documents informing me of this risk.” Now, at this point, the lawyers for the prosecution can bring up the document retention policy and argue that the reason there is no evidence is the policy itself. But that doesn’t actually prove anything. Absence of evidence is not, in and of itself, evidence of wrongdoing.
One reason not to mess with this is that we have other options. I could keep a journal. If I keep notes like “2023-11-09: warned boss that widgets could explode at 80C. boss said they didn’t have time for redesign and it probably wouldn’t happen. ugh! 2023-11-10: taco day in cafeteria, hell yeah!” then I can introduce these to support my statement.
Yes, that’s certainly something you can do. But it’s a much weaker sort of evidence than a printout of an e-mail that you sent, with your name on the “from” line and your boss’s name on the “to” line. At the very least, you’re going to be asked, “If this was such a concern for you, why didn’t you bring it up with your boss?” And if you say you did, you’ll be asked, “Well, do you have any evidence of this meeting?” And if your excuse is, “Well, the corporation’s data retention policies erased that evidence,” it weakens your case.
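That said, if you do keep such a journal, contemporaneous, timestamped, append-only entries are worth more than loose notes, because they support the claim that the notes were made at the time rather than reconstructed later. Here’s a minimal sketch of the idea in Python, using only the standard library; the file name and entry format are my own illustrative choices, not any legal standard:

```python
#!/usr/bin/env python3
"""Append a timestamped entry to a personal work journal.

Illustrative only: the file location and format are arbitrary choices.
Opening the file in append mode ("a") means existing entries are never
rewritten, which supports the claim that notes were made contemporaneously.
"""
import sys
from datetime import datetime, timezone
from pathlib import Path

JOURNAL = Path.home() / "work-journal.txt"  # hypothetical location

def log(entry: str) -> None:
    # UTC timestamp, e.g. "2023-11-09T17:04:31+00:00"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with JOURNAL.open("a", encoding="utf-8") as f:
        f.write(f"{stamp}  {entry}\n")

if __name__ == "__main__":
    log(" ".join(sys.argv[1:]))
    # usage: ./journal.py warned boss that widgets could explode at 80C
```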
The thing I said that the defendant would not dispute is the fact that the engineer said something to them, not whether they should have believed him.
I still disagree. If it wasn’t written down, it didn’t happen, as far as the organization is concerned. The engineer’s manager can (and probably will) claim that they don’t recall the conversation, or dispute the wording, or argue that while the engineer may have said something, it wasn’t at all apparent that the problem was a serious concern.
There’s a reason that whistleblowers focus so hard on generating and maintaining a paper trail of their actions and conversations, to the point that they will often knowingly and willfully subvert retention policies by keeping their own copies of crucial communications. They know that, without documentation (e-mails, screenshots, etc.), it’ll just be a he-said-she-said argument between themselves and an organization that is far more powerful than they are. The documentation establishes hard facts, and makes it much more difficult for people higher up in the chain of command to say they didn’t know or weren’t informed.
If you notice something risky, say something. If the thing you predicted happens, point out the fact that you communicated it.
I think this needs to be emphasized more. If a catastrophe happens, corporations often try to pin blame on individual low-level employees while deflecting blame from the broader organization. Having a documented paper trail indicating that you communicated your concerns up the chain of command prevents that same chain from labeling you as a “rogue employee” or “bad apple” who was acting outside the system to further your personal reputation or financial goals.
Plaintiff wants to prove that an engineer told the CEO that the widgets were dangerous. So he introduces testimony from the engineer that the engineer told the CEO that the widgets were dangerous. Defendant does not dispute this.
Why wouldn’t the defendant dispute this? In every legal proceeding I’ve seen, the defendant has always produced witnesses and evidence supporting their analysis. In this case, I would expect the defendant to produce analyses showing that the widgets were expected to be safe, and if they caused harm, it was due to unforeseen circumstances that were entirely beyond the company’s control. I rarely speak in absolutes, but in this case, I’m willing to state that there’s always going to be some analysis disagreeing with the engineer’s claims regarding safety.
If I say I want you to turn over your email records to me in discovery to establish that an engineer had told you that your widgets were dangerous, but you instead destroy those records, the court will instruct the jury to assume that those records did contain that evidence.
Only if you do so after you were instructed by the court to preserve records. If you destroyed records per your normal, documented retention policies before any court case was filed, there are no grounds for an adverse inference.
Plaintiff responds by showing that defendant had a policy designed to prevent such records from being created, so defendant knows that records would not exist whether the meeting took place or not, and thus his argument is disingenuous. Would you follow defendant’s strategy here? I wouldn’t.
Every company I’ve worked for has had retention policies that call for the automatic deletion of e-mails after a period of time (5-7 years). Furthermore, as I alluded to in my other post, Google had an explicit policy of disabling permanent chat records for certain sensitive conversations:
At trial, the DOJ also presented evidence and testimony about Google’s policy called “Communicate with Care.” Under that policy, Google employees are trained “to have sensitive conversations over chat with history off,” the DOJ said, ensuring that the conversation would be auto-deleted in 24 hours.
This policy has created much tension between the DOJ and Google before the trial. The DOJ has argued that “Google’s daily destruction of written records prejudiced the United States by depriving it of a rich source of candid discussions between Google’s executives, including likely trial witnesses.” Google has defended the policy, claiming that the DOJ has “not been denied access to material information needed to prosecute these cases and they have offered no evidence that Google intentionally destroyed such evidence.”
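Mechanically, there’s nothing exotic about such a policy: it’s just age-based deletion applied uniformly, with no human selecting what to destroy. Here’s a purely illustrative sketch of what a 24-hour auto-delete job could look like; the message store, table, and field names are hypothetical, not Google’s actual system:

```python
"""Illustrative sketch of a time-based retention job.

The SQLite schema and names here are hypothetical; real systems (mail
servers, chat backends) implement the same idea natively. The key
property is that deletion is driven purely by message age, applied
uniformly, with nobody choosing which records to destroy.
"""
import sqlite3
import time

RETENTION_SECONDS = 24 * 60 * 60  # "history off" chats: delete after 24h

def purge_expired(db_path: str) -> int:
    """Delete every message older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    with sqlite3.connect(db_path) as conn:  # commits on clean exit
        cur = conn.execute(
            "DELETE FROM messages WHERE sent_at < ?", (cutoff,)
        )
        return cur.rowcount  # number of messages destroyed this run

# A scheduler (cron, etc.) would run purge_expired() daily; once it has
# run, the records are simply gone, whatever a later subpoena asks for.
```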
And while this does look bad for Google, one can very easily argue that the alternative, the release of a “smoking gun” memo like the infamous “embrace, extend, extinguish” document, would be far worse.
Would it be as self-evidently damning as you think it would be? If so, why would a company like Google explicitly pursue such a weak strategy? And it’s not just Google. When I worked at a different FAANG company, I was told in orientation never to use legal terminology in e-mail, for similar reasons.
The first lawyer will be hardly able to contain his delight as he asks the court to mark “WidgetCo Safe Communication Guidelines” for evidence.
Having safe communication guidelines isn’t as damning as you think it is. The counsel for WidgetCo would merely reply that the safe communication guidelines are there to prevent employees from accidentally creating liabilities by misusing legal language. This is no different than admonishing non-technical employees for misusing technical language.
Indeed, this was Google’s actual strategy.
Games, unlike many real life situations, are entered into by choice. If you are not playing to win, then one must ask why are you bothering to play? Or, more specifically, why are you playing this game and not some other?
Have you read Playing To Win, by David Sirlin? It makes many of the points that you make here, but it doesn’t shy away from winning as the ultimate goal, as you seem to be doing. Sirlin doesn’t fall into the trap of lost purposes. He keeps in mind that the goal is to win. Yes, of course, by all means try new strategies and learn the mechanics of the game, but remember that the goal is victory.
was militarily weakened severely
That’s another highly contentious assertion. Even at the height of the Vietnam War, the US never considered Southeast Asia to be the main domain of competition with the Soviet Union. The primary focus was always on fielding a military force capable of challenging the Soviets in Western Europe. Indeed, one of the reasons the US failed in Vietnam is that the military was unwilling to commit its best units and commanders to what the generals perceived as a sideshow.
why the US allied with China against the USSR
Was the US ever allied with China? What we did as a result of the Sino-Soviet split was simply let the People’s Republic of China back into the international system from which they had been excluded. The US certainly did not pursue any greater alignment with China until much later, at which point the Soviet Union was well into its terminal decline.
failing to prevent the oil shocks in formerly US-friendly middle eastern regimes, which were economic catastrophes that each could have done far more damage if luck was worse
More evidence is needed. The oil shocks were certainly very visible, but it’s not clear from the statistical data that they did much damage to the US economy. In fact, the political response to the oil shocks (rationing, price controls, etc.) arguably did more to hurt the economy than the oil shocks themselves.
Meanwhile, the USSR remained strong militarily in spite of the economic stagnation.
The actual readiness of Soviet forces, as opposed to the hilariously false readiness reports published by unit commanders, is a matter of great debate. After the Cold War, when US commanders had a chance to tour Soviet facilities in ex-Warsaw Pact states, they were shocked at the poor state of repair of the equipment and the low level of readiness among the troops. Furthermore, by the Soviets’ own admission, the performance of their troops in Afghanistan wasn’t very good, even when compared against the relatively poor training and equipment of the insurgent forces.
But the idea that the US was doing fine after Vietnam, including relative to the Soviets, is not very easy to believe, all things considered.
Vietnam was certainly a blow to US power, but it was nowhere near as serious a blow as you seem to believe.
each one after 1900 was followed by either the Cuban Missile Crisis or the US becoming substantially geopolitically weaker than the USSR after losing the infowar over Vietnam
I’m sorry, what? That’s a huge assertion. The Vietnam War was a disaster, but I fail to see how it made the US “substantially geopolitically weaker”. One has to remember that, at the same time that the US was exiting Vietnam, its main rival, the Soviet Union, was entering a twenty-five-year period of economic stagnation that would culminate in its collapse.
Chevron deference means that judges defer to federal agencies instead of interpreting the laws themselves where the statute is ambiguous.
Judges interpreting the laws themselves is exactly how the US system of government is supposed to work. The legislative branch makes the law. The executive branch enforces the law. The judicial branch interprets the law. This is a fact that every American citizen ought to know from their grade-school civics classes.
For example, would you rather the career bureaucrats in the Environmental Protection Agency determine what regulations are appropriate to protect drinking water or random judges without any relevant expertise?
I would much rather have an impartial third party determine which regulations are appropriate than a self-interested bureaucrat. Otherwise, what’s the point of having a judicial system at all, if the judges are just going to yield to the executive on all but a narrow set of questions?
Government agencies aren’t always competent but the alternative is a patchwork of potentially conflicting decisions from judges ruling outside of their area of expertise.
Which can be resolved by Congress passing laws or by the Supreme Court resolving the contradiction between the different circuit courts.
I think they will probably do better and more regulations than if politicians were more directly involved
Why do you think this?
Furthermore, given the long history of government regulation having unintended consequences as a result of companies and private individuals optimizing their actions to take advantage of the regulation, it might be the case that government overregulation makes a catastrophic outcome more likely.
While overturning Chevron deference seems likely to have positive effects for many industries which I think are largely overregulated, it seems like it could be quite bad for AI governance. Assuming that the regulation of AI systems is conducted by members of a federal agency (either a pre-existing one or a new one designed for AI, as several politicians have suggested), I expect that the bureaucrats and experts who staff the agency will need a fair amount of autonomy to do their job effectively. This is because the questions relevant to AI regulation (i.e., which evals systems are required to pass) are more technically complicated than in most other regulatory domains, which are already too complicated for politicians to have a good understanding of.
Why do you think that the same federal bureaucrats who incompetently overregulate other industries will do a better job regulating AI?
Sometimes if each team does everything within the rules to win then the game becomes less fun to watch and play
Then the solution is to change the rules. Basketball did this. After an infamous game where a team took the lead and then just passed the ball around to deny it to their opponents, basketball added a shot clock, to force teams to try to score (or else give the ball to the other team). (American) Football has all sorts of rules and penalties (“illegal formation”, “ineligible receiver downfield”, “pass interference”, etc.) whose sole purpose is to ensure that games aren’t dominated by tactics that aren’t fun to watch. Soccer has the offside rule, which prevents teams from parking all their players right next to the other team’s goal. Tennis forces crosscourt serves. And, as I alluded to above, motorsport regularly changes its rules to try to ensure greater competitive balance and more entertaining races.
With regards to chess, specifically, Magnus Carlsen agrees (archive) that classical chess is boring and too reliant on pre-memorized opening lines. He argues for shorter games with simpler time controls, which would lead to more entertaining games which would be easier to explain to new viewers.
None of these other sports feel the need to appeal to a wooly-headed “spirit of the game” in order to achieve entertaining play. What makes cricket so special?
EDIT: I would add that cricket is also undergoing an evolution of its own, with the rise of Twenty20 cricket and the Indian Premier League.
Isn’t this stupid? To have an extra set of ‘rules’ which aren’t really rules and everyone disagrees on what they actually are and you can choose to ignore them and still win the game?
Yes, it is stupid.
Games aren’t real life. The purpose of participating in a game is to maximize performance, think laterally, exploit mistakes, and do everything you can, within the explicit rules, to win. Doing that is what makes games fun to play. Watching other people do that, at a level that you could never hope to reach, is what makes spectator sports fun to watch.
Imagine if this principle were applied to other sports. Should tennis umpires suddenly start excusing double faults, because the sun was in the eyes of the serving player? Should soccer referees start disallowing own-goals, because no player could possibly mean to shoot into their own net? If a football player trips and fumbles the ball for no particular reason, should the referee stop the play, and not allow the other team to recover? If Magnus Carlsen blunders and puts his queen in a position where it can be captured, should Ian Nepomniachtchi feel any obligation to offer a takeback?
Fundamentally, what happened was that Bairstow made a mistake. He made a damned silly mistake, forgetting that overs are six balls, not five. Carey took advantage of the error, as was his right, and, I would argue, his obligation. Everything else is sour grapes on the part of the English side. If the Australian batsman had made a similarly silly mistake, and the English bowler had not taken advantage, I would be willing to bet that very few would be talking about the sportsmanship of the English bowler. Instead, the narrative would have been one of missed opportunities. How could the bowler have let such an obvious opportunity slip through his fingers?!
This pattern is even more pronounced in motorsport. The history of Formula 1 is the story of teams finding ways to tweak their cars to gain an advantage, other teams whining about unfairness, and the FIA then tweaking the rules to outlaw the “innovation”.
Examples include:
Brabham BT46 -- a car that used a fan to suck air out from underneath it, allowing it to produce extra downforce
Tyrrell P34 -- a car that had six wheels instead of four, to gain additional front grip for turning
Williams FW14B -- a car that featured electronic active suspension to ensure that it maintained the optimum ride height for its aerodynamics in all circumstances
Renault R25 -- a car that used a mass damper to keep the front end settled
Red Bull RB6 -- a car that routed the exhaust underneath the floor, in order to improve the floor’s aerodynamic characteristics
In fact, one of the criticisms that many fans have of the FIA is that it goes too far with this. It seems like the moment any team gains an advantage by exploiting a loophole in the rules, the FIA takes action to close the loophole, without necessarily waiting to see if other teams can respond with innovations of their own.
any therapeutic intervention that is now standardized and deployed on mass-scale has once not been backed by scientific evidence.
Yes, which is why, in the ideal case, such as with the polio vaccine, we take great pains to gather that evidence before declaring our therapeutic interventions safe and efficacious.
A great example of a product actually changing for the worse is Microsoft Office. Up through Office 2003, Microsoft Office had the standard “File, Edit, …” menu system that was characteristic of desktop applications in the ’90s and early 2000s. With Office 2007, though, Microsoft radically changed the menu system. They introduced the Ribbon. I was in school at the time, and there was a representative from Microsoft who came and gave a presentation on this bold, new UI. He pointed out how, in focus group studies, new users found it easier to discover functionality with the Ribbon than they did with the old menu system. He pointed out how the Ribbon made commonly used functions more visible, and how, over time, it would adapt to the user’s preferences, hiding functionality that was little used and surfacing functionality that the user had interacted with more often.
Thus, when Microsoft shipped Office 2007 with the Ribbon, it was a great success, and Office gained a reputation for having the gold standard in intuitive UI, right?
Wrong. What Microsoft forgot is that the average user of Office wasn’t some neophyte sitting in a carefully controlled room with a one-way mirror. The average user of Office was upgrading from Office 2003. The average user of Office had web links, books, and hand-written notes detailing how to accomplish the tasks they needed to do. By radically changing the UI like that, Microsoft made all of that tacit knowledge obsolete. Furthermore, by making the Ribbon “adaptive”, they actively prevented new tacit knowledge from being formed.
I was working helpdesk for my university around that time, and I remember just how difficult it was to instruct people on how to do tasks in Office 2007. Instead of writing down (or showing with screenshots) the specific menus they had to click through to access functionality like line or paragraph spacing, and disseminating that, I had to sit with each user, ascertain the current state of their unique special-snowflake Ribbon, and then show them how to find the tools to do whatever it was they wanted to do. And then I had to do it all over again a few weeks later, when the Ribbon adapted to their new behavior and changed again.
This task was further complicated by the fact that Microsoft moved away from having standardized UI controls to making custom UI controls for each separate task.
For example, here is the Office 2003 menu bar:
(Source: https://upload.wikimedia.org/wikipedia/en/5/54/Office2003_screenshot.PNG)
Note how it’s two rows. The top row is text menus. The bottom row is a set of legible buttons and drop-downs which allow the user to access commonly used tasks. The important thing to note is that everything in the bottom row of buttons also exists as menu entries in the top row. If the user is ever unsure of which button to press, they can always fall back to the menus. Furthermore, documents can refer to the fixed menu structure allowing for simple text instructions telling the user how to access obscure controls.
By comparison, this is the Ribbon:
(Source: https://kb.iu.edu/d/auqi)
Note how the Ribbon is multiple rows of differently shaped buttons and dropdowns, without clear labels. The top row is now a set of tabs, and switching tabs now just brings up different panels of equally arcane buttons. Microsoft replaced text with hieroglyphs. Hieroglyphs that don’t even have the decency to stand still over time so you can learn their meaning. It’s impossible to create text instructions to show users how to use this UI; instructions have to include screenshots. Worse, the screenshots may not match what the user sees, because of how items may move around or be hidden.
I suspect that many instances of UIs getting worse are due to the same sort of focus-group-induced blindness that caused Microsoft to ship the Ribbon. Companies get hung up on how new, inexperienced users interact with their software in a tightly controlled lab setting, completely isolated from outside resources, and blind themselves to the vast amount of tacit knowledge they are destroying by revamping their UI to make it more “intuitive”. I think the Ribbon is an especially good example of this, because it avoids the confounding effect of mobile devices. Both Office 2003 and 2007 were strictly desktop products, so one can ignore the further corrosive effect of having to revamp the UI to be legible on a smartphone or tablet.
Websites and applications can definitely become worse after updates, but the company shipping the update will think that things are getting better, because the cost of rebuilding tacit knowledge is borne by the user, not the corporation.