I am currently a nuclear engineer with a focus on nuclear plant safety and probabilistic risk assessment. I am also an aspiring EA, interested in X-risk mitigation and the intersection of science and policy.
ErickBall (Erick Ball)
Saturated fats are definitely manageable in small amounts. For most of history, and still in many places today, the biggest concern for an infant was getting sufficient calories, and saturated fat is a great choice for that. When you look at modern hunter-gatherer diets, they contain animal products, but in most cases those do not make up the majority of calories (exceptions usually involve lots of seafood), the meats are wild and therefore fairly lean, and BMI generally stays quite low. Under those conditions, heart disease risk is small, and whether it is slightly increased by the saturated fat in one’s diet is mostly irrelevant. There is a big difference between chasing down the occasional antelope and pulling up to the drive-through for a cheeseburger. So the evolutionary argument really is not strong evidence that saturated fats are harmless.
I agree that the studies we have are mostly inadequate, but I don’t think using hunter-gatherer diets as a control would be very useful either. If you change everything at once, you can’t isolate specific causal factors. What we really need (but can’t have) is a bunch of large-scale trials with many groups, many different interventions and combinations of interventions, and the statistical power to distinguish outcomes between each group.
Real can of worms that deserves its own post, I would think.
I think in this case just spacing them out would help more.
Downvoted because I waded through all those rhetorical shenanigans and I still don’t understand why you didn’t just say what you mean.
Separate clocks would be a pain to manage in a board game, but in principle “the game ends once 50% of players have run out of time” seems like a decent condition.
Oh, good point, I had forgotten about the zero-sum victory points. The extent to which the other parts are zero sum depends a lot on how large the game board is relative to the number of players, so it could be adjusted. I was thinking about having a time limit instead of a round limit, to encourage the play to move quickly, but maybe that’s too stressful. If you want the players to choose to end the game, then you’d want to build in a mechanic that works against all of them more and more as the game progresses, so that at some point continuing becomes counterproductive...
Would a good solution be to just play Settlers, but instead of saying “the goal is to get more points than anyone else,” say “this is a variant where the goal is to get the highest score you can, individually”? That seems like it would change the negotiation dynamics in a potentially interesting way without having to make or teach a brand new game. Does this miss the point somehow?
So, then it seems like the client’s best move in this scenario is to lie to you strategically, or at least omit information strategically. They could say “I know for sure you won’t find any fingerprints or identifiable face in the camera footage” and “I think my friends will confirm that I was playing video games with them”, and as long as they don’t actually tell you that’s a lie, you can put those friends on the stand, right?
You say that lying to you can only hurt them but “There is a kernel of an exception that is almost not worth mentioning” because it is rarely relevant. I find this pretty hard to believe. If your client tells you “yeah I totally robbed that store, but I was wearing a ski mask and gloves so I think a jury will have reasonable doubt assuming my friends say I was playing video games with them the whole time”, would you be on board with that plan? There must be plenty of cases where the cops basically know who did it but have trouble proving it. Maybe those just don’t get to the point of a public defender getting assigned?
That’s like saying that because we live in a capitalist society, the default plan is to destroy every bit of the environment and fill every inch of the world with high rise housing projects. It’s… true in some sense, but only as a hypothetical extreme, a sort of economic spherical cow. In reality, people and societies are more complicated and less single minded than that, and also people just mostly don’t want that kind of wholesale destruction.
I didn’t think the implication was necessarily that they planned to disassemble every solar system and turn it into probe factories. It’s more like… seeing a vast empty desert and deciding to build cities in it. A huge universe, barren of life except for one tiny solar system, seems not depressing exactly but wasteful. I love nature and I would never want all the Earth’s wilderness to be paved over. But at the same time I think a lot of the best the world has to offer is people, and if we kept 99.9% of it as a nature preserve then almost nobody would be around to see it. You’d rather watch the unlifted stars, but to do that you have to exist.
I don’t think governments have yet committed to trying to train their own state-of-the-art foundation models for military purposes, probably partly because they (sensibly) guess that they would not be able to keep up with the private sector. That means that government interest/involvement has relatively little effect on the pace of advancement of the bleeding edge.
Fair point, but I can’t think of a way to make an enforceable rule to that effect. And even if you could make that rule, a rogue AI would have no problem with breaking it.
I think if you could demonstrably “solve alignment” for any architecture, you’d have a decent chance of convincing people to build it as fast as possible, in lieu of other avenues they had been pursuing.
Since our info doesn’t seem to be here already: We meet on Sundays at 7pm, alternating between virtual and in-person in the lobby of the UMBC Performing Arts and Humanities Building. For more info, you can join our Google group (message the author of this post, bookinchwrm).
I found this post interesting, mostly because it illustrates deep flaws in the US tax system that we should really fix. I downvoted it because I think it is a terrible strategy for giving more money to charity. Many other good objections have been raised in the comments, and the post itself admits that lack of effectiveness is a serious problem. One problem I did not see addressed anywhere is reputational risk. The world is not static, and a technique that works for an individual criminal or a few conscientious objectors probably will not work consistently for a large and coordinated group of donors, because society will notice and react. What effect would this behavior have on the charities you give to? I suspect most of them, if they knew about it, would justifiably refuse the money. What effect would it have on other organizations you might be associated with? They are now involved with and perhaps encouraging a known criminal, albeit one who probably won’t be prosecuted.
In conclusion, I really wish I could vote to disagree with this post without downvoting to make it less visible. I think readers should be able to see it and also see that practically everyone disagrees with it.
I always thought it would be great to have one set of professors do the teaching, and then a different set come in from other schools just for a couple weeks at the end of the year to give the students a set of intensive written and oral exams that determines a big chunk of their academic standing.
Here’s a market — not sure how to define “linchpin,” but we can at least predict whether he’ll be part of it.
https://manifold.markets/ErickBall/will-the-first-agi-be-built-by-sam?r=RXJpY2tCYWxs
I can now get real-time transcripts of my Zoom meetings (via a Python wrapper of the OpenAI API), which makes it much easier to track the important parts of a long conversation. I tend to zone out sometimes and miss little pieces otherwise, as well as forget stuff.
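For the curious, here is a minimal sketch of how such a setup might look. It assumes the official `openai` Python library, audio saved in short rolling chunks, and the `whisper-1` transcription model; the chunk length, file paths, and helper names are all illustrative assumptions, not the actual setup described above.

```python
# Hypothetical sketch: near-real-time meeting transcription by sending short
# recorded audio chunks to OpenAI's transcription endpoint and stitching the
# results into one running transcript. Chunk length and names are assumptions.

from pathlib import Path

CHUNK_SECONDS = 30  # assumed: record/export the meeting audio in ~30s chunks


def transcribe_chunk(client, audio_path: Path) -> str:
    """Send one recorded audio chunk to the transcription endpoint.

    `client` is an openai.OpenAI() instance; kept as a parameter so the
    rest of the sketch can be tested without an API key.
    """
    with audio_path.open("rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def merge_transcript(pieces: list[str]) -> str:
    """Join per-chunk transcripts into one running transcript,
    skipping empty or whitespace-only chunks."""
    return " ".join(p.strip() for p in pieces if p.strip())


# In a live setup, a loop would watch the recording directory, call
# transcribe_chunk() on each new file, and append to the merged transcript.
```

The main design choice is isolating the API call behind `transcribe_chunk`, so the stitching logic can be exercised (and tested) offline.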
As a counterpoint, take a look at this article: https://peterattiamd.com/protein-anabolic-responses/
The upshot is that the studies saying your body can only use 45g of protein per meal for muscle synthesis are mostly based on fast-acting whey protein shakes. Stretching out the duration of protein metabolism (by switching protein sources and/or combining it with other foods in a gradually-digested meal) can mitigate the problem quite a bit.