I’m glad you like it!
Fixed the footnotes. They were there at the end, but unlinked. Some mixup when switching between LW’s Markdown and Docs-style editor, most likely.
I see… so trolling by patenting something akin to convolutional neural networks wouldn’t work because you can’t tell what’s powering a service unless the company building it tells you.
Maybe something along the lines of “service that does automatic text translation” or “car that drives itself” (obviously not these, since a patent with so much prior art would never be granted) would be a thing you could fight over?
You’re welcome! I’d like to hear a bit about how it helped, if you’re OK with sharing.
Hi! I wrote a summary with some of my thoughts in this post as part of an ongoing effort to stop sucking at researching stuff. This article was a big help, thank you!
I’m glad you enjoyed it! I agree that more should be done. Just listing the specific search advice on the new table of contents would help a lot.
I’m gonna do the work, I promise. I’m just working up the nerve. Saying, in effect, “this experienced professional should have done his work better, let me show you how” is scary as balls.
First of all: thank you for setting up the problem, I had lots of fun!
This one reminded me a lot of D&D.Sci 1, in that the main difficulty I encountered was the curse of dimensionality. The space had lots of dimensions so I was data-starved when considering complex hypotheses (performance of individual decks, for instance). Contrast with Voyages of the Grey Swan, where the main difficulty is that broad chunks of the data are explicitly censored.
I also noticed that I’m getting less out of active competitions than I was from the archived posts. I’m so concerned with trying to win that I don’t write about and share my process, which I believe is a big mistake. Carefully composed posts have helped me get my ideas in order, and I think they were far more interesting to observers. So I’ll step back from active competitions for a bit. I’ll probably do the research summaries I promised, “Monster Carcass Auction”, “Earwax” (maybe?), then come back to active competitions.
Thank you for doing the work of correcting this usage; precision in language matters.
I made some progress (right in the nick of time) by...
Massaging the data into a table of every deck we’ve seen, and whether the deck won or lost its match (the code is long and boring, so I’m skipping it here), then building the following machinery to quickly analyze restricted subsets of deck-space:
```python
q = "1 <= dragon <= 6 and 1 <= lotus <= 6"

# Within the filtered subset, plot each card's correlation with winning
display(decks.query(q).corr()["win"].drop("win").sort_values(ascending=False).plot.bar())

# Winrate, total wins, and deck count for the subset
decks.query(q)["win"].agg(["mean", "sum", "count"])
```
`q` filters us down to decks that obey the constraint. We then check each card’s correlation with winrate, and finally show how many decks were kept and what the winrate actually is.

`q` can be pretty complicated, with expressiveness limits set by `pd.DataFrame.query`. A few things that work:
```python
(angel + lotus) == 0
1 <= dragon and 1 <= lotus and 4 <= (dragon + lotus)
1 <= dragon and lotus == 0
(pirate - 1) <= sword <= (pirate + 1)
```
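For anyone who wants to poke at this approach themselves, here’s a self-contained sketch on made-up data (the card columns and every value below are invented for illustration; the real dataset has one row per observed deck). I’ve dropped the `display`/plot call so it runs outside a notebook:

```python
import pandas as pd

# Toy stand-in for the real dataset: one row per observed deck,
# card-count columns plus a 0/1 "win" column. (Values are made up.)
decks = pd.DataFrame({
    "dragon": [2, 0, 4, 1, 3, 0],
    "lotus":  [3, 1, 2, 0, 4, 2],
    "angel":  [0, 5, 1, 2, 0, 3],
    "win":    [1, 0, 0, 0, 1, 0],
})

q = "1 <= dragon <= 6 and 1 <= lotus <= 6"
subset = decks.query(q)

# Correlation of each card count with winning, within the subset
card_corr = subset.corr(numeric_only=True)["win"].drop("win").sort_values(ascending=False)

# How many decks survive the filter, and their actual winrate
summary = subset["win"].agg(["mean", "sum", "count"])

print(card_corr)
print(summary)
```

With only six fake decks the correlations are meaningless, of course; the point is just the shape of the workflow: one query string, one correlation readout, one sanity check on sample size.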
My deck submission (PvE and PvP) is:
4 angels, 3 lotuses, 3 pirates, 2 swords
See response to Ben Pace for counterpoints.
My counterpoints, in broad order of importance:
If you lie to people, they should trust you less: observing you lying should reduce their confidence in your statements. However, nothing in the fundamental rules of the universe says that people notice when they are deceived, even after the fact, or that they will trust you any less. Believing, without further justification, that lying (or even being caught lying) will result in a total collapse of confidence is falling for the just-world fallacy.
If you saw a man lying to his child about the death of the family dog, you wouldn’t (hopefully) immediately refuse to ever have business dealings with such a deceptive, amoral individual. Believing that all lies are equivalent, or that lie frequency does not matter, is to fall for the fallacy of grey.
“Unethical” and “deceptive” are different. See hpmor ch51 for hpmor!Harry agreeing to lie for moral reasons. See also counterarguments to Kant’s Categorical Imperative (lying is always wrong, literally never lie).
The point about information theory stands.
Note that “importance” can be broadly construed as “relevance to the practical question of lying to actual people in real life”. This is why the information-theoretic argument ranks so low.
> If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
My mental Harry is making a noise. It goes something like Pfwah! Interrogating him a bit more, he seems to think that this argument is a gross mischaracterization of the claims of information theory. If you mostly tell the truth, and people can tell this is the case, then your words convey information in the information-theoretic sense.
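To make that concrete, here’s a toy model of my own (not anything from the original discussion): treat a speaker who lies with probability `p` as a binary symmetric channel over a uniformly random true bit. The mutual information between truth and statement is then 1 - H(p), so a mostly-honest speaker still transmits most of a bit per statement:

```python
from math import log2

def binary_entropy(p):
    # H(p) in bits; defined as 0 at the endpoints
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information(lie_prob):
    # Speaker observes a uniformly random true bit and reports it,
    # flipping it (lying) with probability lie_prob.
    # For this binary symmetric channel, I(truth; statement) = 1 - H(lie_prob).
    return 1.0 - binary_entropy(lie_prob)

for p in (0.0, 0.1, 0.5):
    print(p, mutual_information(p))
```

A perfectly honest speaker transmits 1 bit, one who lies 10% of the time still transmits about 0.53 bits, and only the coin-flip liar transmits zero. So occasional lying degrades the signal; it doesn’t destroy it.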
EDIT: Now I’m thinking about how to characterize “information” in problems where one agent is trying to deceive another. If A successfully deceives B, what is the “information gain” for B? He thinks he knows more about the world; does this mean that information gain cannot be measured from the inside?
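One toy way to formalize that question (all numbers invented): measure B’s perceived information gain as the KL divergence from his prior to his posterior. That quantity is nonnegative by construction, even when a successful deception makes his beliefs about the actual state strictly worse:

```python
from math import log2

prior = {"rain": 0.5, "dry": 0.5}
# After A's (false) assurance that it won't rain, B updates:
posterior = {"rain": 0.1, "dry": 0.9}
truth = "rain"  # what actually happens

# From the inside, B's apparent information gain is
# KL(posterior || prior), which is always >= 0.
perceived_gain = sum(posterior[s] * log2(posterior[s] / prior[s]) for s in prior)

# From the outside, B's log score on the actual state got worse.
score_before = log2(prior[truth])
score_after = log2(posterior[truth])

print(perceived_gain)               # positive: B feels better-informed
print(score_after - score_before)   # negative: B is actually worse-calibrated
```

B “gains” about half a bit by his own internal measure while losing over two bits of log score against reality, which suggests an answer to the question: the inside view can only measure belief movement, not accuracy.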
Early COVID response on LW was a generalized “this is a big deal.” I can’t find the post that originally caught my eye, but I remember hitting the supermarkets in Buenos Aires, stocking up on masks and hand sanitizer, and two weeks later seeing the city freak the hell out. Jacob’s “Seeing the Smoke” was a strong early signal, and Zvi’s updates often worked through explicit numbers.