Feature request: Q&A posts show a sidebar with all top-level answers and the associated usernames (example). Would be nice if the Anti-Kibitzer could hide these usernames.
The script works well on individual posts, but I find that on the lesswrong.com homepage, it displays names and vote counts for about 3 seconds before it finishes executing. Perhaps there’s some way to make it run faster, or failing that, to block the page from rendering until the script finishes running?
Somewhat debatable whether this is a desirable feature, but right now the ordering of comments leaks information about their vote counts. Perhaps it would be good to randomize comment order.
A different paper but in the same vein: Markets are efficient if and only if P = NP
Now that April 17 has passed, how much did you end up making on this bet?
I know more about StarCraft than I do about AI, so I could be off base, but here’s my best attempt at an explanation:
As a human, you can understand that a factory gets in the way of a unit, and if you lift it, it will no longer be in the way. The AI doesn’t understand this. The AI learns by playing through scenarios millions of times and learning that on average, in scenarios like this one, it gets an advantage when it performs this action. The AI has a much easier time learning something like “I should make a marine” (which it perceives as a single action) than “I should place my buildings such that all my units can get out of my base”, which requires making a series of correct choices about where to place buildings when the conceivable space of building placement has thousands of options.
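To put very rough, made-up numbers on that difference in branching factor:

```python
# Very rough, illustrative numbers only: "train a marine" is a single
# discrete action, while "place a building" means picking one tile out
# of every buildable tile near the base.
train_marine_options = 1
buildable_area = 60 * 60                     # e.g. a ~60x60-tile region around the base
building_placement_options = buildable_area
print(building_placement_options)            # 3600 distinct placement choices for one building
```

And that’s just one building; a whole wall or base layout multiplies those choices together.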
You could see this more broadly in the Terran AI where it knows the general concept of putting buildings in front of its base (which it probably learned via imitation learning from watching human games), but it doesn’t actually understand why it should be doing that, so it does a bad job. For example, in this game, you can see that the AI has learned:
1. I should build supply depots in front of my base.
2. If I get attacked, I should raise the supply depots.
But it doesn’t actually understand the reasoning behind these two things, which is that raising the supply depots is supposed to prevent the enemy units from running into your base. So this results in a comical situation where the AI doesn’t actually have a proper wall, allowing the enemy units to run in, and then it raises the supply depots after they’ve already run in. In short, it learns what actions are correlated with winning games, but it doesn’t know why, so it doesn’t always use these actions in the right ways.
Why is this AI still able to beat strong players? I think the main reason is that it’s so good at making the right units at the right times without missing a beat. Unlike humans, it never forgets to build units or gets distracted. Because it’s so good at execution, it can afford to do dumb stuff like accidentally trapping its own units. I suspect that if you gave a pro player the chance to play against AlphaStar 100 times in a row, they would eventually figure out a way to trick the AI into making game-losing mistakes over and over. (Pro player TLO said that he practiced against AlphaStar many times while it was in development, but he didn’t say much about how the games went.)
At some point, all traders with this belief will have already bought the stock, and the price will stop going up, thus making the price movement anti-inductive.
I’m tempted to correct my past self’s grammar by pointing out that “e.g.” should be followed by a comma.
Is it possible to self-consistently believe you’re poorly calibrated? If you believe you’re overconfident, then you would start making less confident predictions, right?
The survey has been taken by me.
The question “How Long Since You Last Posted On LessWrong?” is ambiguous—I don’t know if posting includes comments or just top-level posts.
And here we are one year later!
Can you imagine a Hollywood movie in which the hero did that, instead of coming up with some amazing clever way to save the civilians on the ship?
Jack Bauer might do it.
This is really remarkable to read six years later, since, although I don’t know you personally, I know your reputation as That Guy Who Has Really Awesome Idyllic Relationships.
It may be theoretically possible to increase my mental capacity in some way such that I can distinguish mental capacity from hallucination. I cannot conceive of how that would be done, but it may be possible.
P.S. I love when people reply to comments that are two and a half years old. It feels like we’re talking to the past.
It probably just computes it as a float and then prints the whole float.
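Something like this, to illustrate what I mean (the specific numbers are made up):

```python
# Dividing two integers produces a float, and printing it shows the full
# float representation rather than a rounded value.
result = 2 / 3
print(result)            # 0.6666666666666666
print(round(result, 2))  # 0.67 -- what you'd see if it rounded first
```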
(I do recognize the silliness of replying to a three-year-old comment that itself is replying to a six-year-old comment.)
Sort-of related question: How do you compute calibration scores?
And then check if the “rationality improvement” people do better on calibration. (I’m guessing they don’t.)
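For concreteness, here’s a sketch of one standard scoring rule people use for this sort of thing (a Brier score; I have no idea whether that’s what the survey analysis actually uses):

```python
# Brier score: the mean squared difference between stated probabilities and
# outcomes (1 if the prediction came true, 0 otherwise). Lower is better.
# The numbers below are made up.
predictions = [0.9, 0.6, 0.8, 0.3]  # stated probabilities
outcomes = [1, 0, 1, 0]             # what actually happened

brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)
print(brier)  # 0.125
```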
> We send out a feedback survey a few days after the workshop which includes the question “0 to 10, are you glad you came?” The average response to that question is 9.3.
I’ve seen CFAR talk about this before, and I don’t view it as strong evidence that CFAR is valuable.
If people pay a lot of money for something that’s not worth it, we’d still expect them to rate it as valuable, thanks to cognitive dissonance.
If people rate something as valuable, is it because it improved their lives, or because it made them feel good?
For these ratings to be meaningful, I’d like to see something like a control workshop where CFAR asks people to pay $3900 and then teaches them a bunch of techniques that are known to be useless but still sound cool, and then ask them to rate their experience. Obviously this is both unethical and impractical, so I don’t suggest actually doing this. Perhaps “derpy self-improvement” workshops can serve as a control?
This basically means they are perfectly achieving their goal, right? Wirecutter’s goal isn’t to find the best product; it’s to find the best product at a reasonable price. If you’re a power user, you’ll be willing to buy better and more expensive stuff.