Can you only react with −1 of a reaction if someone else has already reacted with the +1 version of the reaction?
Sune
Most of the reactions are either positive or negative, but if a comment has several reactions, I find it difficult to see immediately which are positive and which are negative. I’m not sure if this is a disadvantage, because it is slightly harder to get people’s overall valuation of the comment, or if it is actually an advantage, because you can’t get the pleasure/pain of learning the overall reaction to your comment without first learning the specific reasons for it.
Another issue, if we (as readers of the reactions) tend to group reactions into positive and negative, is that it is possible to make several reactions to a comment. It means that if 3 people have left positive reactions, a single person can outweigh that by leaving 3 different negative reactions. A reader would only realise this by hovering over the reactions. I do think it is useful to be able to have more than one reaction, especially in cases where you have both positive and negative feedback, or where one of them is neutral (e.g. “I will respond later”), so I’m not sure if there is a good solution to this.
Testing comment. Feel free to react to this however you like; I won’t interpret the reactions as giving feedback to the comment.
I don’t follow the construction. Alice doesn’t know x and S when choosing f. If she takes the preimage for all 2^n values of x, each with a random S, she will have many overlapping preimages.
I tried and failed to formalize this. Let me sketch the argument, to show where I ran into problems.
Consider a code with a corresponding decoding function , and assume that .
For any function we can define . We then choose randomly from the such functions. We want the code to be such that for random and random the information is enough to deduce , with high probability. Then each would give Bob one bit of information about (its value at the point ) and hence one bit about . Here we use the assumption to avoid collisions .
Unfortunately, this argument does not work. The issue is that is chosen at random, instead of as an encoding of a message. Because of this, we should not expect to be close to a valid code, so we should not expect there to be a decoding method that will give consistent decodings of for different values of .
It is not clear to me if this is a bug in the solution or a bug in the problem! The world is not random, so why do we want to be uniform in ?
This question is non-trivial even for . Here it becomes: let Alice choose a probability (which has to be of the form , but this is irrelevant for large ) and Bob observes the binomially distributed number . With which distribution should Alice choose to maximize the capacity of this channel?
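As a sanity check on this one-dimensional question, here is a minimal numerical sketch under parameters I made up (they are not from the original problem): Alice picks p from a finite grid, Bob observes k ~ Binomial(m, p), and a Blahut–Arimoto iteration searches for the capacity-achieving distribution over p.

```python
import numpy as np
from scipy.stats import binom

m = 20                               # number of observations Bob gets (my assumption)
ps = np.linspace(0.0, 1.0, 101)      # grid of probabilities Alice may choose
ks = np.arange(m + 1)

# Channel matrix W[i, k] = P(Bob observes k | Alice chose ps[i])
W = binom.pmf(ks[None, :], m, ps[:, None])

# Blahut–Arimoto iteration for the capacity-achieving input distribution over p
q = np.full(len(ps), 1.0 / len(ps))
for _ in range(500):
    out = q @ W                                      # output distribution over k
    with np.errstate(divide="ignore"):
        log_ratio = np.where(W > 0, np.log(W / out[None, :]), 0.0)
    c = np.exp((W * log_ratio).sum(axis=1))
    Z = q @ c
    q = q * c / Z

print(f"capacity ≈ {np.log(Z) / np.log(2):.3f} bits")
print("heaviest grid points for p:", np.sort(ps[np.argsort(q)[-5:]]))
```

With these made-up parameters the optimal distribution over p concentrates on a small set of well-separated grid points, which is the kind of structure the question is asking about.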
“STEM-level” is a type error: STEM is not a level, it is a domain. Do you mean STEM at high-school level? At PhD level? At the level of all of humanity put together, but at 100x speed?
Seems difficult to mark answers to this question.
The type of replies you get, and the skills you are testing, would also depend on how long the subject is spending on the test. Did you have a particular time limit in mind?
This seems to be a copy of an existing one month old post: https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practical-advice-for-the-worried
What are you comparing to? Is it only compared to what you would want rationalist culture to be like, or do you have examples of other cultures (besides academia) that do better in this regard?
I mostly agree and have strongly upvoted. However, I have one small but important nitpick about this sentence:
The risks of imminent harmful action by Sydney are negligible.
I think when it comes to x-risk, the correct question is not “what is the probability that this will result in existential catastrophe”. Suppose that there is a series of potentially harmful and increasingly risky AIs, where the n’th AI has some probability of causing existential catastrophe unless you press a stop button. If these probabilities are growing sufficiently slowly, then existential catastrophe will most likely happen for an n where that probability is still low. A better question to ask is “what was the probability of existential catastrophe happening for some n up to this one?”
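A toy calculation of this point, with numbers I picked purely for illustration: if the n’th AI causes catastrophe with probability p_n = n/10000, the first catastrophe most likely happens at an n where p_n is only about 1%, even though the cumulative probability has long since become large.

```python
import numpy as np

p = np.arange(1, 2001) / 10000                        # p_n for n = 1..2000, growing slowly
survive = np.cumprod(1 - p)                           # P(no catastrophe through AI number n)
first = p * np.concatenate(([1.0], survive[:-1]))     # P(first catastrophe at exactly n)

n_star = int(np.argmax(first)) + 1
print(f"most likely first catastrophe: n = {n_star}, where p_n = {p[n_star - 1]:.3f}")
print(f"P(catastrophe for some n <= 1000) = {1 - survive[999]:.3f}")
```

Here the modal first catastrophe is around n = 100, where the per-AI risk is only about 1%, while the probability of a catastrophe for some n up to 1000 is essentially 1.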
I’m confused by the first two diagrams in the section called “There wasn’t an abrupt shift in obesity rates in the late 20th century”. As far as I understand, they contain data about the distribution of BMI for black females and white males at age 50 and born up until 1986. If so, it would contain data from 2036.
Now I see, yes you are right. If you want the beliefs to be accurate at the civilisation level, that is the correct way of looking at it. This corresponds to the 1⁄3 conclusion in the sleeping beauty problem.
I was thinking of it on the universe level, where we are a way for the universe to understand itself. If we want the universe to form accurate beliefs about itself, then we should not include our own civilisation when counting the number of civilisations in the galaxy. However, when deciding if we should be surprised that we don’t see other civilisations, you are right that we should include ourselves in the statistics.
Yes, the ordering does matter. Compare two hypotheses: the first says that on average there will be 1 civilisation in each galaxy; the second says that on average civilisations are much rarer than that. Suppose the second hypothesis is true.
If you now do the experiment of choosing a random galaxy and counting the number of civilisations in that galaxy, you will probably not find any civilisation, which correctly supports the second hypothesis.
If you do the second experiment of first finding yourself in some galaxy and then counting the number of civilisations in that galaxy, you will at least find your own civilisation and probably not any others. If you don’t correct for the fact that this experiment is different, you will update strongly in the direction of the first hypothesis, even when the second hypothesis is true and the evidence from the experiment is as favourable towards the second hypothesis as possible. This cannot be the correct reasoning, since correct reasoning should not consistently lead you to wrong conclusions.
You might argue that there is another possible outcome which is even more favourable evidence towards the second hypothesis: that you do not exist. However, in a universe with enough galaxies, the probability of this is negligible.
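A quick Monte Carlo sketch of the two experiments, with illustrative numbers of my own (an average of 1 civilisation per galaxy versus 0.001, with counts modelled as Poisson): counting from a randomly chosen galaxy and counting from the galaxy of a randomly chosen civilisation give very different answers, which is the correction described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_galaxies = 1_000_000

for name, lam in [("1 per galaxy", 1.0), ("0.001 per galaxy", 0.001)]:
    counts = rng.poisson(lam, n_galaxies)      # civilisations in each galaxy

    # Experiment 1: pick a random galaxy and count the civilisations in it.
    random_galaxy = rng.choice(counts, 10_000)

    # Experiment 2: pick a random civilisation and count the civilisations in
    # *its* galaxy (size-biased: crowded galaxies are more likely to be picked).
    per_civilisation = np.repeat(counts, counts)
    observer_galaxy = rng.choice(per_civilisation, 10_000)

    print(f"hypothesis: on average {name}")
    print(f"  random galaxy:     mean count = {random_galaxy.mean():.3f}")
    print(f"  observer's galaxy: mean count = {observer_galaxy.mean():.3f}, "
          f"P(only your own civilisation) ≈ {(observer_galaxy == 1).mean():.3f}")
```

Under the rare hypothesis an observer almost always sees exactly one civilisation (their own) in their galaxy, whereas under the common hypothesis they usually see more, so “I see only my own civilisation” is evidence for rarity once the sampling is handled correctly.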
The Milky Way was “chosen” because it happens to be the galaxy we are in, so we shouldn’t update on that as if we had first chosen a galaxy to study and then found life in it. Similarly, we can’t even be sure we can update on the fact that there is life in at least one place in the universe, because we don’t know how many empty universes are out there. The only thing we can conclude from our own existence is that life is possible in the universe.
I bought a Meta Quest headset two weeks ago based on this recommendation, and I completely agree. It is a very effective way to motivate myself to get exercise. The Thrill of the Fight is particularly effective for high-intensity exercise/interval training. I normally don’t like interval training, but when someone is beating you up, you have to react! I previously thought my max heart rate was around 190; today I learned it is at least 205!
One potential downside I see is that TOTF is violent in a realistic way. Normally, I don’t worry about this in computer games: I don’t think that clicking the mouse or pressing some keys is likely to translate to anything in the real world. However, in VR, the actions you take to hit an opponent are exactly the same as you would take in the real world. So for the first time, I have been worried about whether a game could make me or others violent. This can of course be solved by simply playing other games.
A more general failure mode of VR exercise is to overestimate how much exercise you get when playing more relaxed games such as Beat Saber. Due to the headset, the sweat-to-exercise ratio is higher than for, say, running or biking, so I sometimes think that I have been exercising more than I really have. This is the opposite effect of swimming, where I don’t notice any sweat and so tend to underestimate how much I have exercised.
Epistemic status: I don’t know much about the subject. The following is stated much stronger than I believe it.
Counterargument: The market consists mostly (I assume? Please correct me if I’m wrong) of quants and traders trading other people’s money, either rich people’s or pension funds’. If the quants can make good trades by buying stocks that will increase and selling stocks that won’t, they will be rewarded. Hence, such trades are incentivised and we should expect the EMH to hold for stock prices. However, no one is really incentivised to tell the owners “look, I think the world is going to end soon, you should just spend your money instead”. If the world was obviously going to end soon, some financial advisors would say so. But no quant will be paid to work out whether the probability of the world ending is 1% or 10%, in order to decide if the client should spend their money now. So we should not expect the EMH to hold for interest rates.
I agree with this comment. Don’t treat markets as an oracle regarding some issue, if there’s no evidence that traders in the market have even thought about the issue.
Agreed. A recent example of this was the long delay in early 2020 between “everyone who pays attention can see that corona will become a pandemic” and stocks actually starting to fall.
You can eat chips with a fork! Instead of stabbing them with the fork you attack from the side so the chip is between two prongs of the fork.
Shouldn’t you get a notification when there are reactions to your post? At least in the batched notifications. The urgency/importance of reactions is somewhere between replies, where you get the notification immediately, and karma changes, where the default is that it is batched.