I assume you either linked to this in the post, or it has been mentioned in the comments, but I did not catch it in either location if it was present, so I’m linking to it anyway: http://intelligence.org/files/Non-Omniscience.pdf contains a not merely computable but tractable algorithm for assigning probabilities to a given set of first-order sentences.
“S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1” I don’t see how this follows. Perhaps it is because, if the system was sound, it would never prove more than one value for U() for a given a, therefore by the principle of explosion it proves A()≠1? But that doesn’t seem to actually follow. I’m aware that this is an old post, but on the off chance that anyone ever actually sees this comment, help would be appreciated.
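For anyone else puzzling over the same step, here is the reading I can reconstruct, assuming S proves both implications and also proves that 42 ≠ 1000000:

$$
\begin{aligned}
S &\vdash A()=1 \rightarrow U()=42\\
S &\vdash A()=1 \rightarrow U()=1000000\\
\text{so } S &\vdash A()=1 \rightarrow \big(U()=42 \wedge U()=1000000\big)\\
\text{and, since } S &\vdash 42 \neq 1000000, \text{ also } S \vdash A() \neq 1.
\end{aligned}
$$

If that is the intended argument, then this particular step only needs ordinary reasoning inside S, not any assumption that S is sound.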
Personally, I fall on the ‘all of the above (except idea A)’ side of the fence. I primarily use LessWrong for the Main board, as it is an excellent source of well-edited, well-considered articles containing interesting or useful ideas. I want the remainder of the site to thrive, because without a large, active userbase and new users being attracted, I would expect the types of content I want to see to become less frequent. All of these ideas seem like good things to do, keeping in mind that if they do not actually support the goal of making good Main articles more frequent, then they are not good things, and it seems possible that some of them could backfire.
Well, this comes out differently under different interpretations. If there is a chance that I am being simulated, that is, that this is part of his determining my choice, then I give him $100. If the coin is quantum, that is, if there will exist other mes getting the money, I give him $100. If there is a chance that I will encounter similar situations again, I give him $100. If I were informed of the deal beforehand, I give him $100. Given that I am not simulated, given that the coin is deterministic, and given that I will never again encounter Omega, I don’t think I give him $100. Seeing as I can treat this entirely in isolation due to these conditions, I have the choice between -$100 and $0, of which the second is better. Now, this runs into some problems. If I were informed of it beforehand, I should have precommitted, and seeing as my choices given all the information shouldn’t change, this presents a difficulty. However, due to the uniqueness of this deal, there really does seem to be no benefit to any mes from giving him the money, and so it is purely a loss.
My resolution to this, without changing my intuitions to pick things that I currently perceive as ‘simply wrong’, would be that I value certainty. A 9⁄10 chance of winning x dollars is worth much less to me than a 10⁄10 chance of winning 9x/10 dollars. However, a 2⁄10 chance of winning x dollars is worth only barely less than a 4⁄10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100% (the bare expected values behind these comparisons are written out below). Now, this effect becomes weaker the closer the odds are, but more slowly than the dollar difference between the two options changes. So a 99% chance of x is barely affected by this compared to a 100% chance of 0.99x, but still by more than 0.01x, and the more certain option still dominates. I might take a 99% chance of x over a 100% chance of 0.9x, however, and I would definitely prefer a 99% chance of x over a 100% chance of 0.8x.
EDIT: Upon further consideration, this is wrong. If presented with the actual choice, I would still prefer 1A to 1B, but to maintain consistency I will now choose 2A > 2B.
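For reference, the bare expected dollar values in the comparisons above, before adding any value for certainty itself:

$$
0.9\,x = 1.0 \cdot 0.9x,\qquad
0.2\,x = 0.4 \cdot \tfrac{x}{2},\qquad
0.99\,x = 1.0 \cdot 0.99x,\qquad
0.99\,x > 1.0 \cdot 0.9x > 1.0 \cdot 0.8x .
$$

So the first three pairs are ties in expectation, and any preference within them comes purely from how much the certainty itself is worth, while in the last comparisons the riskier option is strictly better in expectation.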
http://www.fungible.com/respect/index.html This looks to be very related to the idea of “Observe someone’s actions. Assume they are trying to accomplish something. Work out what they are trying to accomplish,” which seems to be what you are talking about.
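As a toy illustration of that three-step recipe (my own minimal sketch, not anything from the linked page; the function name, the Manhattan-distance heuristic, and the candidate-goal set are all assumptions made up for the example):

```python
# Minimal goal-inference sketch: observe a trajectory, assume the agent is
# heading somewhere, and score each candidate goal by how often the observed
# moves actually reduce the distance to it.

def infer_goal(positions, candidate_goals):
    """positions: list of (x, y) states observed over time.
    candidate_goals: list of (x, y) goals considered plausible.
    Returns a dict mapping each goal to a normalized score (a crude posterior)."""

    def dist(a, b):
        # Manhattan distance on a grid.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    scores = {}
    for goal in candidate_goals:
        # Count the observed moves that bring the agent closer to this goal.
        toward = sum(
            1
            for before, after in zip(positions, positions[1:])
            if dist(after, goal) < dist(before, goal)
        )
        scores[goal] = toward + 1  # +1 smoothing so no goal is ruled out entirely

    total = sum(scores.values())
    return {goal: s / total for goal, s in scores.items()}


if __name__ == "__main__":
    observed = [(0, 0), (1, 0), (2, 0), (2, 1)]
    goals = [(5, 0), (2, 3), (0, 5)]
    print(infer_goal(observed, goals))  # (2, 3) comes out most probable
```

Obviously a real version would model noisy, possibly irrational behaviour rather than “every step should decrease distance,” but the observe / assume-a-goal / work-out-the-goal loop is the same.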
(Aware that this is 2 years late, just decided to post.) I find that I work, on average, somewhere between 2 and 3 times as fast when I am right up next to a deadline as when I have plenty of time.
Does it count if the state of trying lasted for a long (but now ended) time? Because if so, I kept on trying to create a bijection between the reals and the wholes until I was about 13, when I found an actual number that I could actually write down that none of my obvious ideas could reach, and could find an equivalent for all the non-obvious ones. (0.21111111..., by the way)
I would disagree with the phrasing you use regarding ‘human terminal values.’ Now, I don’t disagree that evolution optimized humans according to those criteria, but I am not evolution, and evolution’s values are not my values. I would expect that only a tiny fraction of humans would say that evolution’s values should be our values (I’d like to say ‘none,’ but radical neo-Darwinians might exist). Now, if you were just saying that those are the values of the optimization process that produced humanity, I agree, but that was not what I interpreted you as saying.