No, because “we live in an infinite universe and you can have this chocolate bar” is trivially better. And “We live in an infinite universe and everyone not on earth is in hell” isn’t really good news.
You’re conflating responsibility/accountability with things that don’t naturally follow from them. And I think you know that last line was clearly B.S. (the entire original post was about something which is not identical to accountability; you should have known that the most reasonable answer to that question is “agentiness”). The post alleges that considering officers’ work higher, or considering them smarter, is not the entirety of the distinction between the hierarchies; after all, if the only difference were brains or status, there would be no need for TWO distinct hierarchies. Status and brains are continuously distributed throughout both hierarchies (rather than sharply divided, with even the lowest officer significantly smarter or higher status than the highest soldier), so on brains or status alone it would seem reasonable to simply merge them into one.
One thing which might help to explain the difference is the concept of “agentiness”, not linked to certain levels of difficulty of roles, but rather to the type of actions performed by those roles. If true, then the distinguishing feature between an officer and a soldier is that officers have to be prepared to solve new problems which they may not be familiar with, while soldiers are only expected to perform actions they have been trained on. For example, an officer may have the task of “deal with those machine gunners”, while a soldier would be told “sweep and clear these houses”. The officer has to creatively devise a solution to a new problem, while the soldier merely has to execute a known decision tree. Note that this has nothing to do with the difficulty of the problem. There may be an easy solution to the first problem, while the second may be complex and require very fast decision making on the local scale (in addition to the physical challenge). But given the full scope of the situation, it is easy to look at the officer and say “I think you would have been better off going further around and choosing a different flank, to reduce your squad’s casualties; but apparently you just don’t have that level of tactical insight. No promotion for you, maybe next time”. To the soldier, it would be more along the lines of “You failed to properly check a room before calling it clear, and missed an enemy combatant hiding behind a desk. This resulted in several casualties as he ambushed your squadmates. You’re grounded for a week.” The difference is that an officer is understood to need special insight to do his job well, while a soldier is understood to just need to follow orders without making mistakes. It’s much easier to punish someone for failing to fulfill the basic requirements of their job than it is to punish them for failing to achieve an optimal result given vague instructions.
EDIT: You’ve provided good reason to expect that officers should get harsher punishments than soldiers, given the dual hierarchy. I claim that the theory of “agentiness” as the distinguishing feature between these hierarchies predicts that officers will receive punishments much less severe than your model would suggest, while soldiers will be punished more harshly. In reality, officers don’t seem to be held accountable to the degree your model predicts, given their status, while soldiers are held more accountable. This is evidence in favor of the “agentiness” model, not against it as you originally suggested. The core steps of my logic: the “agentiness” model predicts that officers are punished less severely than you’d otherwise expect, and soldiers more severely; therefore the fact that, in the real world, officers are not punished as harshly as you’d otherwise expect is evidence for the “agentiness” model at the expense of any model which doesn’t predict that. If you disagree with those steps, please specify where and how. If you disagree with an unstated or implied assumption outside of these steps, please specify which. If I’m not making sense, or if I seem exceedingly stupid, there has probably been a miscommunication; try to point at the parts that don’t make sense so I can try again.
Doesn’t that follow from the agenty/non-agenty distinction? An agenty actor is understood to make choices where there is no clear right answer. It makes sense that mistakes within that scope would be punished much less severely; it’s hard to formally punish someone if you can’t point to a strictly superior decision they should have made but didn’t. Especially considering that even if you can think of a better way to handle that situation, you still have to not only show that they had enough information at the time to know that that decision would have been better (“hindsight is 20/20”), but also that such an insight would have been strictly within the requirements of their duties (rather than it requiring an abnormally high degree of intelligence/foresight/clarity/etc.).
Meanwhile, a non-agenty actor is merely expected to carry out a clear set of actions. If a non-agenty actor makes a mistake, it is easy to point to the exact action they should have taken instead. When a cog in the machine doesn’t work right, it’s simple to point it out and punish it. Therefore it makes a lot of sense that they get harsher punishments, because their job is supposed to be easier. Anyone can imagine a “perfect” non-agenty actor doing their job, as a point of comparison, while imagining a perfect “agenty” actor requires that you be as good at performing that exact role, including all the relevant knowledge and intelligence, as such a perfect actor.
Ultimately, it seems like observing that agenty actors suffer less severe punishments ought to support the notion that agentiness is at the least believed to be a cluster in thingspace. Of course, this will result in some unfair situations; “agenty” actors do sometimes get off the hook easy in situations where there was actually a very clear right decision and they chose wrong, while “non-agenty” actors will sometimes be held to an impossible standard when presented with situations where they have to make meaningful choices between unclear outcomes. This serves as evidence that “agentiness” is not really a binary switch, thus marking this theory as an approximation, although not necessarily a bad approximation even in practice.
...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.
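To make that concrete, here is a toy worked example (the utility numbers are made up purely for illustration): adding one person with positive but below-average welfare raises the total while lowering the average, so an axiom demanding that the ranking survive adding a member is already a vote for the total.

```latex
% Illustrative numbers only: population A is four people at utility 10;
% A+ is A plus one extra person at utility 2.
\[
\mathrm{Total}(A) = 4 \times 10 = 40, \qquad \mathrm{Total}(A^{+}) = 40 + 2 = 42,
\]
\[
\mathrm{Avg}(A) = 10, \qquad \mathrm{Avg}(A^{+}) = \frac{42}{5} = 8.4 .
\]
% Total ranks A+ above A; average ranks A above A+. Requiring the order to be
% preserved when the extra member is added simply selects the total ordering.
```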
Better: randomly select a group of users (within some minimal activity criteria) and offer the test directly to that group. Publicly state the names of those selected (make it a short list, so that people actually read it, maybe 10-20) and then after a certain amount of time give another public list of those who did or didn’t take it, along with the results (although don’t associate results with names). That will get you better participation, and the fact that you have taken a group of known size makes it much easier to give outer bounds on the size of the selection effect caused by people not participating.
You can also improve participation by giving those users an easily accessible icon on Less Wrong itself which takes them directly to the test, and maybe a popup reminder once a day or so when they log on to the site if they’ve been selected but haven’t done it yet. Requires moderate coding.
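Here is a minimal sketch of what I mean by random selection plus outer bounds (the user names, numbers, and 0-to-1 score scale are illustrative assumptions, not real Less Wrong data): with a selection of known size, the worst the non-responders can do to your estimate is bounded by pretending they all scored at the floor or all at the ceiling.

```python
import random

def select_participants(active_users, k=15, seed=0):
    """Randomly pick k users meeting the activity criteria."""
    rng = random.Random(seed)  # fixed seed so the selection is auditable
    return rng.sample(active_users, k)

def outer_bounds(responses, n_selected, low=0.0, high=1.0):
    """Worst-case bounds on the group mean when some selected users don't respond.

    Non-respondents are assumed to all score `low` (lower bound)
    or all score `high` (upper bound).
    """
    n_missing = n_selected - len(responses)
    total = sum(responses)
    lower = (total + n_missing * low) / n_selected
    upper = (total + n_missing * high) / n_selected
    return lower, upper

# Example: 15 hypothetical users selected, 11 responded with scores in [0, 1].
selected = select_participants([f"user{i}" for i in range(200)], k=15)
scores = [0.7, 0.4, 0.9, 0.6, 0.8, 0.5, 0.7, 0.3, 0.6, 0.9, 0.5]
print(outer_bounds(scores, n_selected=15))  # roughly (0.46, 0.73)
```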
She responds “I’m sorry, but while I am a highly skilled mathematician, I’m actually from an alternate universe which is identical to yours except that in mine ‘subjective probability’ is the name of a particularly delicious ice cream flavor. Please precisely define what you mean by ‘subjective probability’, preferably by describing in detail a payoff structure such that my winnings will be maximized by selecting the correct answer to your query.”
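For what it’s worth, one payoff structure that would satisfy her is a proper scoring rule. The sketch below (the function names and the 0.7 belief are just illustrative) shows that under a quadratic, Brier-style payoff, expected winnings are maximized by reporting your actual degree of belief.

```python
import numpy as np

def quadratic_payoff(reported_p, outcome):
    """Payoff for reporting probability `reported_p` that the event happens.

    This is a (negated) Brier score, a proper scoring rule: expected payoff
    is maximized by reporting your true degree of belief.
    """
    return 1.0 - (outcome - reported_p) ** 2

def expected_payoff(reported_p, true_p):
    """Expected payoff if the event really happens with probability true_p."""
    return true_p * quadratic_payoff(reported_p, 1) + (1 - true_p) * quadratic_payoff(reported_p, 0)

true_belief = 0.7
reports = np.linspace(0, 1, 101)
best = reports[np.argmax([expected_payoff(r, true_belief) for r in reports])]
print(best)  # 0.7 -- honest reporting maximizes expected winnings
```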
Written before reading the comments; the answer was decided within or close to the 2-minute window.
I take both boxes. I am uncertain of three things in this scenario: 1) whether the number is prime; 2) whether Omega predicted I would take one box or two; and 3) whether I am the type of agent that will take one box or two. If I take one box, it is highly likely that Omega predicted this correctly, and it is also highly likely that the number is prime. If I take two boxes, it is highly likely that Omega predicted this correctly and that the number is composite. I prefer the number to be composite, therefore I take both boxes, anticipating that when I do so I will (correctly) be able to update to 99.9% probability that the number is composite.
Thinking this through actually led me to a bit of insight on the original Newcomb’s problem, namely that last part about updating my beliefs based on which action I choose to take, even when that action has no causal effect on the subject of my beliefs. Taking an action allows you to strongly update your belief about which action you would take in that situation; in cases where that fact is causally connected to others (in this case Omega’s prediction), you can then update through those connections.
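A toy version of that update, with made-up numbers (the 50% prior is purely illustrative, and I’m assuming the scenario’s pairing of a predicted two-boxing with a composite number and a 99.9% predictor accuracy):

```python
# Toy illustration of updating on your own action (numbers are illustrative).
# Assumption: Omega wrote down a composite number iff it predicted two-boxing,
# and its predictions match the actual choice 99.9% of the time.
accuracy = 0.999          # P(prediction matches my actual choice)
prior_composite = 0.5     # illustrative prior before deciding anything

# P(I two-box | composite) = P(Omega's two-box prediction was correct) = accuracy
# P(I two-box | prime)     = P(Omega's one-box prediction was wrong)   = 1 - accuracy
likelihood_composite = accuracy
likelihood_prime = 1 - accuracy

posterior_composite = (likelihood_composite * prior_composite) / (
    likelihood_composite * prior_composite + likelihood_prime * (1 - prior_composite)
)
print(posterior_composite)  # 0.999: observing my own choice carries the update
```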
It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus get the gatekeeper to probably break some immersion) are going to be generally bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respects as though the threat was real. This would definitely not apply to most people though, and I would not be shocked to discover that getting to the required level of immersion isn’t humanly feasible except in very rare edge cases.
His answer isn’t random. It’s based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a different estimate than he did, then you still ought to update towards his estimate (Aumann’s Agreement Theorem).
This does illustrate the point that simply stating your final probability distribution isn’t really sufficient to tell everything you know. Not surprisingly, you can’t compress much past the actual original evidence without suffering at least some amount of information loss. How important this loss is depends on the domain in question. It is difficult to come up with a general algorithm for useful information transfer even just between rationalists, and you cannot really do it at all with someone who doesn’t know probability theory.
Does locking doors generally lead to preventing break-ins? I mean, certainly in some cases (cars most notably) it does, but in general, if someone has gone up to your back door with the intent to break in, how likely are they to give up and leave upon finding it locked?
Nitpicking is absolutely critical in any public forum. Maybe in private, with only people who you know well and have very strong reason to believe are very much more likely to misspeak than to misunderstand, nitpicking can be overlooked. Certainly, I don’t nitpick every misspoken statement in private. But when those conditions do not hold, when someone is speaking on a subject I am not certain they know well, or when I do not trust that everyone in the audience is going to correctly parse the statement as misspoken and then correctly reinterpret the correct version, nitpicking is the only way to ensure that everyone involved hears the correct message.
Charitably, I’ll guess that you dislike nitpicking because you already knew all those minor points, they were obvious to anyone reading, and they don’t have any major impact on the post as a whole. The problem is that not everyone who reads Less Wrong has a fully correct understanding of everything that goes into every post. They don’t spot the small mistakes, whether those are inconsequential math errors or a misapplication of some minor rule. And an error that is small in this particular context may be a large error in another context. If you mess up your math when doing Bayes’ Theorem, you may thoroughly confuse someone who is weak at math and trying to follow how it is applied in real life. In the particular context of this post, getting the direction of a piece of evidence wrong is inconsequential if the magnitude of that evidence is tiny. But if you are making a systematic error that causes you to get the direction of certain (usually small-magnitude) types of evidence wrong, then you will eventually make a large error. And unless you are allowed to call out errors dealing with small-magnitude pieces of evidence, you won’t ever discover it.
I’d also like to say that just because a piece of evidence is “barely worth mentioning” when listing out evidence for and against a claim, does not mean that that evidence should be immediately thrown aside when found. The rules which govern evidence strong enough to convince me that 2+2=3 are the same rules that govern the evidence gained from the fact that when I drop an apple, it falls. You can’t just pretend the rules stop applying and expect to come out ok in every situation. In part you can gain practice from applying the rules to those situations, and in part it’s important to remember that they do still apply, even if in the end you decide that their outcome is inconsequential.
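To put rough numbers on the “small systematic error eventually becomes a large error” point above, here is a toy sketch (the 1.1 likelihood ratio and the count of fifty observations are made up for illustration): flipping the direction of many individually negligible pieces of evidence moves you from strong confidence in a hypothesis to strong confidence against it.

```python
# Each weak observation really favors hypothesis H with a likelihood ratio
# of 1.1 -- barely worth mentioning on its own.
lr = 1.1
n = 50
prior_odds = 1.0  # 1:1

correct_odds = prior_odds * lr ** n          # evidence counted in the right direction
flipped_odds = prior_odds * (1 / lr) ** n    # the same evidence counted backwards

print(correct_odds)  # ~117:1 in favor of H
print(flipped_odds)  # ~1:117 against H -- a systematic sign error compounds badly
```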
While you can be definitively wrong, you cannot be definitively right.
Not true. Trivially, if A is definitively wrong, then ~A is definitively right. Popperian falsification is trumped by Bayes’ Theorem.
Note: This means that you cannot be definitively wrong, not that you can be definitively right.
They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.
Alternatively, if you define solution such that any two given solutions are equally acceptable with respect to the original problem.
WOW. I predicted that I would have a high tolerance for variance, given that I was relatively unfazed by things that I understand most people would be extremely distressed by (failing out of college and getting fired). I was mostly right in that I’m not feeling stress, exactly, but what I did not predict was a literal physical feeling of sickness after losing around $20 to a series of bad plays (and one really bad beat, although I definitely felt less bad about that one after realizing that I really did play the hand correctly). It wasn’t even originally money from my wallet; it came from one of the free offers linked elsewhere in this thread. But, wow, this advice is really really good. I can only imagine what it’s like with even worse variance or for someone more inclined to stress about this sort of thing.
If I can do something fun, from my house, on my own hours, without any long-term commitment, and make as much money as a decent paying job, then that sounds incredible. Even if it turns out I can’t play at high levels, I don’t mind playing poker for hours a day and making a modest living from it. I don’t really need much more than basic rent/food/utilities in any case.
Online poker (but it seems kinda hard)
Actually, does anyone know any good resources for getting up to speed on poker strategies? I’m smart, I’m good at math, I’m good at doing quick statistical math, and I’ve got a lot of experience at avoiding bias in the context of games. Plus I’m a decent programmer, so I should be able to take even more of an advantage by writing a helper bot to run the math faster and more accurately than I otherwise could. It seems to me that I should be able to do well at online poker, and this would be the sort of thing that I could likely actually get motivated to do to make money (which I unfortunately need to do).
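As a sense of the kind of math such a helper could run, here is a minimal sketch of a pot-odds / expected-value check (the function names and example numbers are mine; estimating the win probability is the genuinely hard part):

```python
def pot_odds(pot, to_call):
    """Fraction of the final pot you are putting in by calling."""
    return to_call / (pot + to_call)

def call_is_plus_ev(win_probability, pot, to_call):
    """A call is +EV when your equity exceeds the pot odds.

    EV(call) = win_probability * pot - (1 - win_probability) * to_call
    """
    ev = win_probability * pot - (1 - win_probability) * to_call
    return ev > 0, ev

# Example: $10 pot, $5 to call, estimated 30% equity.
print(pot_odds(10, 5))               # 0.333... -> need more than 33% equity to call
print(call_is_plus_ev(0.30, 10, 5))  # (False, -0.5): folding is slightly better
```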
Anyway, if anyone has any recommendations for how to go about the learning process and getting into playing, I’d love to hear them. I’ll try to comment back here after doing some independent research as well.
You see an animal at a distance. It looks like a duck, walks like a duck, and quacks like a duck. You start to get offended by the duck. Then, you get closer and realize the duck was a platypus and not a duck at all. At this point, you realize that you were wrong, in point of fact, to be offended. You can’t claim that anything that looks like a duck, but which later turns out not to be, is offensive. If it later turns out not to be a duck then it was never a duck, and if you haven’t been able to tell for sure yet (but will be able to in the future) then you need to suspend judgement until you can. Particularly since the only possible defense, that the thing is not a duck, is to show you that it is not a duck, which will happen in time.
Given no other information to strictly verify, any supposed time-traveled conversation is indistinguishable from someone not having time-traveled at all and making the information up. The true rule must depend on the actual truth of information acquired, and the actual time such information came from. Otherwise, the rule is inconsistent. It also looks at whether your use of time travel actually involves conveying the information you gained; whether such information is actually transferred to the past, not merely whether it could be. Knowing that Amelia Bones has some information about 4 hours in the future will only restrict your time travel if you would transmit that information to the past—if you would act significantly differently knowing that than you would have otherwise. If you act the same either way, then you are not conveying information.
In short, the rule is that you cannot convey information more than 6 hours into the information’s relative past, but that does not necessarily mean that you cannot go to a forbidden part of the past after learning it. It merely means that you cannot change your mind about doing so after learning it. Worth noting: if you plan on going to the past, and then receive some information from 6 hours in the future that changes your mind, you have conveyed information to the past. I’m not sure how that is handled, other than that the laws of the universe are structured as to never allow it to happen.
After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of “myself” is something along the lines of:
In short form, a “future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience”
In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
It also must have memory, and that memory must include a stream of consciousness which leads back to the stream of consciousness I am experiencing right now, to approximately the same fidelity as I currently have memory of a continuous stream of consciousness going back to approximately adolescence.
Essentially, the idea is that in order for something to count as being me, it must be the sort of thing which I can imagine becoming in the future (future relative to my conscious experience; I feel like I am progressing through time), while still believing myself to be me the whole time. For example, imagine that, through some freak accident, there existed a human living in the year 1050 AD who passed out and experienced an extremely vivid dream which just so happens to be identical to my life up until the present moment. I can imagine waking up and discovering that to be the case; I would still feel like me, even as I incorporated whatever memories and knowledge he had so that I would also feel like I was him. That situation contains a “future evolution” of me in the present, which just means “a thing which I can become in the future without breaking my stream of consciousness, at least not any more than normal sleep does today”.
This also implies that anything which diverged from me at some point in the past does not count as “me”, unless it is close enough that it eventually converges back (this should happen within hours or days for minor divergences, like placing a pen in a drawer rather than on a desk, and will never happen for divergences with cascading effects (particularly those which significantly alter the world around me, in addition to me)).
Obviously I’m still confused too. But I’m less confused than I used to be, and hopefully after reading this you’re a little less confused too. Or at least, hopefully you will be after reflecting a bit, if anything resonated at all.