Open thread, September 16-22, 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
What’s a fun job for someone with strong technical skills?
I just graduated with a PhD in pure math (algebraic topology). I’ve done 50 Project Euler problems, and I know Java and Python, although I’ve never coded anything that anyone else uses. I’m looking for work and making a list of nonacademic job titles that involve solving interesting problems, and would appreciate suggestions. So far I’m looking at:
Data scientist / analytics
Software engineer
Actuary. I have been told this is very close to analytics.
Actuary as a fun job? This goes against everything I have previously heard about it.
I don’t know if being an actuary is fun or not, but they have one of the highest ratings for job satisfaction. Some more info.
I know an actuarial technician who says he loves his job in Lansing (about 90 minutes from where he lives, in Grand Rapids) so much that he doesn’t want to switch to a job in Grand Rapids.
Dolphin trainer. Also fun for people without strong technical skills.
http://sub.garrytan.com/its-not-the-morphine-its-the-size-of-the-cage-rat-park-experiment-upturns-conventional-wisdom-about-addiction is an article about a shift in perspective on how rats behave when given access to a morphine drip.
Basic concept: when given a larger cage with more space, potential things to interact with, and other rats, rats are much less likely to use only the morphine drip, compared to when they are given a small standard lab cage.
Edit per NancyLebovitz: This is evidence that offers a different perspective on the experiments that I had heard about and it seemed worth sharing. It is not novel though, since apparently it was done in the late ’70s and published in 1980. See wikipedia link at: http://en.wikipedia.org/wiki/Rat_Park
I agree that the information is important, but the “rat park” research was done in the ’70s. It’s not novel, and I suggest it’s something people didn’t want to hear.
I wonder why addiction is common among celebrities—they aren’t living in a deprived environment.
Are you sure this is true?
I’m guessing you had this in mind already, but to clarify anyway, there’s a pretty major availability bias since anything celebrities are involved in is much more likely to be reported on, leading to a proliferation of news stories about celebrities with addiction problems.
On the other hand though, celebrities are a lot more likely than most people to simply be given drugs for free, since drug dealers can make extra money if their customers are enticed by the prospect of being able to do drugs with celebrities. And of course that’s aside from the fact that the drug dealers themselves can be enticed by the star power and want to work their way into their circles.
No real statistics, just claims.
This article uses the model that a fair number of people are just vulnerable to addiction (about 1 in 12), and celebrity doesn’t affect the risk except that celebrities have more access to drugs.
Second thought: What’s implied is that either humans are less resistant to addiction than rats, or there’s something about civilization in general which makes people less resistant to addiction.
More addictive things are produced for humans, and it is easier for humans to get them.
The human mind can create the “small cage” effect even without physical constraints. Sometimes people feel ‘trapped’ metaphorically.
I’m not so sure that’s true. Being scrutinised 24/7 sounds like one hell of a constraint on my possible actions to me.
It could also make the environment feel non-genuine or unreal.
Oops. Upon review, I fell victim to a classic blunder. “Someone shared something on Facebook that I have not heard of before? It must be novel. I should share it with other people because I was unaware of it and it caused me to update my worldview.”
Thanks. I’ll edit the original post to reflect this.
You call this a “different perspective”, but the perspective you’re linking to is the only one I’d heard before. I thought Rat Park was the conventional wisdom. So I was initially confused about what the new, different perspective was.
My previous information was basically just “Morphine=Addicted rats.” Which was really, really out of date and simplistic.
Rat Park’s idea “The size/interactivity of the cage significantly changes addiction rates.” makes sense, but I was unaware of it until recently.
So if Rat Park was the conventional wisdom, I was behind the conventional wisdom and was just catching up to it when I posted.
Following up on a post I made last month, I’ve put up A Non-Technical Introduction to AI Risk, collecting the most engaging and accessible very short introductions to the dangers of intelligence explosion I’ve seen. I’ve written up a few new paragraphs to better situate the links, and removed meta information that might make it unsuitable for distribution outside LW. Suggestions for further improvements are welcome!
That is a good, readable summary of the main issues. A minor, purely aesthetic suggestion: the underlined red hyperlinks look like misspellings at first glance.
Thanks! Unfortunately, I’m not sure how to get rid of those without upgrading my Wordpress or switching themes. The links are an ugly orange by default, and changing them to blue apparently leaves the underline orange.
For what it’s worth, the outer elements have a CSS “color” attribute of (255, 114, 0) (orange), while the inner elements have a CSS color of (0, 0, 128) (blue). The former color attribute is set in a CSS file; the latter color attribute is set in the HTML itself.
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does? If the former, what kinds of stuff do you have on your list?
No, but some try.
It isn’t really possible, since in many cases it isn’t even computable, let alone feasible for currently existing human brains. Approximations are the best we can do, but I still consider it the best available epistemological framework, for reasons similar to those given by Jaynes.
Stuff like this.
People’s brains can barely manage to multiply three-digit numbers together, so no human can do “Bayesian probabilistic reasoning”. So for humans it’s at best “the latter, while using various practical tips to approximate the benefits of the former” (e.g. being willing to express your certainty in a belief numerically when such a number is asked of you in a discussion).
What ArisKatsaris said is accurate—given our hardware, it wouldn’t actually be a good thing to keep track of explicit probabilities for everything.
I try to put numbers on things if I have to make an important decision, and I have enough time to sit down and sketch it out. The last time I did that, I combined it with drawing graphs, and found I was actually using the drawings more—now I wonder if that’s a more intuitive way to handle it. (The way I visualize probabilities is splitting a bar up into segments, with the length of the segments in proportion to the length of the whole bar indicating the probability.)
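A toy Python rendering of that segmented-bar idea (the width and the segment symbols here are arbitrary choices):

```python
def prob_bar(probs, width=40, symbols="#*.=+"):
    """Render a discrete distribution as one bar split into segments,
    each segment's length proportional to its probability."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return "|" + "".join(s * round(p * width) for s, p in zip(symbols, probs)) + "|"

print(prob_bar([0.5, 0.3, 0.2]))
# |####################************........|
```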
One of my friends does keep explicit probabilities on unknowns that have a big effect on his life. I’m not sure what all he uses them for. Sometimes it gets… interesting, when I know his value for an unknown that will also affect one of my decisions, and I know he has access to more information than I do, but I’m not sure whether I trust his calibration. I’m still not really sure what the correct way to handle this is.
It’s a gold standard—true Bayesian reasoning is actually pretty much impossible in practice. But you can get a lot of mileage off of the simple approximation: “What’s my current belief, how unlikely is this evidence, oh hey I should/shouldn’t change my mind now.”
Putting numbers on things forces you to be more objective about the evidence, and also lets you catch things like “Wait, this evidence is pretty good—it’s got an odds ratio of a hundred to one—but my prior should be so low that I still shouldn’t believe it.”
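That catch is easy to sketch in odds form; a minimal example, with the 1-in-1000 prior invented for illustration:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Evidence with a 100:1 odds ratio, against a 1-in-1000 prior:
post = posterior_odds(1 / 1000, 100)  # -> 0.1, i.e. odds of 1:10
print(post / (1 + post))              # posterior probability ~ 0.09: still unlikely
```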
With actual symbols and specific numbers? No. But I do visualize approximate graphs over probability distributions over configuration spaces and stuff like that, and I tend to use the related but simpler theorems in Fermi calculations.
So I found this research a while ago saying, essentially, that willpower is only limited if you believe it is—subjects who believed their willpower was abundant were able to power through tasks without an extra glucose boost.
I was excited because this seemed different from the views I saw on LessWrong, and I thought based on what I’d seen people posting and commenting that this might warrant a big update for some people here. Without searching the site, I posted about it, and then was embarrassed to find out that it had been posted here a couple of years before...
What puzzles me, though, is that people here still seem to talk about ego depletion as if it’s the only model of “willpower” there is. Is it that not everyone has seen that study, or is it that people don’t take it seriously compared to the other research? I’m curious.
There’s been a replication of that (I’m assuming you’re talking about the 2010 paper by Job, Dweck and Walton). I haven’t looked at it in detail. The abstract says that the original result was replicated but you can still observe ego-depletion in people who believe in unlimited willpower, you just have to give them a more exhausting task.
Now I ache to know how people who believe the result of that experiment perform.
So the false belief somehow affects reality, but not enough to make itself actually true?
What’s the difference between “reality” and “actually true”?
In this case, you might phrase it more as ‘the asymptotics are the same, but believing in infinite willpower has a better constant factor’.
Now we need to test the people who know this fact and see when they falter.
Also, I want to see a shounen manga that applies this knowledge.
“X is true” means “X is a map, and X corresponds to some territory Y”. “X is real” means “X is territory.”
The relevant contrast, though, is between ‘affects’ and ‘makes itself’. We could rephrase Ritalin: ‘The inaccurate map changes the territory (in a way that results in its improved accuracy), but not enough to make itself (fully) accurate.’
Thanks! That explains it. And from looking through it, it looks like the ego depletion after you give them enough work is the same regardless of their beliefs, as per gwern’s comment.
Pretty sure the causation goes in the opposite direction; it’s trivial to notice how it works for yourself, very hard to check how it works for others, and then the typical mind fallacy happens.
I recently made a big update in my model of how much influence one can have on one’s longevity. I had thought that genetics accounted for the vast majority of variance, but it turns out the real number is something like 20-30%. This necessitates more effort thinking about optimizing lifestyle factors. Does anyone know of a good attempt at a quantified analysis of how lifestyle factors affect lifespan? Most of the resources I find make vague qualitative claims; as such, it’s hard to compare between different classes of risks.
My impression is that unusually high longevity is strongly influenced by genes, but that still might leave open the possibility that lifestyle makes a big difference in the midrange.
Citation needed
Punch
genetics heritability longevity
into Google Scholar; first hit says:
Does this imply that the other 75% is due to life choices? This isn’t obvious to me.
No, that is not what heritability means. The other 75% is the myriad of other influences: environment, chaotic chance, and life choices.
http://link.springer.com/article/10.1007%2Fs00439-006-0144-y
Is there much value in doing psychological tests at some regular interval, to catch any mental problem in its early stages even if one is not acutely aware of any problem?
Intellectual hygiene.
I am slowly coming to terms with the limits of my knowledge. Tertium non datur is something that I should not apply outside of formal systems; instead I should always think “or I could be wrong in a way I do not realize yet”. In all my beliefs I should explicitly plant the seed of their destruction: if this event occurs, I should stop believing this, or at least seriously doubt it.
Examples?
For which of the two? An example for the first is to think “He will either buy the car, or leave, or take a course of action I have not yet foreseen”, where the action could be something malevolent, or something could happen that renders my plans irrelevant. An example for the second is to think “I believe people are motivated by money. If I see a sizeable group of people living in voluntary poverty I should stop believing this.”
That’s not quite the law of the excluded middle. In your first example, leaving isn’t the negation of buying the car but is just another possibility. Tertium non datur would be “He will either buy the car or he will not buy the car.” It applies outside formal systems, but the possibilities outside a formal system are rarely negations of one another. If I’m wrong, can someone tell me?
Still, planting the “seed of destruction” definitely seems like a good idea, although I’d advise caution about specifying only one event that would trigger it. This idea is basically ensuring beliefs are falsifiable.
A few years ago, in my introductory psych class in college, the instructor was running through possible explanations for consciousness. He got to Roger Penrose’s theory of quantum computations in the microtubules being where consciousness came from (replacing one black box with another black box, oh joy). I burst out laughing, loudly, because it was just so absurd that someone would seriously propose that, and that other scientists would even give such an explanation the time of day.
The instructor stopped midsentence, and looked at me. So did 200-odd other students.
I kept laughing.
In hindsight, I think the instructor expected more solemnity.
Would you care to explain why it’s absurd? :-)
Because there is nothing in neural activity or structure that even suggests that macroscopic quantum states have anything to do with it. You don’t need to invoke anything more exotic than normal cellular protein chemistry and electrochemistry to get very interesting behavior.
Penrose is grasping at straws trying to make his area of study applicable to something he considers capital-M Mysterious, with (apparently, to those who actually work with it) little understanding of the actual biology. It’s a non sequitur, as if he were suggesting that resonant vibrations in the steel girders of skyscrapers in Manhattan were what let the people there trade stocks.
True, but not consciousness. While I agree that Penrose’s model is a wild unsubstantiated speculation, until we have a demonstration of algorithmic consciousness without any quantum effects, his approach deserves a thoughtful critique, not a hearty laugh.
Thing is, it’s no more clear how quantum fluctuations give rise to subjective experience than how chemistry gives rise to subjective experience. So why claim that it’s in the quantum instead of in the chemicals?
Because he thinks that humans are capable of some form of hypercomputation (he bases this mainly on some Gödelian stuff), and that quantum gravitational effects are what allows it.
Quantum gravity doesn’t help with hypercomputation, which doesn’t help with Goedel, which doesn’t help with consciousness. The most plausible part is that quantum gravity allows hypercomputation, but no one but Penrose believes that.
I still don’t understand the assertion that humans actually think with logic that is vulnerable to Gödelian stuff. Why should we blow up at the Gödel incompleteness theorem at all?
If we are a TM computation (which is the standard reductionist explanation), we are vulnerable to the halting problem (which he also argues we can solve), and if we are a formal system of some kind (also standard, although maybe not quite so commonly said), Gödel etc. applies.
(I was using Gödelian in the broader sense, which includes halting-esque problems.)
I would argue strenuously against the idea that we resemble a formal system at all. Our cells act like a network of noisy differential equations that, with enough training, can approximate some of their outputs to resemble those of mathematically defined systems—AKA, what you do once you have learned math.
We also aren’t Turing machines. Not in the sense that we aren’t Turing complete or capable of running the steps that a Turing machine would do, but in the sense that we, again, are an electrochemical system that does a lot of things natively without resorting to much in the way of Turing-style computation. A network grows that becomes able to do some task.
We are not stuck in the formal system or the computation; we are approximating it via learned behavior, and when we hit a wall in the formal system or the computation we stop and say ‘well, that doesn’t work’. That doesn’t mean we transcend the issues; it means that we go do something else.
Because we are more confused, collectively, about quantum fluctuations than we are about chemistry. And we’re also confused about the causes of subjective experience. So “quantum explains consciousness” feels more compelling than “chemistry explains consciousness”. See also: god of the gaps.
I agree, and I would bet a priori 10:1 that chemistry is enough, no quantum required, but until and unless it’s experimentally confirmed/simulated, other ideas are worth considering.
That sounds like privileging the hypothesis to me.
You should be embarrassed by this story. Behaving this way comes across as very smug and disrespectful because it is disruptive and wastes the time of hundreds of people.
I’m honestly not embarrassed by this story because it’s “smug and disrespectful”, I’m embarrassed because the more I stare at it the more it looks like a LWy applause light (which I had not originally intended).
For your next act, you should take physics and start guffawing at a professor’s description of the Copenhagen interpretation.
Upvoted for mention of “applause lights”.
It’s an applause light for actual working neuroscientists too. One which richly deserves its status. Seriously, you will get eye rolls and chuckles if you mention something like that at a neuroscience talk where I work.
Behaving like this in a classroom is probably not a good way to communicate knowledge to one’s classmates or to the instructor. (Although sometimes the first signal of disrespect communicates an important fact.)
But if the instructor presented the quantum mysteriousness hypothesis as one worth considering (as opposed to: “you know, here is a silly idea some people happen to believe”), then the instructor was wasting the time of hundreds of people. (What’s next? Horoscopes as a serious hypothesis explaining human traits?)
He ‘should’ feel embarrassment if the laugh interfered with his social goals in the context. All things considered, it most likely did not (assuming he did not immediately signal humiliation and submission, which it appears he didn’t). He ‘should’ laugh at your attempt to shame him and treat the parent as he would any other social attack by a (socially distant and non-threatening) rival.
Your causal explanation is incorrect—it is a justification not a cause. Signalling implications other than disruption and time wasting account for the smug and disrespectful perception.
Right, assuming he doesn’t care about the fact that hundreds of his peers now think he’s the kind of person who bursts into loud, inappropriate laughter apropos of nothing. (i.e. assuming he isn’t human.)
My model of the expected consequences of the signal given differs from yours. That kind of attention probably does more good than harm, again assuming that the description of the scene is not too dishonest. It’d certainly raise his expected chance of getting laid (which serves as something of a decent measure of relevant social consequences in that environment.)
Incidentally, completely absurd nonsense does not qualify as ‘nothing’ for the purpose of evaluating humor potential. Nerds tend to love that. Any ‘inappropriateness’ is a matter of social affiliation. That is, those who consider it inappropriate do so because they believe that the person laughing does not have enough social status to be permitted to show disrespect to someone to whom the authority figure assigns high status, regardless of the merit of the positions described.
In the very short term maybe, but in the longer term not pissing professors off is also useful.
I don’t think Penrose’s hypothesis is so obviously-to-everybody absurd (for any value of “everybody” that includes freshmen) that you can just laugh it off expecting no inferential distances. (You made a similar point about something else here.)
Sometimes. I was drawing on the assumption that in a first-year philosophy subject the class sizes are huge, largely anonymous, not often directly graded by the lecturer, and a mix of students from a large number of different majors. This may differ for different countries or even between universities.
As a rule of thumb I found that a social relationship with the professor was relevant in later-year subjects with smaller class sizes, more specialised subject matter and greater chance of repeat exposure to the same professor. For example I got research assistant work and a scholarship for my postgrad studies by impressing my AI lecturer. Such considerations were largely irrelevant for first-year generic subjects where I could safely consider myself to be a Student No. with legs.
You are right that the inferential distance will make most students not get the humour or understand the implied reasoning. I expect that even then the behaviour described (laughing with genuine amusement at something and showing no shame if given attention) will be a net positive. Even a large subset of the peers who find it obnoxious or annoying will also intuitively consider the individual to be somewhat higher status (or ‘more powerful’ or ‘more significant’, take your pick of terminology) even if they don’t necessarily approve of them.
[re-reads thread, and notices the OP mentioned there were more than 200 students in the classroom] Good point.
That kind of status is structural power, not social power in Yvain’s terminology, and I guess there are more people in the world who wish to sleep with Rebecca Black than with Donald Trump. [googles for Rebecca Black (barely knew she was a singer) and realizes she’s not the best example for the point; but still] And probably there’s also a large chunk of people who would just think the student is a dork with little ability to abide by social customs. But yeah, I guess the total chance for them to get laid would go up—high-variance strategies and all that.
This is PUA nonsense.
So?
Why do you think it did not raise his chance of getting laid?
I’ve done some classroom teaching, and I’ve seen how other students react to students who behave similarly (eye rolling, snickering, etc.) I’ve also seen this from the student side, people like to heap scorn on students who act like this (when they aren’t around.)
To be clear, I’m not saying everything PUAs say is nonsense. They’ve said so much that by sheer random chance some of it is probably good. But most PUA stuff is terrible armchair theorizing by internet people who seem very angry at women.
ETA: It’s interesting how much of a perspective change classroom teaching gives you. In a typical classroom, students can’t easily see the faces of most of their peers, and their peers reveal a lot because of this.
It depends on, among other things, how much the students like the lecturer and what kind of subject is being taught (I gather that honesty is valued more, and politeness less, in the hard sciences than in humanities).
PUA isn’t the only thing that Sturgeon’s Law applies to, though.
My experience with classroom teaching suggests two things:
Hesperidia’s cocky laughter is not the sort of thing that makes students heap scorn on other students, except perhaps among the most sycophantic teacher’s pets or within cliques of less secure rivals who want to reassure each other.
The behaviours knb is equating are not the same thing. They have different social meanings and different expected results. While for knb the most salient factor may be that each of those behaviours signals lack of respect for authority, not all things that potentially lower the status of the teacher are equal or equivalent. Amused laughter that is not stifled by attention is not the same thing as eye rolling.
I agree with your implicature and wonder whether we have correctly resolved the ambiguity in ‘nonsense’. It seems it could either mean “It is not the case that this would raise his chance of getting laid” or “It is not the case that chance of getting laid is sufficiently correlated with social status as to be at all relevant as a measure thereof”. I honestly don’t know which one is the most charitable reading because I consider them approximately equally wrong.
As an aside, my motive for throwing in ‘chance of getting laid’ was that often ‘status’ is considered too ephemeral or abstract and I wanted to put things in terms that are clearly falsifiable. It also helps distinguish between different kinds of status and the different overlapping social hierarchies. The action in question is (obviously?) most usefully targeted at the “peer group” hierarchy rather than the “academia prestige” hierarchy. If you intend to become a grad student in that university’s philosophy department, silence is preferred to cocky laughter. If you intend to just complete the subject and continue study in some other area while achieving social goals with peers (including getting high-quality partners for future group work), then the cocky laughter will be more useful than silence.
Is social status the only thing you care about when in a classroom?
And “sufficiently correlated” isn’t good enough, per Goodhart’s law. You can improve your chances of getting laid even more by getting drunk in a night club in a major city, and you can bring them close to 1 by paying a prostitute.
It’s a minor concern, often below getting rest, immediate sense of boredom or the audiobook I’m listening to. I’m certainly neither a model student (with respect to things like lecture attendance and engagement as opposed to grades) nor a particularly dedicated status optimiser.
I think you must have interpreted my words differently than I intended them. I would not expect that reply if the meaning had come across clearly but I am not quite sure where the confusion is.
I think there must be some miscommunication here. There is a difference between considering a metric to be somewhat useful as a means of evaluating something and outright replacing one’s preferences with a lost purpose. I had thought we were talking about the first of these. The quote you made includes ‘at all relevant’ (a low standard) and in the context was merely a rejection of the claim ‘nonsense’.
So, you said:
ISTM this doesn’t follow unless you assume he had no goals other than social ones that his burst of laughter could have interfered with; am I missing something?
OK, I see it now.
Ahh, pardon me. I was replying at that time to the statement “You should be embarrassed by this story.”, where embarrassment is something I would describe as an emotional response to realising that you made a social blunder. It occurs to me now that I could have better conveyed my intended meaning if I included the other words inside my quotation marks like:
Thank you for explaining. I was quite confused about what wasn’t working in that communication.
The ‘nonsense’ part of your claim is false. The ‘PUA’ title is (alas) not something I have earned (opportunity costs) but I do expect this is something that a PUA may also say if the subject came up.
By way of contrast I consider this to be naive moralizing mixed with bullshit. Explanation:
There is a claim about what hesperidia ‘should’ do. That means one of:
Hesperidia’s actions are not optimal for achieving his goals. You are presenting a different strategy which would achieve those goals better and he would be well served to adopt them.
Hesperidia’s actions are not optimal for achieving your goals. You would prefer it if he stopped optimising for his preferences and instead did what you prefer.
As above but with one or more of the various extra layers of indirection around ‘good for the tribe’, ‘in accordance with norms that exist’ and ‘the listener’s preferences are also served by my should, they can consider me an ally’.
It happens that the first meaning would be false. When it comes to the latter meanings the question is not ‘Is this claim about strategy true?’ but instead ‘Does knb have the right to exert dominance and control over hesperidia on this particular issue with these terms?’. My answer to that is ‘No’.
I prefer it when social advice of this kind is better optimised for the recipient, not the convenience of the advice giver. When the ‘should’ is not about advice at all but instead about setting and enforcing norms, then I insist that the injunction should, in fact, benefit the tribe. In this case the tribe isn’t the beneficiary. We would be better off if the nonsense the professor was citing could be laughed at rather than treated with deference. The tribe isn’t the beneficiary, the existing power structure is. I oppose your intervention.
(Nothing personal, I am replying mostly because I am curious about the theory, not because I think the issue is dramatically important.)
Ignoring that that is not what happened (and that he probably explained the laughter to anyone there that he actually cared about, like friends), you are entirely too eager to designate someone who lacks this property as ‘not human’.
This sort of utilitarian thinking focused entirely on one’s own goals without considering the goals of others is what leads people to believe that they should cheat on all of their tests as much as they want. If tests in school are only for signalling and the knowledge is unimportant, then you should do as little work as possible to maximize your test scores, including buying essays, looking over shoulders, paying others to take tests for you, the whole works.
Edit: I am not saying I totally disagree with this sort of thinking. I would describe myself presently as on the fence over whether one should just go ahead and be a sociopath in favor of utilitarian goals. It makes me a little bit uncomfortable, but on the other hand it seems to be the logical result. Many people bring in other considerations to try to bring it back to moral “normalcy” but they generally strike me as ad hoc and not very convincing.
At least it woke up everyone who was sleeping in the lecture.
“Hey Scott,” I said. The technician was a familiar face, since I used the booths twice each day.
“Hey David,” he replied. “Chicago Six?”
“Yup.”
I walked into the booth, a room of sorts resembling an extremely small elevator, and the doors shut behind me. There was a flash of light, and I stepped out of the booth again—only to find that I was still at Scott’s station in San Francisco.
“Shucks,” said Scott. “The link went down, so the system sent you back here. So just wait a moment… oh shit. Chicago got their copy of you right before the link went down, so now there’s one of you in Chicago, too.”
“Well, uh… two heads are better than one, I guess?” I said.
“Yeah, here’s what we do in this situation,” said Scott, ignoring me. “We don’t want two copies of you running around, so generally we just destroy the unwanted copy.”
“Yeah… I guess that sounds like the way to go,” I said.
“So yeah, just get back in the booth and we’ll destroy this copy of you.”
I stepped back into the booth again, and the doors closed. There was a fla--
Meanwhile, I was still walking to my office in Chicago, unaware that anything unusual had happened.
There are a lot of versions of this but very few stories that take advantage of the ability to cheaply copy someone dozens of times.
I recently read the sourcebook for the Eclipse Phase pen-and-paper RPG, and in the flavor text it has the following description of the criminal faction “Pax Familiae”:
Needless to say, Eclipse Phase seems pretty awesome.
You should read the Oracle AI sourcebook Sandberg wrote for it.
Thanks for the tip; I probably would not otherwise have seen it, and I am interested in reading it.
And here I thought I’d had an original idea.
My main worry would be that my copy hadn’t actually got to Chicago. I’d want to make damn sure of that before I let the original be killed.
I suspect that if I were sufficiently uninvested in my continuing existence to be willing to terminate it on being assured that a similar-enough person lives in Chicago (which I can easily imagine being), I wouldn’t require enormous confidence in that person’s existence… a quick phone call would suffice.
Strikes me as a perfectly reasonable approach, except the check would be done quickly and automatically, not leaving room for human decisions.
The implication was that this is the usual case; the only thing the connection going down did was mess up this last check.
So… it turns out some people actually do believe that there are fundamentally mental quantities not reducible to physics, and that these quantities explain the behaviour of living things. I confess I’m a bit surprised. I had the impression that everyone these days agreed that physics actually does describe the motion of all the atoms, including those in living brains. But no, believers in the ghost in the machine walk among us, and claim that the motions of living things cannot be predicted even in principle using physics. Something to bear in mind when discussing simulations; obviously such a man will never be convinced that the upload is the person no matter how close the simulation, even unto individual atoms.
I’m mystified that you thought everyone in the world is a materialist-reductionist. What on earth would make you believe that?
The typical mind fallacy, obviously!
But no, what surprised me was that people would seriously assert that “physics does not apply”, and then turn around and say “no law of physics is broken”.
What’s so surprising about extrapolating “different laws in different jurisdictions” to “different laws in different magisteria”? Consider the mental model where physics is not “fundamental”. Then it follows that “physics does not apply” (to a different magisterium) is logically distinct from “laws of physics are broken” (in the same magisterium).
I thought this was interesting: perhaps the first use I’ve read of odds in a psychology paper. From Sprenger et al 2013:
Can blackmail-type information usefully be compared to things like NashX or Mutually Assured Destruction?
Most of my friends have information on me which I wouldn’t want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with other things that seem less valuable than that information. Building a friendship seems to be based on gradually getting this information on each other, without either of us having significantly more on one than the other.
I don’t think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
If you want to do something, at least one of the following must be true:
1. The task is simple.
2. Someone else has taught you how to do it.
3. You have a lot of experience performing similar tasks.
4. As you’re trying to perform the task, you receive lots of feedback about how you’re doing.
5. You’ve performed an extremely thorough analysis of the task which accounts for all possibilities.
If a task is complicated (1 is false), then it consists of many sub-tasks, all of which are possible points of failure. In order to succeed at every sub-task, either you must be able to correct failures after they show up (4 is true), or you must be able to avoid all failures before encountering any of them. In order to avoid all failures before encountering any of them, you must already know how to perform the task, and the only ways to obtain this knowledge are through experience (3), through being taught (2), and through analysis (5).
Except I’m not sure there aren’t other ways to obtain the relevant knowledge. If you want to build a house, one option is to try building lots of houses until finally you’re experienced enough that you can build good houses. Another option is to have someone else who already knows how to build a house teach you. Another is to think carefully about how to build a house, coming up with an exhaustive list of every way you could possibly fail to build a house, and invent a technique that you’re sure will avoid all of those failure modes. Are there any other ways to learn to build a house, besides experience, being taught, and analysis? Pretty sure there aren’t.
I would change 2. to be something like: Someone else has taught you how to do it, or you have instructions on how to do it.
and include
You have unlimited time and resources so you can ‘brute force’ it (try all random combinations until the task is complete)
Yeah, I was considering having instructions to be a type of having been taught.
In the real world, people don’t have unlimited time and resources, so I don’t see the purpose of adding number 6.
While technically true, I find this to be a confusing way to think... if it would take you 2^100000 operations to brute force, is this really any different from it being impossible?
That would depend on the type of task—for computational tasks a series of planners and solvers do many ‘jobs’ without knowing what they are doing—just minimising a function repeatedly until the right result appears.
They typically aren’t literally trying all combinations though (or if they are, the space of configurations is not too large). In this sense, then, the algorithm does know what it is doing, because it is narrowing down an exponentially large search space to a manageable size.
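A toy illustration of that narrowing (an invented example, not any particular planner or solver): a blind hill-climber that only ever lowers a mismatch count typically finds a 5-letter target in a few hundred steps, versus 26^5 blind guesses:

```python
import random
import string

def hill_climb(target, alphabet=string.ascii_lowercase, steps=100_000):
    """Repeatedly tweak one character and keep the tweak if it does not
    increase the mismatch count; no model of the task is needed."""
    loss = lambda s: sum(a != b for a, b in zip(s, target))
    current = [random.choice(alphabet) for _ in target]
    for _ in range(steps):
        if loss(current) == 0:
            break
        candidate = list(current)
        candidate[random.randrange(len(candidate))] = random.choice(alphabet)
        if loss(candidate) <= loss(current):
            current = candidate
    return "".join(current)

print(hill_climb("hello"))  # reliably 'hello', without trying all 26**5 strings
```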
Is there much known about how to recall information you’ve memorised at the right time / in the right context? I can memorise pieces of knowledge just fine with Anki, and if someone asks me a question about that piece of information I can tell them the answer no problem. However, recalling in the right situation that a piece of information exists and using it—that I’m finding much more of a challenge. I’ve been trying to find information on instilling information in such a way as to recall it in the right context for the last few days, but none of the avenues of inquiry I’ve searched have yielded anything on the level I’m wanting. Most articles I’ve found are talking about specific good habits, or memory, rather than their mechanisms and how to engage them.
I would try imagining being in the given situation, and then doing the thing. Then hopefully in the real situation the information would jump into my mind.
To do it Anki-style, perhaps the question card could contain a specific instruction to imagine something. So the pattern is not just “read the question, say answer, verify answer”, but “read the question, imagine the situation, say answer, imagine the answer, verify answer”, or something like this.
Without imagining the situation, I believe the connection will not be made when the real situation arrives. Unless...
Maybe there is another way. Install a generic habit of asking “what things am I supposed to remember in situation X?” for some specific values of X. Then you have two parts. The first part is to use imagination to teach yourself asking this question in situation X. The second part is to prepare the lists for each situation, and memorize them doing Anki. The advantage is that if you change the list later, you don’t have to retrain the whole habit.
Note: I never tried any of this.
Only somewhat relatedly… something I found useful when recovering from brain damage was developing the habit of:
a) telling myself explicitly, out loud, what I was about to do, why I was about to do it, and what I needed to do next, and
b) when I found myself suddenly lost and confused, explicitly asking myself, out loud, what I was doing, why I was doing it, what I needed to do next.
I found that the explicit verbal scaffolding often helped me remember things that the more implicit mechanisms that were damaged by the injury (I had a lot of deficits to attention, short-term memory, that sort of thing) could no longer do.
It also got me a lot of strange looks, which I somewhat perversely came to appreciate.
I have sorted the 50 US states in such a way that the total Levenshtein string difference between consecutive states is minimal:
Massachusetts, Mississippi, Missouri, Wisconsin, Washington, Michigan, Maryland, Pennsylvania, Rhode Island, Louisiana, Indiana, Montana, Kentucky, Connecticut, Minnesota, Tennessee, New Jersey, New Mexico, New Hampshire, New York, Delaware, Hawaii, Iowa, Utah, Idaho, Ohio, Maine, Wyoming, Vermont, Oregon, Arizona, Arkansas, Kansas, Texas, Nevada, Nebraska, Alaska, Alabama, Oklahoma, Illinois, California, Colorado, Florida, Georgia, Virginia, West Virginia, South Carolina, North Carolina, North Dakota, South Dakota
http://protokol2020.wordpress.com/2013/09/13/order-by-string-proximity/
I don’t know, was it done before?
Did you order this by a greedy-local algorithm that always takes the next state minimising the difference with the current one; or by a global minimisation of the total difference of all pairs? Presumably the latter is unique but the former changes the order depending on the starting state.
This is a traveling salesman problem, so it is unlikely that Thomas used an algorithm that guarantees optimality. If I understand your proposed greedy algorithm correctly, the distances at the beginning would be shorter than the distances at the end, which I do not observe in his list. A greedy heuristic that would not produce that effect would be to consider the state to be a bunch of lists and at every step concatenate the two lists whose endpoints are closest. This is a metric TSP, so the Christofides algorithm is no more than 1.5x optimal.
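A minimal sketch of the greedy-local heuristic under discussion, assuming (per Thomas’s clarification downthread) that the distance counts only insertions and deletions:

```python
from functools import lru_cache

def indel(a: str, b: str) -> int:
    """Insert/delete-only edit distance: len(a) + len(b) - 2 * LCS(a, b)."""
    @lru_cache(maxsize=None)
    def lcs(i, j):
        if i == 0 or j == 0:
            return 0
        if a[i - 1] == b[j - 1]:
            return lcs(i - 1, j - 1) + 1
        return max(lcs(i - 1, j), lcs(i, j - 1))
    return len(a) + len(b) - 2 * lcs(len(a), len(b))

def greedy_chain(words, start):
    """Nearest-neighbour heuristic: always append the closest unused word.
    The result depends on the starting word and is generally suboptimal."""
    chain, rest = [start], set(words) - {start}
    while rest:
        nearest = min(rest, key=lambda w: indel(chain[-1], w))
        chain.append(nearest)
        rest.remove(nearest)
    return chain

print(greedy_chain(["Iowa", "Ohio", "Idaho", "Utah", "Texas"], "Iowa"))
```

A global minimum over these pairwise distances is, as noted, a travelling salesman problem.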
You are right. I’ll call this sorting order Levenshtein-TSP ordering.
It is a global minimization. It takes 261 insert/delete operations from Massachusetts to South Dakota.
I got many different solutions with 261 insert/delete operations, some with 262 or more, but none with 260 or less.
It’s a challenge to everybody interested to do better. I am not sure if it’s possible.
Not clear what the number of operations has to do with it; isn’t the challenge to find a smaller total Levenshtein difference?
Incidentally, does it make a difference if you consider the end of the string to wrap around to the beginning?
The Levenshtein difference IS the number of insert/delete operations necessary to transform string A into string B.
Wrapping around, a circular list, is another option, yes.
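For anyone taking up the challenge, a minimal scorer for a candidate ordering under this insert/delete metric (only the first three states are shown here; the full list above is claimed to total 261):

```python
def indel(a: str, b: str) -> int:
    """Insert/delete-only edit distance, computed by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] if ca == cb else 1 + min(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def chain_cost(order):
    """Sum of distances over consecutive pairs in the ordering."""
    return sum(indel(x, y) for x, y in zip(order, order[1:]))

states = ["Massachusetts", "Mississippi", "Missouri"]  # ...the full 50-state list above
print(chain_cost(states))
```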
Ah! Well then, I learned something today, I can go to bed. :)
I hope people do not mind me creating these. I live in a timezone earlier than the American ones, and I do a periodical thread on another forum anyway, so I am in the zone.
I always did them on UTC. I believe the servers are in Melbourne, so as long as it’s Monday in +11 ;-)
Are there resources for someone who is considering running a free local rationality workshop? If not does anyone have any good ideas for things that could be done in a weekly hour-long workshop? I was surprised that there weren’t any free resources from CFAR for exactly this.
The How to Run a Successful LessWrong Meetup booklet probably has some helpful crossover ideas.
A wiki page would be helpful.
The first idea is to play a “rationalist taboo”. Prepare pieces of paper with random words, and tell people to split in pairs, choose a random word, and explain it to their partner. This should only require a short explanation that it is forbidden to use not just the linguistically related words, but also synonyms and some other cheap tricks (such as telling a name of a famous person when explaining an idea). -- Then you could encourage people to use “be specific” on each other in real life. (Perhaps make it a game, that they each have to use it 3 times within the rest of the meetup.)
You could have them use CFAR’s calibration game, and then try making some estimates together (“will it rain tomorrow?”), and perhaps make a prediction market. In making the estimates together, you could try to explore some biases, like the conjunction fallacy (first ask them to estimate something complex, then to estimate individual components, then review the estimate of the complex thing)… I am not sure about that part. Or you could ask people to make 90% confidence intervals for… mass of the Moon, number of people in Bolivia, etc. (things easy to find in wikipedia)… first silently on paper, then telling you the results to write them on the blackboard (the hypothesis is that way more than 10% of the intervals will be wrong).
You could do an experiment on anchoring/priming, for example giving each person a die, and a questionnaire where the first question would be “roll a die, multiply the result by 10 and add 15, and write it as your first answer” and “how many % of countries in Africa are members of the UN? write it as your second answer”, then collect the results and write all the estimates on the blackboard in columns by the first answer (as in “people who had 25 as the first answer provided the following values: …; people who had 35 provided these values: …”). People are not allowed to talk while filling out the questionnaire. Another example of priming could be asking the first group in the questionnaire “what year did the first world war start?” and then “when was the steam engine invented?” (emphasising that if you don’t know, make your best guess), and asking another group “when was the first crusade?” and then “when was the steam engine invented?” (the hypothesis is that the first group will give later estimates for the steam engine than the second group).
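If you want to eyeball the anchoring effect afterwards, a minimal tabulation sketch (the (anchor, estimate) data below is invented):

```python
from collections import defaultdict
from statistics import mean

# Invented (anchor, estimate) pairs; anchors are 10 * die + 15, i.e. 25..75.
answers = [(25, 20), (25, 30), (45, 40), (45, 35), (65, 55), (75, 60)]

by_anchor = defaultdict(list)
for anchor, estimate in answers:
    by_anchor[anchor].append(estimate)

for anchor in sorted(by_anchor):
    estimates = by_anchor[anchor]
    print(f"anchor {anchor}: {estimates} (mean {mean(estimates):.1f})")
# If anchoring is real, the mean estimates should drift upward with the anchor.
```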
Are people more productive using laptops or desktops?
In my own experience, working 2 hours directly on a laptop means that my back tenses up. That doesn’t happen with the desktop setup that I use.
Having a keyboard directly next to the monitor just results in bad posture. Over the long run I wouldn’t expose myself to it even if my back weren’t as sensitive as it is.
This problem is underspecified, consider:
A laptop on your kitchen table
A desktop ergonomically identical to a laptop on your kitchen table
A laptop in your lap in a library on an university campus surrounded by people to ask for advice
A desktop with multiple screens at adjustable heights and a super ergonomic seating solution unlikely to be available where you wanted to use the laptop and a superior pointing device that can’t be moved around easily.
Basically, shitty laptop and desktop setups are identical, but they can take advantage of very different types of upgrades:
A laptop can be brought to different environments that enable productivity on different things, and can also be used at times you’d otherwise just be waiting.
A desktop can be upgraded to be much more powerful, and can be hooked up to superior (expensive and bulky) input and output devices.
Either way, ergonomics matter greatly and are easy to get wrong. A desktop has some powerful advantages in setting up a good ergonomic environment, but more importantly since that environment is stationary anyway you can’t get the benefits of both it and the laptop at once. On the other hand, some of the environments the laptop can be moved to might include a better ergonomic setup than you could afford yourself.
Can’t answer your question with a statistic, but in my humble experience, the smaller the device, the easier it feels for me to disconnect from it. I find it more demanding to use a desktop since I have to sit in the same place, in the same position, and the time needed to turn it on/off and put it in standby mode is much greater in comparison to, say, a smartphone.
Laptops can be brought into more distracting environments, and as a result of this I’ve developed a strong habit of wasting time on my laptop. I have no such habit with my desktop, and therefore when I sit down at my desktop I am reasonably productive.
There’s a Culture fanfic thread on this month’s Media Thread. I compiled a list of what little there is.
I am interested in how, in the early stages of developing an AI, we might map our perception of the human world (language) to the AI’s view of the world (likely pure maths). There have been previous discussions such as AI ontology crises: an informal typology, but it has been said to be dangerous to attempt to map the entire world down to values.
If we use an Upper Ontology and expand it slightly (so as not to get too restrictive or potentially conflicting) for Friendly AI’s concepts, this would assist in giving a human view of the current state of the AI’s perception of the world.
Are there any existing ontologies on machine intelligence, and is this something worth exploring now to test on paper?
If anyone got that microeconomics vs macroeconomics comic strip, feel free to explain… Possibly related: inefficient hot dogs.
Not sure I understand it well either, but that never stopped me before :-D
I think the upper-left quadrant of “describes well / never happens” is the domain of toy theories and toy problems. Microeconomics likely landed there because it tends to go “Imagine a frictionless marketplace with two perfectly rational and omniscient agents...”
The lower-right quadrant of “describes badly / happens all the time” is the domain of reality economics. It’s a mess and nobody understands it well but yes, it happens all the time. Macroeconomics was probably placed there because, while it has its share of toy theories, it does concern itself with empirical studies of what actually happens in reality when interest rates go up or down, money supply fluctuates, FX rates are fixed or left to float, etc.
Traditional microeconomics makes greater assumptions about the economic actors (that they are utility maximizing, have perfect information, are in competitive markets with many participants, etc.) and based on those assumptions it is accurate in describing what happens mathematically. Macroeconomics doesn’t make as many assumptions because it’s based on the observed behavior of market participants in aggregate (GDP is just the sum of the four components of GDP, wages can be shown to be sticky in the downward direction, and such), but macroeconomists are wrong or surprised all the time about the path of GDP and unemployment.
Note that I don’t necessarily agree with this characterization, but that’s what he’s going for.
I am still confused about aspects of the torture vs specks problem. I’ll grant for this comment that I would be willing to choose torture for 1 person for 50 years to avoid a dust speck in the eye of 3^^^3 people. Numerically I’ll just assign −3^^^3 utilons to specks and −10^12 utilons to torture. Where confusion sets in is if I consider the possibility of a third form of disutility between the two extremes, for example paper cuts.
Suppose that 1 papercut is −100 utilons and 50 years of torture is −10^12 utilons, so the expected utility in either case is the same*. However, my personal preference would be to choose 10^10 papercuts over 50 years of torture. Similarly, if a broken bone is worth −10^4 utilons I would rather that the same 10^10 people got a papercut instead of only 10^8 people having a broken bone. The best case would be if I could avoid 3^^^3 specks in exchange for somewhat fewer than 3^^^3 just-barely-more-irritating specks, instead of torturing, breaking, or cutting anyone.
Therefore, maximizing average or total expected utility doesn’t seem to capture all my preferences. I think I can best describe it as choosing the maximum of the minimum individual utilities while still maximizing total or average utility. As such I am inclined to choose specks over torture, probably as a result of trying to find a more palatable tradeoff with broken bones or papercuts or slight-more-annoying specks. In real life there are usually compromises, unlike in hypotheticals. Still, I wonder if it might be acceptable (or even moral) to accept only 99% of the maximum possible utility if it allows significant maximin-ing of some otherwise very negative individual utilities.
*assume a universal population of 4^^^4 individuals and the roles are randomly selected so that utility isn’t affected by changing the total number of individuals.
I think this is one of the legitimate conclusions you should draw from torture vs dust specks. It’s not that your intuition is necessarily wrong (though it may be) but that a simple multiplicative rule may NOT accurately describe your utility function. You can’t choose torture based on simple addition, but that doesn’t necessarily mean choosing torture isn’t what you should do given your utility function.
I don’t think it’s the specifics of the multiplicative accumulation of individual utilities that matters; just imagine that however I calculate the utility of torture and papercuts, there is some lottery where I am VNM-indifferent between 10^10 papercuts and torture for 50 years. 10^10 + 1 papercuts would be too much and I would opt for torture; 50 years + 1 second of torture would be too much and I would opt for papercuts. However, given the VNM-indifferent choice, I would still have a preference for papercuts over torture because it maximizes the minimum individual utility while still maximizing overall utility. (−10^12 utility is the minimum individual utility when choosing torture, −100 utility is the minimum individual utility when choosing papercuts, total utility is −10^12 either way, and average utility is −10^12 / (10^10 + 1) either way, so I am fairly certain the latter two are indifferent between the choices. If I’ve just made a math error, that would help alleviate my confusion.)
To me, at least, it seems like this preference is not captured by utilitarianism using VNM-utility. I think it’s almost possible to explain it in terms of negative utilitarianism but I don’t actually try to minimize overall harm, just minimize the greatest individual harm while keeping total or average utility maximized (or sufficiently close to maximal).
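One way to make that tie-breaking rule concrete (the utilon figures are the invented ones from this thread):

```python
# Outcomes as (individual_utility, number_of_people) pairs.
torture   = [(-10**12, 1)]
papercuts = [(-100, 10**10)]

def total(outcome):
    return sum(u * n for u, n in outcome)

def worst_off(outcome):
    return min(u for u, _ in outcome)

def choose(*outcomes):
    """Maximise total utility first; break exact ties by the maximin rule,
    preferring the outcome whose worst-off individual fares best."""
    return max(outcomes, key=lambda o: (total(o), worst_off(o)))

print(choose(torture, papercuts))  # equal totals of -10**12, so papercuts win
```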
Obviously you’re going to get wrong specific answers if you’re just pulling exponents out of thin air. The torture vs. specks example works because the answer would be the same if specks were worth the same as a year of torture or 10^-10 as much or 10^-1000 as much.
Getting approximate utilities is tricky; general practice is to come up with two situations you’re intuitively indifferent about, where one involves a small event, and the other involves a dice throw and then a big event only with a certain probability dependent on it. Only AFTER you’ve come up with this kind of preference do you put numbers on anything, although often you’ll find this unnecessary, as just thinking about it like this resolves your confusion.