Open thread, Jan. 25 - Jan. 31, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
If you haven’t heard of it yet, I recommend the novel Crystal Society (freely available here; there’s also a $5 Kindle version).
You could accurately describe it as “what Inside Out would have been if it looked inside the mind of an AI rather than a human girl, and if the society of mind had been composed of essentially sociopathic subagents that still came across as surprisingly sympathetic and co-operated with each other due to game theoretic and economic reasons, all the while trying to navigate the demands of human scientists building the AI system”.
Brienne also had a good review of it here.
Hmm. The review scared me a bit, and the home page talking about incredibly nearsighted populist economics is a huge turn-off. Still, I probably need to read it.
Is the kindle version different in any way from the free mobi file? I’ll gladly spend $5 for good formatting or easier reading, but would prefer not to pay Amazon if they’re not providing value.
Haven’t compared the two, but I would assume no. The formatting on the Kindle version was nothing fancy, just standard.
I think I saw the author commenting that he’d have put it up on Kindle for free as well if it was possible, and there’s no mention of it on the story’s site, so it’s probably not intended as a “deluxe edition”.
What’s wrong with the economics on the home page? It seems fairly straightforward and likely. Mass technological unemployment seems at least plausible enough to be raised to attention. (Also.)
It (and your link) treat “employment” as a good. This is ridiculous—employment is simply an opportunity to provide value for someone. Goods and services becoming cheap doesn’t prevent people doing things for each other, it just means different things become important, and a larger set of people (including those who are technically unemployed) get more stuff that’s now near-free to create.
Goods and services becoming cheaper is basically the economists’ definition of progress, so that’s all good.
There is no natural law which ensures that everyone has earnings potential greater than cost of living. New tech isn’t making food or housing cheaper fast enough, and can’t be expected to in the future. AI could suddenly make most of the work force redundant without making housing or food free.
Indeed not, but that correct idea often leads people to the incorrect idea that robotics-induced disemployment, and subsequent impoverishment, are technological inevitabilities. Whether everybody is going to have enough income to eat depends on how the (increased) wealth of such a society is distributed. Basically, to get to the worst-case scenario, you need a sharp decline of interest in wealth redistribution, even compared to US norms. It’s a matter of public policy, not technological inevitability. So it’s not really the robots taking over people should be afraid of, it’s the libertarians taking over.
I am not sure what that is supposed to mean. There is enough food and living space to go round, globally, but it is not going to everyone who needs it, which is, again, a redistribution problem.
First, what’s “fast enough”? Look up statistics on what fraction of its income an average American family spent on food a hundred years ago versus now.
Second, why don’t you expect it in the future? Biosynthesizing food doesn’t seem to be a huge problem in the context that includes all-powerful AIs...
Fast enough would be Moore’s law—the price of food falling by 2x every couple of years. Anything less than this could lead to biological humans becoming economically unviable, even as brains in vats.
Like this?
Biosynthesized food is an extremely inefficient energy conversion mechanism vs., say, solar power. Even in the ideal case, the human body burns about 100 watts. When AGI becomes more power efficient than that, even magical 100%-efficient solar->food conversion isn’t enough for humans to be competitive. When AGI requires less than 10 watts, even human brains in vats become uncompetitive.
A future of all-powerful AIs is a future where digital intelligence becomes more efficient than biological. So the only solutions where humans remain competitive involve uploading.
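A back-of-the-envelope sketch of that energy comparison, using the ~100 W and 10 W figures quoted above; the electricity price and the ~20 W brain-only figure are illustrative assumptions of mine, not claims from the thread:

```python
# Rough yearly energy cost of a constant load. The price and the 20 W
# brain figure are illustrative assumptions, not measurements.
HOURS_PER_YEAR = 24 * 365   # 8760
PRICE_PER_KWH = 0.12        # assumed electricity price, dollars

def yearly_energy_cost(watts):
    """Dollar cost of running a constant load of `watts` for a year."""
    kilowatt_hours = watts * HOURS_PER_YEAR / 1000
    return kilowatt_hours * PRICE_PER_KWH

for label, watts in [("human body, ~100 W", 100),
                     ("human brain alone, ~20 W", 20),
                     ("hypothetical AGI, 10 W", 10)]:
    print(f"{label}: ${yearly_energy_cost(watts):.2f}/year")
# human body, ~100 W: $105.12/year
# human brain alone, ~20 W: $21.02/year
# hypothetical AGI, 10 W: $10.51/year
```

The absolute dollar amounts are tiny either way; the force of the argument is the ratio, since a competitor doing the same work on a tenth of the energy wins on margin.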
Why so? Human populations do not double every couple of years.
Hold on. We’re not talking about competition between computers and humans. You said that in the future there will not be enough food for all (biological) humans. That has nothing to do with competitiveness.
I think you are misremembering the context. Here’s the first thing he said on the subject:
and that is explicitly about the relationship between food cost and earning power in the context of AI.
I was expressing my reservations about the “New tech isn’t making food or housing cheaper fast enough” part.
Of course not everyone has earning potential greater than the cost of living. That has always been so. People in this situation subsist on charity (e.g. of their family) or they die.
As to an AI making work force redundant, the question here is what’s happening to the demand part. The situation where an AI says “I don’t need humans, only my needs matter” is your classic UFAI scenario—presumably we’re not talking about that here. So if the AI can satisfy everyone’s material needs (on some scale from basics to luxuries) all by itself, why would people work? And if it’s not going to give (meat) people food and shelter, we’re back to the “don’t need humans” starting point—or humans will run a parallel economy.
I take it jacob_cannell has in mind neither a benevolent godlike FAI nor a hostile (or indifferent-but-in-competition) godlike UFAI, in either of which cases all questions of traditional economics are probably off the table, but rather a gradual encroachment of non-godlike AI on what’s traditionally been human territory. Imagine, in particular, something like the “em” scenarios Robin Hanson predicts, where there’s no superduperintelligent AI but lots of human-level AIs, probably the result of brain emulation or something very like it, who can do pretty much any of the jobs currently done by biological humans.
If the cost of running (or being) an emulated human goes down exponentially according to something like Moore’s law, then we soon have—not the classic UFAI scenario where humans are probably extinct or worse, nor the benevolent-AI scenario where everyone’s material needs are satisfied by the AI—but an economy that works rather like the one we have now except that almost any job that needs a human being to do it can be done quicker and cheaper by a simulated human being than by a biological one.
At that point, maybe some biological humans are owners of emulated humans or the hardware they run on, and maybe they can reap some or all of the gains of the ems’ fast, cheap work. And, if that happens, maybe they will want some other biological humans to do jobs that really do need actual flesh. (Prostitution, perhaps?) Other biological humans are out of luck, though.
Given that jacob_cannell is talking about food and housing, I don’t think he has the ems scenario in mind.
I am not a big fan of ems, anyway—I think this situation as described by Hanson is not stable.
The scenario I think he has in mind is one in which there are both biological humans and ems; he identifies more with the biological humans, and he worries that the biological humans are going to have trouble surviving because they will be outcompeted by the ems.
(I’m pretty skeptical about Hansonian ems too, for what it’s worth.)
I think the Hansonian EM scenario is probably closer to the truth than the others, but it focuses perhaps too much on generalists. The DL explosion will also result in vastly powerful specialists that are still general enough to do complex human jobs, but still are limited or savant like in other respects. Yes, there’s a huge market for generalists, but that isn’t the only niche.
Take this Go AI for example—critics like to point out that it can’t drive a car, but why would you want it to? Car driving is a different niche, which will be handled by networks specifically trained for that niche to superhuman level. A generalist AGI could ‘employ’ these various specialists as needed, perhaps on fast timescales.
Specialization in human knowledge has increased over time, AI will accelerate that trend.
If people own the advanced robots or AIs that are responsible for most production, why would they be impoverished by them? More to the point, why would they want the majority of people who don’t own automated factories to be impoverished, since that means they would have no-one to sell to? There’s no law of economics saying that in a wealthy society most people would starve; rather, to keep an economy going in anything like its present form, you have to have redistribution. In such a future, tycoons would be pushing for basic income; it’s in their own interests.
The CFAR fundraiser has only a few days left, and is at $150k out of $400k. If you’re on the fence about donating, this is a good time. If you haven’t already, you might want to read Why CFAR?.
I can’t donate from this computer, but I intend to donate £875 (~$1250) before the fundraiser expires, representing four months of tithing upfront.
Donated, but it only came to $1205.
(CFAR, did you lose the currency selection option on your donate page in the past few days? I appreciated that option. It’s easier than guessing an approximate dollar amount and then potentially going back if it converts to a GBP amount sufficiently far from what I intended.)
Seeking comments
I’m trying a writing experiment, and want to design as much of a story as possible before starting writing it. I want to make sure I’m not forgetting any obvious details, or leaving out important ideas, so I’d appreciate any comments you can add to my draft design document at https://docs.google.com/document/d/1XcgNwELHCU-r7GuYUgDNDDIviThd8Y7Bdto_kMIcmlI/edit . Thank you for your help.
I’ve briefly looked it over and it’s interesting. You’ve obviously focused a lot on world-building, so kudos for that.
What I’m not seeing is an interesting development: there are many directions the plot could go, but you need to choose one and explore it.
I don’t know much about the literary trope of the last man on Earth, but it’s interesting that the protagonist might not be the last man, but the last sane man, akin to “I am legend”.
Will there be any kind of conflict? How do you plan the story to end?
And I’m still building. I neglected some numbers in the rocket design, and now it looks like I have to rebuild the whole idea from scratch.
I can see that. To me, the most interesting idea so far is that when the risk of gathering new information is that high, the correct decision to be made may be to act to eliminate a potential threat without knowing whether or not that threat really exists. My current intuition is that I’m probably going to build the story around that choice, and the consequences thereof. (Hopefully without falling into the same sorts of story-issues that cropped up in “The Cold Equations”.)
At the moment, my memories of high-school English classes say that the conflicts will primarily be “Man vs Nature” (eg, pulling a ‘The Martian’ to build infrastructure faster than it can glitch out) and “Man vs Self” (“Space madness!”), with some “Man vs Man” showing up later as diverging copies’ goals diverge.
I think I’ll keep things hopeful, and finish up with a successful launch to/arrival at Tau Ceti.
Ehrm… I think you should devote the main chunk of effort to developing the story and then derive the technical data you need. Very little of that world-building stuff will actually go onto the page (if you want some readers, that is).
I’ve settled reasonably firmly on the questions I want to focus on—how to make decisions when gathering useful information to help make those decisions carries a risk of an immense cost, especially when the stakes are high—and I’ve just finished working out the technical details of what my protagonist will have been doing up to the point where he can’t put off making those decisions any longer.
I have a rule-of-thumb for science-fiction, in that knowing what a character /can’t/ do is more important than knowing what they /can/, so by having worked out all the technical stuff up to that point, I now have a firm grasp of the limits my protagonist will be under when he has to deal with those choices. If I’m doing things right, the next part of the design process is going to be less focused on the technology, and more on decision theory, AI risk, existential risk, the Fermi Paradox and the Great Filter, and all that good, juicy stuff.
I’ll await the next iteration for further comments, then. Be sure to post it here!
I have read it and added several comments.
And I thank you for your input; I’ll definitely be using your feedback to improve the design-doc.
Are there QALY or DALY reference tables by condition, disease or event?
If not, constructing one would be of unspeakable value to the EA community, not to mention conventional academics and decision makers.
https://research.tufts-nemc.org/cear4/AboutUs/WhatistheCEARegistry.aspx
Also look into the Global Burden of Disease.
The CEA Registry is interesting, but it doesn’t have a standardised database of QALYs or DALYs by condition.
The Global Burden of Disease doesn’t format its content in a fashion suitable for direct comparison of health states, unweighted by prevalence. I have a preference for QALYs over DALYs anyway.
I’m not sure what you mean. If you search CEA, you get back utilities. If you search “Alzheimer’s” in https://research.tufts-nemc.org/cear4/SearchingtheCEARegistry/SearchtheCEARegistry.aspx you get back
A utility weight applied over a year gives the QALY; if you die in a year of Alzheimer’s, and the weight of that year is 0.6, then you lost 0.4 QALYs compared to if you had lived that year in perfect health and then died, no?
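A minimal sketch of that arithmetic in Python, using the 0.6 weight from the Alzheimer’s example above:

```python
def qalys_lost(utility_weight, years=1.0):
    """QALYs lost relative to living the same period at a utility
    weight of 1.0 (perfect health)."""
    return (1.0 - utility_weight) * years

# One year at a utility weight of 0.6, then death:
print(qalys_lost(0.6))  # 0.4 QALYs lost vs. a year in perfect health
```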
I’m trying to help a dear friend who would like to work on FAI research, to overcome a strong fear that arises when thinking about unfavorable outcomes involving AI. Thinking about either the possibility that he’ll die, or the possibility that an x-risk like UFAI will wipe us out, tends to strongly trigger him, leaving him depressed, scared, and sad. Just reading the recent LW article about how a computer beat a professional Go player triggered him quite strongly.
I’ve suggested trying to desensitize him via gradual exposure; the approach would be similar to the way in which people who are afraid of snakes can lose their fear of snakes by handling rope (which looks like a snake) until handling rope is no longer scary, and then looking at pictures of snakes until such pictures are no longer scary, and then finally handling a snake when they are ready. However, we’ve been struggling to think of what a sufficiently easy and non-scary first step might be for my friend; everything I’ve come up with as a first step akin to handling rope has been too scary for him to want to attempt so far.
I don’t think that I’ll even be able to convince my friend that desensitization training will be worth it at all—he’s afraid that the training might trigger him, and leave him in a depression too deep for him to climb out of. At the same time, he’s so incredibly nice, and he really wants to help with FAI research, and maybe even work for MIRI in the “unlikely” (according to him) event that he is able to overcome his fears. Are there reasonable alternatives to, say, desensitization therapy? Are there any really easy and non-scary first steps he might be okay with trying if he can be convinced to try desensitization therapy? Is there any other advice that might be helpful to him?
This sounds like someone whose salient feature is math anxiety from high school asking how to be a research director at CERN. It’s not just that the salient feature seems at odds with the task; it’s that the task isn’t exactly something you just walk into, while you sound like you’re talking about helping someone overcome a social phobia by taking a part-time job at a supermarket checkout. Is your friend someone who wins International Math Olympiads?
He sounds like someone with a phobia of fire wanting to be a fireman. Why does he want to work on FAI? Would not going anywhere near the subject work for him instead?
He wants to work on FAI for EA/utilitarian reasons—and also because he already has many of the relevant skills. He’s also of the opinion that working on FAI is of much higher value than, say, working on other x-risks or other EA causes.
If someone has anxiety about a topic, I suggest they go after all the normal anxiety-treating methods. SSC has a post about Things That Sometimes Work If You Have Anxiety, though actually going to see a therapist and getting professional help would likely help more.
If he wants to try exposure therapy, good results have apparently recently occurred from doing that while on propranolol.
Working on AI research and working on FAI research aren’t the same thing. I think it’s likely a bad idea to not distinguish between the two when talking with a friend who wants to go into research and fears that UFAI will wipe us out.
More to the core of the issue, desensitization training is a slow way to deal with fears, and I’m not sure that it even works in this context. A good therapist or coach has tools to help people deal with fears.
Oops. I’ve tried to clarify that he’s only interested in FAI research, not AI research on the whole.
I think that interest in AI research in general would help to demystify the whole topic a bit, it would make it look a little bit less like magic.
FAI is only a problem because of AI. The imminence of the problem depends on where AI is now and how rapidly it is progressing. To know these things, one must know how AI (real, current and past AI, not future, hypothetical AI, still less speculative, magical AI) is done, and to know this in technical terms, not fluff.
I don’t know how much your friend knows already, but perhaps a crash course in Russell and Norvig, plus technical papers on developments since then (i.e. Deep Learning) would be appropriate.
There’s no such thing, any more than there is research into alien flying saucers with nanolasers of doom. There’s a lot of fiction and armchair speculation, but that’s not research.
Any reason he’s not trying to fix his phobia by conventional means?
What I mean is that he’d be interested in working for MIRI, but not, say, OpenAI, or even a startup where he’d be doing lots of deep learning, if he overcomes his phobia.
It might be that his interest in FAI is tied to his phobia so if the phobia goes away, so may the interest...
The latest SSC links post links to this post on getting government grants, which sort of set off my possibly-too-good-to-be-true filter, despite the author’s apparent sarcasm in the example he gave about bringing in police officers to talk with schoolchildren. Can anyone more knowledgeable comment on this article? Is it realistic for EAs to go out and expect to find government grants that they could get a reasonable amount of utility out of?
My experience is mostly with formula grants, where the grant is mostly a formality like the EFT reimbursement. Many grants have expected recipients. Others are desperately seeking new applicants and ideas. From the outside, it is difficult to tell which is which, and from the inside grantor agencies often have trouble telling why random outsiders are applying to their intentionally exclusive grants but they have trouble finding good applicants for the ones where they want new folks.
The ability to write a successful grant is a skill. Some people in the EA community could likely do this successfully if they focus on getting good at it. Other people might not have the relevant skill set.
Anyone in Singapore? I’m here for a month, PM me and I’ll buy you a beer lah.
From Omnilibrium:
Do most charities have negative utility?
Beating a ‘Yes Minister’ system
What causes the decline of capitalist economies?
Why are Education and Health Care Outcomes So Bad in the United States?
The title of the article on charity seems clickbait-y to me. I think that if a charity had negative utility, that would imply that burning a sum of money would be preferable to donating that money to that charity. However, this is not the thesis of the article; instead, the article’s thesis is:
If there are two charities, one which feeds a homeless population for $3/day and a second which feeds the same population the same food for $6/day, AND people tend to give some amount of money to one charity or the other, but not both, then it seems pretty reasonable to describe the utility of the more expensive charity as negative. It is not that it would be better to burn my contribution, but rather that I am getting $3 worth of good from a $6 donation. But out-and-out burning money being superior to donating it is not the only way to interpret negative utility.
If you have $6 to give towards feeding the homeless, it would be better to burn $2 and donate $4 to the cheaper provider than to give the entire $6 to the more expensive charity. But only in the same sense that it would be better to burn $3,000 and buy a particular car for $10,000 than to burn no money and buy that exact same car for $14,000. Wherever there are better and worse deals, burning less than the full savings can be worked in as part of a superior choice. This does not have anything to do with whether these are charities or for-profit businesses.
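A quick sketch of the arithmetic in both comments, using the $3/day and $6/day figures from above:

```python
CHEAP_COST = 3.0      # dollars per person-day of food (cheap charity)
EXPENSIVE_COST = 6.0  # dollars per person-day of food (expensive charity)

def days_fed(donation, cost_per_day):
    """Person-days of food a donation buys at the given cost."""
    return donation / cost_per_day

# The whole $6 to the expensive charity:
print(days_fed(6, EXPENSIVE_COST))  # 1.0 day
# Burn $2, give the remaining $4 to the cheap charity:
print(days_fed(4, CHEAP_COST))      # ~1.33 days
```

Burning $2 and donating the rest still feeds more people, which is the sense in which switching donations to the expensive charity decreases utility.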
I’ve always thought of negative utility as “cost exceeds benefits”; but it seems to be getting used here as if “opportunity cost exceeds benefits”, which is not the same thing.
I’m not sure which is correct. Not that familiar with utilitarian nuts and bolts.
As with so many things, if there is more than one way to interpret something, there is generally not much to be gained by choosing the interpretation that produces an error when there is an interpretation that makes sense. Clearly, if a new charity sets up that takes twice the cost to provide the same benefit, and people switch donations from the cheaper charity to the more expensive one, utility produced has been decreased compared to the counterfactual where the new, more expensive charity was not set up.
So whatever terminology you prefer, 1) opportunity cost is a real thing and arguably is the only good way to compare money to food quantitatively, and 2) whatever the terminology, the point of the original article is a decrease in utility from adding a charity, which is a sensible idea and well within the bounds of reasonable interpretation of the title under question.
Do most articles on rationality blogs have negative utility?
More on the replicability problem.
How would you go about teaching ‘general science, in particular biology, preferably plants’ to a six-year-old who plays go (and wins)? I used to think she was just a cute kid who listens for much longer than I have any right to expect, and now this.
Have you read much Feynman? He has some stories of how his father encouraged him to develop the scientific mindset (like this) that might be helpful. The core thing seems to be focusing on the act of thinking things through, not the memory trick of knowing how things are labeled. I’m not sure how to incorporate that into biology, though.
(Braggity brag)
Today, we played a game of identifying common (Ukrainian) foodstuffs. There were 4 plastic bottles with wide mouths (1 transparent, 3 opaque but somewhat see-through when light goes through them), holding a small amount of stuff, and separately a handful of beads. The transparent bottle contained rice, the other ones—either soya or wheat or buckwheat (I’m thinking of mixes for next time, and maybe using some other things like pepper...). The task was to learn what was inside without opening and looking.
At first she didn’t know how to; then she didn’t know how to describe the sounds made by sloshing the stuff around; then she tried to brute-force it by guessing; then spilled the rice, vacating the bottle for the beads (a reference point for larger particles); then described smells I could not detect (but that’s neither here nor there); then misjudged the shadow size; and then we ran out of time just as we came to the conclusion that the last bottle contained either peas or soya or lentils. I had to give her some hints, since she didn’t know everything to be found in the kitchen, and soya is certainly far from the commonest of things (but she knew it), and to look for pepper, which I hadn’t anticipated, but still. Love my job.
Yes, I agree, but there are… complications: 1) she cannot read, write half the alphabet, or count beyond 30, because ‘nobody at home has the time to teach me’, and I keep thinking ‘surely teaching her to read is the best thing I can ever do’, but she wants to talk about botany, of all things, and I don’t really have any formal power to make her do anything else; 2) she doesn’t understand the value of observations and records, and I don’t want to show her ‘tricks’ like pigment separation because of inferential distance and so on, and so we are stuck with ‘simple fast transformations’, which are very difficult for me to keep fitting into some kind of system, so I just wing it; 3) for an hour and a half! Torture! We keep veering off into binocular vision, birds and so on, but it’s like an All-Biology Test and I hate the lack of structure, but I cannot just tell her to go away; 4) and until today I kind of thought she was a good little polite girl who humored my rants.
I can show her pictures of time series (of some developments), but it’s something you prepare without haste and usually with much trouble. I think it would be a good exercise in pattern-matching, but… there are so many things which can go wrong.
I think that children are very good at learning even the most unexpected things. I think that whatever you do things are unlikely to go wrong, as long as you pay some attention to what she likes and what bores her and don’t force her to do things she hates. Children are curious, but their attention benefits from some direction.
Does she object to learning the alphabet in principle, or merely because she doesn’t consider it the best use of the time when she is with you?
If it’s the latter, you could prepare her a tool to learn the alphabet when she is back at home. Just print a paper containing all the letters of the alphabet, and next to each letter put two or three pictures of things that start with that letter. (“А = автомобіль, антена; Б = банан,...”; free pics.) Or maybe make two-sided cards, so she can practice active recall. Then give her a book on botany, as motivation.
A book on botany with great pictures I already gave her, and her relatives read to her from it. I think I’ll have a chat with her dad, though. There are many primers she can use which are much better than anything I can print out...
The book sounds good. I think ultimately there are two things that are important here: the first is teaching her about botany and the second is to instill and build on her drive to want to learn the material or more broadly a problem solving/curious mindset. In my opinion, the second one is more important.
Two pieces of advice:
Forget about the structure. Just think about setting up an environment that will let her explore, play, and teach herself. The book is a good start. Maybe a plant for her room would be a good idea.
Explore with her. The best thing you can do, I reckon, is to take her outside and explore with her. I don’t know much about botany, but I think it would be cool, as an example, if you picked up a flower, pointed out to her that most people are born with two arms, and then asked: “So, would that mean that the number of petals on this type of flower will always be the same?” Then, no matter what she says, you can go to a group of the flowers and let her count the number of petals to see if they’re the same. Then, you can ask another question: are the buds the same, etc.
I’m sure that the DeepMind team would be interested in an answer too. I think the CPU/GPU cluster that powers AlphaGo could actually do much better in science (especially biology) than the typical six-year-old.
Hello!
I’m getting into the Bay area this afternoon for the CFAR workshop starting tomorrow. I’m looking for advice on how to spend the time and also where might be a good place to look for affordable lodging for one evening.
I’d initially thought about crashing at the Piedmont house hostel as it’s cheap and close enough that I could visit CFAR before heading over tomorrow, but it appears to be sold out. I figured there are probably folks here who know the area or have visited, so I didn’t see any harm in asking for info, or checking to see if anyone was getting up to anything.
:) Kim
Hi k_ebel. I’m not sure whether this is the best place for getting in touch with people on short notice (though I’m open to being corrected). A more immediate way to get in touch and discuss matters is the LW Slack https://wiki.lesswrong.com/wiki/Less_Wrong_Slack or the LW IRC.
The Nash hotel is cheap and close, if a bit old. Plus, you know, once you’re there you can’t find a better hotel.
What are your thoughts on the refugee crisis?
There’s a whole -osphere full of blogs out there, many of them political. Any of those would be better places to talk about it than LW.
What’s wrong with LW?
This is a politically fuelled topic. Most people (here) don’t want to dabble.
Then which blogs do you agree with on the matter of the refugee crisis? (My intent is just to crowd-source some well-founded opinions because I’m lacking one.)
LW avoids discussing politics for the same reason prudent Christmas dinner hosts avoid discussing politics. If you wish to take your crazy uncle to the pub for a more heated chat, there’s Omnilibrium.
Another sad example of a problem that would be difficult but not impossible to solve rationally in theory, but in real life the outcome will be very far from optimal for many reasons (human stupidity, mindkilling, conflicts of interest, problems with coordination, etc.).
There are many people trying to escape from a horrible situation, and I would really want to help them. There are also many people pretending to be in the same situation in order to benefit from any help offered to the former; that increases the costs of the help. A part of what created the horrible situation is in the human heads, so by accepting the refugees we could import a part of what they are trying to escape from.
As usual, the most vocal people go to two extremes: “we should not give a fuck and just let them die”, or trying to censor the debate about all the possible risks (including the things that already happened). Which makes it really difficult to publicly debate solutions that would both help the refugees and try to reduce the risk.
Longer-term consequences: If we let the refugees in, it will motivate even more people to come. If we don’t let the refugees in, we are giving them the choice to either join the bad guys or die (so we shouldn’t be surprised if many of them choose to join the bad guys).
Supporting Assad, as a lesser evil than ISIS, is probably the best realistic option, but kinda disappointing. (Also, anything that gives more power to Russia creates more problems in the long term.) It doesn’t solve the underlying problem: the states in the area are each a random mix of religions and ethnicities, ready to kill each other. A long-term solution would be rewriting the map, to split the groups who want to cut each other’s throats into different states. No chance to make Turkey agree on having Kurdistan as a neighbor. Etc.
If I were a king of Europe, my solution would be more or less to let the refugees in, but to have them live under Orwellian conditions, which would expire 5 or 10 years after their arrival assuming they committed no crimes (a trivial crime would merely extend the period; a nontrivial crime would lead to deportation, with biometric data taken so the person doesn’t get a second chance). For example, there would be a limit of one refugee family per street, so they cannot create ghettos. Mandatory lessons on how to fit into the culture. Islam heavily controlled, only the most nonviolent branches allowed.
Tim on the LW Slack gave an impressive illustration of the different levels at which the refugee crisis can be seen. He was referring to Constructive Development Theory, which you might want to look up for further context. I quote verbatim with his permission:
Is there an implication of ranking in the way the levels are numbered? Are Level 5 people “more advanced” than lower levels, and should one strive to move up levels?
Maybe it’s just me, but I don’t see post-modernists as the ultimate peak of human thinking.
In the original, there’s an observable pattern to these “levels”, alternating between multiple contradictory models, and then a new model in which the various previously-contradictory models are reconciled into a unified framework. Even numbers are a cohesive framework, odd numbers are multiple-competing-model frameworks.
This pattern is conspicuously absent from Tim’s reconstruction. The level 3 people don’t share or understand the level 2 people’s concerns; in truth, they’re merely level 2 people of Tim’s favored tribe. The level 4 described is just Tim’s level 3 with a hint of understanding of level 2 concerns; in truth, they’re level 3 people of Tim’s disfavored tribe. Tim’s level 5, Postmodernism, is a Level 3, Tim’s-favored-tribe understanding of Level 5.
IOW, this framing is just predictable and blatant tribalism, of the form of placing your own way of thinking as “superior” to the opposing tribe’s way of thinking.
This was part of a much larger discussion so a lot is omitted here.
In Kegan’s books, people at ‘higher’ levels sometimes lose something that the lower levels have. Level 4 people can lose a sense of intimacy and connection with other people, God, etc. Level 3 people often fail to appreciate level 2 people’s mindset. Level 4 people can lack a sense of immediacy that level 2 people have.
The progression in Kegan’s book is really about the fact that what you are subject to at one level becomes object at the next level. It does not require that Level X people fully understand people at ‘lower’ levels.
I guess in one sense I have succeeded because your guess at my favored view is entirely wrong. I was trying not to make an argument about refugee policy but to illustrate various kinds of thinking.
No, that is not what the progression is “really about”. And yes, you have to be able to understand people at “lower levels” in order to be at a higher level. A Level 4 Person might not have a sense of intimacy or connection—but they have to be able to understand that other people have intimacy and connections.
So what is your favored view, and how does it meaningfully differ from the Postmodern view you espouse as the Level 5 solution?
Kegan points out that many who fancy themselves postmodernists are actually trapped in level 3. They have been told that modernism has its flaws, and they therefore reject it and stay at level 3. This fits some young people in college.
A level 5 would be post-modern in the sense that they have mastered modernist ideas but are not trapped within them.
The linked post gives a brief overview. The higher levels are ‘more advanced’ in that there is an asymmetry; the level 5 can emulate a level 4 more easily than a level 4 can emulate a level 5. But that doesn’t translate to ‘more advanced’ in all possible meanings. A relevant quote from the link:
So the implication is that’s a straight IQ ladder, then. My original objection stands.
My experience is that it’s related to, but distinct from, g. High g and more mature age make the higher levels easier but don’t create them on their own.
Why would a high-IQ level 4 person have trouble emulating level 5? See e.g. Sokal, etc.
ETA: I looked through the linked article and I stick by my impression that this is a straightforward IQ ladder modified by “maturity” (appropriate socio-emotional development, I guess?) In particular, I expect that levels have pretty hard IQ requirements, e.g. a person with the IQ of 80 just won’t make it to Level 4.
I think it is partly linked to IQ. I agree that there are probably limits to the levels people with low IQs can achieve.
But there is also a development process that takes time. Few teenagers, no matter how smart, are at level 5. Think by analogy: few 15-year-olds have mastered quantum field theory. No matter how smart you are, it takes time.
Sokal is emulating level 3 people who think they are level 5. These people are anti-modern not post-modern. Most post-modernists are at level 3 as far as I can tell. I have been trawling through their works to assess this.
A level 5 physicist might be someone like, say, Robert Laughlin, a Nobel-winning physicist who wrote a book, “A Different Universe”, questioning how fundamental ‘fundamental’ physics is. He has mastered modernist physics and is now building on it. This is very different from a Deepak Chopra type who doesn’t even get to first base in this enterprise.
I don’t think Sokal is an example of systems of systems thinking. (The post-modernist label is not a particularly useful one; here it means the level after the modernist level, and is only partly connected to other things called post-modernist.)
Why would a high-IQ person have trouble emulating someone of the opposite sex? (There doesn’t appear to be the same asymmetry—both men and women seem bad at modeling each other—but hopefully this will point out the sort of features that might be relevant.)
Some charitable reading is required; the labels are oversimplifications.
I agree that most post-modernists are merely pretending to be at some high level of thinking, and the reason it works for them is that most of their colleagues are in exactly the same situation, so they pass the “peer review”. But we can still use them as a pointer towards the real thing. What would be the useful mental skills that these people are pretending to have?
I remember reading somewhere about a similar model, but for the given question, on each level both “pro” and “con” positions were provided. That made it easier for the reader to focus on the difference between the levels.
Some things to bear in mind in relation to Kegan’s work are:
Pretty well everyone thinks that they are 1-2 levels higher than they actually are. This may include you. It certainly included me.
Most people are at level 3 or below.
Very few people under 30 are at level 4.
Hardly anyone is at level 5.
This is from Kegan.
This may also help—a more systematic description of the levels. The right two columns are mine; from memory, the others are by Kegan:
https://drive.google.com/file/d/0B_hpownP1A4PaXN1Tjg2RFd6N0E/view?usp=sharing
Better yet, has anyone here changed any part of their life because of refugee crisis? Why did you do this? Why haven’t you done this before? Thoughts are less interesting than actions.
[I’ll post on next week’s open thread as well.]
If you are interested in AI risk or other existential risks and want to help, even if you don’t know how, and you either...
Live in Chicago
Attend the University of Chicago
Are intending to attend the University of Chicago in the next two years.
...please message me.
I’m looking for people to help with some projects.
Who are these bacterial-actioned plants?
...and what is ‘plants contaminated by soil’?
Oh I know that one! It’s plants that still have dirt on them. For instance, dirty potatoes!
Is your completed university coursework published online? Why/why not? Should I publish my completed university coursework online? It is not outstanding. However, I value transparency and feedback. I reckon it’s unlikely that someone will provide unsolicited feedback unless they have a vendetta against me, in which case the work could be used against me. However, I suspect it may give me social ‘transparency’ points, which are valued amongst our kind of people. Yes?
Other people seem to post their essays and other content online without much fuss. I got through my classes, but I feel ashamed to publish that work because I feel it’s inadequate. Maybe this is just impostor syndrome or perfectionism, and it would be good to publish to combat that? Or maybe I really am scraping through on the sympathy of professors and the accommodating standards in educational institutions in this day and age :/
I feel differently about my post history here and at my linked Reddit accounts, because I don’t habitually tie that content to my name (which is unique enough that googling it finds me and no one else).
I have thought about this, too. I am currently not publishing my coursework (mostly programming / lab reports) because the tasks may be used again in the following year. I do not want to force instructors to make new exercises for each course and I don’t think I’d get much use out of publishing them. The argument wouldn’t apply to essays, of course.
Can I just give a huge shoutout to Brian Tomasik. I’ve never met the guy, but just take a look at his unbelievably well-documented thinking, like this and this. I feel conjuring up any words to describe how saintly that man is would be an injustice to superhuman levels of compassion AND intelligence. Why isn’t he one of the ‘big names’ in the EA space already? Or is he, but just not in the circles I run in?
If I look in Google Maps at California, there seem to be huge open spaces. What’s stopping new cities in California from being built on land outside the existing cities?
Cities are where they are because of actual reasons of geography, not just people plopping things down randomly on a map. You need to get stuff into them, stuff out of them, have the requisite power and water infrastructure to get to them (ESPECIALLY in California)… they aren’t something you plop down randomly on a whim.
Also, previous attempts at doing exactly this have only had modest success:
There are planned desert cities on the Arabian peninsula. If land value in California grows because people value geographical proximity to San Francisco that much, at some point it will outweigh the costs of having to build infrastructure in the middle of the desert.
There are multiple problems that need to be solved here. Buying land is one of them, and yes, it seems like a reasonable investment for someone who has tons of money. The other problem is water.
Yet another problem could be the transit from the new city to SF. Geographical proximity may be useless if the traffic jams make commuting impossible.
A lot of Californians like those big open spaces. Others don’t want developments that make it easier for poor people to live around them (due to fear of crime, “bad schools” or other unpleasantness.)
San Francisco is now one of the most expensive real estate markets in the world, and the populace wants to keep it that way.
Alright so how do we keep these people away then while lowering prices?
You can implement a Hukou system. Obviously, it would lead to other problems.
You wouldn’t even need the hukou; private covenants would be quite enough. However, these covenants are banned as an infringement of civil rights. But the real solution is to decouple education from local real-estate markets, by allowing people to freely choose their preferred schools (public, charter or private, via student-linked vouchers) regardless of their home address or VAT code.
Even people with no children buy property in good locations because that’s where the jobs are. If remote work became more popular it would make living in a big city no longer a necessity.
I am a bit doubtful that free school choice will solve the “in some places real estate is really expensive” problem.
For example, NYC has a notoriously bad public school system and very expensive real estate.
The problem is not expensive real estate per se; it’s supply restrictions that make real estate more expensive than necessary. Free school choice would remove much of the motive for these restrictions.
E.g. in New York City..?
I don’t think school is the only or even the main reason for supply restrictions. People like to live with neighbours of approximately the same social standing and will actively oppose hoi polloi moving in, even without schools being involved.
I guess because people want to live in the existing cities? It’s not like there is nowhere to live in California—looking at some online apartment listings you can rent a 2 bedroom apt in Bakersfield CA for $700/month. But people still prefer to move to San Francisco and pay $5000/month.
Environmental Impact Statements :-D
On a bit more serious note, the usual thing—you can build a city in the middle of a desert, but why would people want to live there? People want to live in LA or SF, not just in Californian boondocks...
People might want to live in LA or SF, but on net the high prices cause people to migrate out of California, with 94,000 more people leaving California than joining it in 2011.
It seems like there’s open space within one hour’s driving distance of SF. Living there at a decent rent might be preferable to leaving California altogether.
I’m dreaming of a huge field of solar panels that powers a desalinization complex and a nanoclay-seeded desert… Alas, if only I had a couple hundred million dollars...
High quality infrastructure and community services are expensive, but taxpayers are reluctant to relocate to the new community until the infrastructure and services exist. It’s a bootstrap problem. Haven’t you ever played SimCity?
Then how are new cities ever founded? How did Belmopan, Brasília, Abuja and Islamabad do it? Look at the dozens of new cities built just in Singapore during the past half century.
The OP’s proposal to build a city in the middle of the desert strikes me as similar to the history of Las Vegas. What parts of it can be replicated?
Well all of these are deliberate decisions to build a national capital. They overcame the bootstrap problem by being funded by a pre-existing national tax base.
Again, government funding is used to overcome the bootstrap problem. Singapore is also geographically small, and many of these “cities” would be characterized as neighborhoods if they were in the US.
Well, Wikipedia says it began life as a water resupply stop for steam trains, and then got lucky by being near a major government project—Hoover Dam. Later it took advantage of regulatory differences. An eccentric billionaire seems to have played a key role.
There seem to be several towns that exist because of regulatory differences, so this seems a factor to consider—at least one eccentric billionaire seems fairly serious about “seasteading” for this reason. Historically, religious and ideological differences have founded cities, if not nations, so this is one way to push through the bootstrap phase—Salt Lake City being a relatively modern example in the US. Masdar City—zero carbon, zero waste—is an interesting example, ironically funded by oil wealth.
By traditional mythology, the reason Las Vegas exists is because the mob (mafia) wanted to have a playground far far away from the Feds :-)
Or because it’s the place closest to San Francisco where gambling was legal.
San Fran is not that special :-P
Besides, gambling was legalized in the entire state of Nevada and there are certainly places closer to SF in there (like Reno).
It’s expensive but interest rates are low and the possible profit is huge.
But similar profits are available at lower risk by developing at the edges of existing infrastructure. In particular, incremental development of this kind, along with some modest lobbying, will likely yield taxpayer funded infrastructure and services.
It seems like you can’t do incremental development by building more real estate inside the cities, because the cities don’t want to give new building permits that might lower the value of existing real estate.
I think Seattle’s South Lake Union development, kickstarted by Paul Allen and Jeff Bezos, is a counter example …
http://crosscut.com/2015/05/why-everywhere-is-the-next-south-lake-union/
Perhaps gentrification is a more general counter example. But you’re right, most developers opt for sprawl.
No, it’s not in California. In California, a city like Mountain View blocks a company like Google from building new infrastructure on its edges.
In what sense? Gentrification simply means that rents go up in certain parts of the city. It doesn’t directly have anything to do with new investment.
Not at all. Gentrification is the replacement of a social class by a different social class. There are a LOT of consequences to that—the character of the neighbourhood changes greatly.
In my experience gentrification is always associated with renovation and new business investment. The wikipedia article seems to confirm that this is not an uncommon experience.
I love you, LessWrongers. Thank you for being a kind of family to me. If you’re reading this and fear that the community is very critical, like I know some IRL attendees do, remember that it’s not personal, and it’s all a useful learning experience if you have the right attitude and sincerely do your best to make useful contributions.
What follows are random spurts of ideas that emerged while thinking about AlphaGo. I make no claim of validity, soundness, or even sanity. But they are random interesting directions that are fun for me to investigate, and they might turn out interesting for you too:
AlphaGo uses two deep neural networks to prune the enormous search tree of a Go position, and it does so unsupervised.
Information geometry allows us to treat information theory as geometry.
Neural networks allow us to partition high-dimensional data.
Pruning a search tree is also strangely similar to dual intuitionistic logic.
Deep neural networks can thus apply a sort of paraconsistent probabilistic deduction.
Probabilistic self-reflection is possible.
Deep neural networks can operate a sort of paraconsistent probabilistic self-reflection?
The AlphaGo Discussion Post.
lol no. The pruning (‘policy’) network is entirely the result of supervised learning from human games. The other network is used to evaluate game states.
Your other ideas are more interesting, but they are not related to AlphaGo specifically, just deep neural networks.
If I understood correctly, this is only the first stage in the training of the policy network. Then (quoting from Nature):
Except that they don’t seem to use the resulting network in actual play; the only use is for deriving their state-evaluation network.
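For readers who want the division of labour concrete, here is a toy sketch of a policy network pruning a search tree while a value network scores the frontier. Both “networks” are random stand-ins (not AlphaGo’s actual nets), and the depth-limited search below is a simplification of the Monte Carlo tree search AlphaGo actually uses:

```python
import random

def policy_net(state, moves):
    """Stand-in 'policy network': a prior probability for each legal
    move. A real system would run a trained net here."""
    raw = {m: random.random() for m in moves}
    total = sum(raw.values())
    return {m: r / total for m, r in raw.items()}

def value_net(state):
    """Stand-in 'value network': estimates the win probability of a
    position without searching further."""
    return random.random()

def search(state, moves, depth, top_k=3):
    """Toy depth-limited search: the policy net prunes each node to its
    top_k most promising moves; the value net evaluates the frontier."""
    if depth == 0 or not moves:
        return value_net(state)
    priors = policy_net(state, moves)
    pruned = sorted(priors, key=priors.get, reverse=True)[:top_k]
    # Represent a child state as a (state, move) tuple for illustration.
    return max(search((state, m), moves, depth - 1, top_k) for m in pruned)

print(search("empty board", ["A1", "B2", "C3", "D4", "E5"], depth=2))
```

The point of the sketch is only the shape of the pipeline: the policy net cuts the branching factor before any search happens, and the value net replaces deep rollouts with a cheap evaluation at the cut-off.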
How true is the proverb “To break a habit you must make a habit”?
In animal training it is said that the best way to get rid of an undesired behaviour is to train the animal in an incompatible behaviour. For example, if you have a problem with your dog chasing cats, train it to sit whenever it sees a cat—it can’t sit and chase at the same time. Googling “incompatible behavior” or “Differential Reinforcement of an Incompatible Behavior” yields lots of discussion.
The book Don’t Shoot the Dog talks a lot about this, and suggests that the same should be true for people. (This is a very Less Wrong-style book: half of it is very expert advice on animal training, half of it is animal-training-inspired self-help, which is probably on much less solid ground, but presented in a rational, scientific, extremely appealing style.)
When it comes to training animals, you can only go through behaviorism. On the other hand, when training people you can use CBT and other approaches.
Excuse my ignorance, but isn’t CBT based on behaviorism?
Behaviorism in its original form assumed that thoughts and emotions don’t exist, or at least that it is unscientific to talk about them. Later behaviorists took less extreme positions, and allowed “black boxes” in their models corresponding to things that couldn’t be measured (before the invention of EEG).
In CBT the “B” stands for behavioral, but the “C” stands for cognitive, which is like the exact opposite of behaviorism. CBT is partially based on behaviorism, but the other essential root is so-called Rational Therapy. (Fun fact for LW readers: Rational Therapy was inspired by Alfred “the map is not the territory” Korzybski. It’s a small world.)
CBT has many parts like the acceptance paradox that have nothing to do with behaviorism.
I think it’s certainly true. I suppose it depends on your definition of “habit”...
Isn’t much of what we do habitual, whether it benefits us or not? In this way, you have either good habits or bad that are reciprocals of one another.
For example, people who refrain are not said to have a “habit of not biting their nails”. But that is, I think, what is happening.
I stopped biting my nails (coating them in a bitter substance to remind myself not to bite them if I tried) and I did not make any replacement habit. I don’t have a “habit of not biting my nails” any more than I have a habit of breathing. It happens automatically without conscious effort, so calling “not biting nails” a habit is misusing the word.
This is why I mentioned the definition of “habit” in my comment.
From Wikipedia:
Was that “How true?” or “How true!”?
I think it is true, with the proviso that the habit to make can be the habit of noticing when the old habit is about to happen and not letting it.
‘?’
There is a website called Wizardchan. Months back, multiple posts at separate intervals predicted negative interest rates, starting with Japan. Worryingly, there were further negative consequences predicted to follow, which evade my memory. I had never heard of the idea and thought it was silly. I returned to the site today to watch for gloating. No reference is available; the website operates like 4chan in that content disappears regularly. I don’t know what to make of this information. Are we privy to privileged or in any way useful information here, or just noise?
How many of you adopt a false easygoing/go-with-the-flow or selfless/helpful persona to make it seem like you’re happy to put other people ahead of yourself?
I think there is very high value in sincerity, that both of the qualities you’ve described are heavily attached to sincerity, and that the effective and regular signaling of sincerity is going to be pretty much impossible to maintain without actually being sincere. If you really want to be effective in these areas, you might try to become easygoing and less selfish rather than trying to figure out how to fake those things.
You can volunteer to inspect prisons. What a great opportunity for criminal entrepreneurs to recruit people who’ve gone through an intensive, immersive crime university and have limited job opportunities, all while maintaining a prosocial image.
Reading this Wikipedia article on the psychology of torture, the doubt that comes to my mind is how valid the constructs underlying the extraordinarily counterintuitive thesis of culturally specific resilience to torture are, and what the empirical evidence or data source is if I am to analyse it independently.
The evolutionary arguments against the plausibility of altruism as a construct in and of itself are probably the greatest existential threat to the effective altruism movement. That is, the implication that it is inherently disingenuous. This juxtaposition of the care-harm and sanctity-degradation moral foundations is, quite simply, a rare mix in the human population, based on personal observation and inference.
Those evolutionary arguments seem like a failure to understand the cognitive-evolutionary boundary. If you go with “evolution is about survival and reproduction, so besides sex and murder everything else is hypocrisy”, you explain away altruism, but at the same time you explain away almost everything else. The argument seems sane only when it is used selectively.
Analysis of a mind game.
Any comments as to the internal workings of A and B are welcome.
Note that, in the US, a person has the right to confront his/her accuser. In the exchange below, B has done that but the specific accusation has never been clarified by A. Very tricky. I have definitely learned from this exchange.
B: ”. . .women are biologically superior. . .”
A: This says far more about you than you could possibly imagine. I suggest being more cautious going forward.
B: It says that I take for fact what people say who study this type of thing. I suggest that your conduct in this post is offensive.
A: If that is all it says, you have nothing to be offended about.
B: It’s not your call. Who are you?
TIA for reading. :D
Note to readers: this “hypothetical” scenario was actually this exchange.
You’re reading too much into just a few words, and you seem overconfident in your ability to divine other people’s intentions. Interpreting the above exchange as a “mind game” is ridiculously paranoid.
Note to readers: I never said it was hypothetical.
And, the textbooks written about my personality type say I have a sensitivity to other people’s issues.
And, I’m not starting from zero; over the years I’ve had office mates and others who acted in a similar way and so I know what works.
Strangely, some of these people may actually have wanted my approval or recognition. Very few get that, even those who are well-behaved. I think I know what causes this, but that info is classified—sorry.
They may have spotted ways that we two are similar. Of course, the idea that I am similar to these verbal bullies is repugnant to me but it’s very likely accurate. In my whole life maybe a half dozen people fit this pattern.
Also, there are books on “Verbal Judo” but they are hard to come by from my local library. I scoop up what I can. As long as all I do is counterpunch, block-then-strike, I feel I have the moral high ground.
But, that aside, if this is a false positive for a mind/head/word game, what do you make of this exchange? Is the literal meaning the only thing going on?
In your whole life, have you ever met someone who “put one over on you”, left you with the feeling that you’ve been “had” and you couldn’t even verbalize how? If yes, in retrospect, what really went on? Did you act optimally? What would you change for future encounters of this type?
Thanks for reading. :)
Of course, lots of people have attacked me verbally. But they don’t know me closely enough to really know how to insult me. So their worst attacks don’t even need a reply: they’re hopelessly misfired. I can let them yell as long as they want. The part of me they want to hurt is one they can never reach and will never see.
Even with that protection, in my experience it’s emotionally exhausting to be constantly expecting attacks from every interaction. If you stop seeing an aggressive intent behind every comment, your life will be much less stressful. People are essentially good, and most of them are too busy going through their own day to bother ruining other people’s.
In fairness, the person (OrphanWilde) playing the part of A in that little dialogue (1) gives at least one person other than WhyAsk (namely, me) the impression of playing dark-artsy status games in his comments and (2) has described himself in so many words as a practitioner of the Dark Arts. In that context, it’s not so crazy for WhyAsk to suspect something of the sort may have been going on.
Perfectly fair. (Given that I get the impression polymathwannabe doesn’t like me very much, I appreciate the neutrality of the choice to defend, however, so I’d prefer not to discourage them in the general case.)
I’m sorry that I gave you that impression. I may dislike your political opinions, but I don’t dislike you.
Oh! Thank you for stating that.
It’s not something to apologize over, though, in either case. I think there are perfectly valid reasons to dislike me. I think there are valid reasons to like me, as well. I tend to treat like/dislike as a preference statement, rather than a value statement. (Which isn’t universal, but people generally tend to use the word “hatred” with regard to negative valuation.)
We’ve gotten derailed.
All we need do is ask O. Wilde what his or her intentions were in those posts.
I was telling you not to say those sorts of things to people, because they reveal more about you than you probably expect they do, and reveal things about yourself you don’t want to be advertising.
Welcome back.
Can you be specific, without paraphrasing? And no ad hominem, please.
At this point you might as well let the cat all the way out of the bag, if there is a cat to be let out.
Am I in physical danger? If yes, from whom?
BTW, this is about the strangest thread I’ve ever participated in. I guess it’s an opportunity to learn, which is what I hope I’m doing on this forum.
[Edited: Content removed.]
Don’t quit your day job.
Passive-aggression is tempting.
Next time I’ll tell the passenger next to me that I feel uncomfortable with his manspreading an inch into my seat territory on the plane.
Greydon Square and the Cape Reason Secular Show construct an argument for the aesthetics of atheism against religion
...
Use hands-free to decrease the radiation to the head.
Keep the mobile phone away from the body.
Do not use a telephone in a car without an external antenna.
(Thanks, Wikipedia.)
You’re solving for a supposed problem that does not exist at all.
This isn’t a solved problem. It doesn’t require a physics explanation, as in the first link, for there to be a hazard identified at the health level. Take the Zika virus, for instance: there is an association between it and birth defects, but we don’t know why; we don’t know the physical cause, let alone the biological cause. However, it’s a hazard, and you would be stupid not to take action on it.
The RationalWiki articles (the last two) don’t even try to construct a compelling argument against the hazard of phone radiation. They simply push the thesis that there is woo and phobia on the matter. That doesn’t mean that irrationality makes them wrong about the topic of irrationality.
The downvoting on my post is alarming. I paid 5 karma points to reply to a downvoted thread because this is really odd behaviour. I was bringing attention to points made elsewhere. If the points are true, then it is not bullshit. If the points are false, then the authority of the source (Wikipedia) is such that it ought to be discussed. Finally, if it is true but it’s an information hazard, that’s a worthwhile matter of discussion on LessWrong of all places.
It isn’t a problem at all. There is no solid evidence of risk.
Stop spreading bullshit.