Open thread, December 7-13, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I asked Steve Hsu (an expert) “How long do you think it will probably take for someone to create babies who will grow up to be significantly smarter than any non-genetically engineered human has ever been? Is the answer closer to 10 or 30 years?”
He said it might be technologically possible in 10 years, but "who will have the guts to try it? There could easily be a decade or two lag between when it first becomes possible and when it is actually attempted."
In, say, five years someone should start a transhumanist dating service that matches people who want to genetically enhance the intelligence of their future children. Although this is certainly risky, my view is that the Fermi paradox implies we are in great danger and so should take the chance to increase the odds that we figure out a way through the great filter.
In so far as the Fermi paradox implies we’re in great danger, it also suggests that exciting newly-possible things we might try could be more dangerous than they look. Perhaps some strange feedback loop involving intelligence enhancement is part of the danger. (The usual intelligence-enhancement feedback loop people worry about around here involves AI, of course, but perhaps that’s not the only one that’s scary.)
Hostile intelligences would presumably still create Dyson spheres/colonise the galaxy/emit radio waves/do something to alert other civilisations to their presence. The Fermi paradox has to be something like superweapons, not superintelligence.
How good do you think you’d be at raising a child who is a great deal smarter than any previous human?
Let’s assume you’re sane enough to not resent the child’s superintelligence. Still, what does the child need?
Tentative suggestion: people who are interested in the project should aim for at least a dozen superintelligent children in the first generation so that at least they have some company.
I’m currently raising a child who is, age adjusted, considerably smarter than myself. It’s challenging but fun. The danger for me isn’t my resenting his intelligence, it’s taking too much pride in it.
Just from his occasional post on LW, and your occasional mention of him, Alex reminds me of a real life version of Harry from HPMoR. :)
Edit: to avoid the possibility of future confusion, I’d like to emphasize that I meant this in an entirely positive way.
https://www.facebook.com/james.d.miller.104/videos/vb.5904551/10100196616850110/?type=2&theater
Smarter than you are is one thing, smarter than any previous person is another.
That starts to remind me of the Ender's Game series, in particular Shadow of the Hegemon.
He’s talking about using CRISPR to edit DNA. I would ask what’s the timeline for germline selection, but when he says:
And I assume that getting the datasets is also the bottleneck for germline selection.
Incidentally, is this the sort of problem which can be significantly speeded up by money/publicity? And how much money? Is this the sort of thing which would be a good target for philanthropy?
Simpler idea: join okcupid, use #IWGEC (I want genetically enhanced children) as a hashtag to identify each other.
Of course, a dedicated niche dating site has advantages, in that the site can be tailored to the specific criteria, but it's a lot harder to set up.
You would have that data if a country like Singapore decides to do DNA sequencing for its entire population.
If you want to go in that direction in the US you would need to lobby for SAT scores being included in the digital health system created by Obamacare.
Apart from that, the cost of genome sequencing is an important variable. Developing cheaper sequencing technology will increase the number of people who have their DNA sequenced.
I don’t think we are at the point where we can adequately assess the risks involved. It’s known that higher IQ is correlated with major depression, bipolar disorder, and schizophrenia. What use is having a super-intelligent child if they have to spend most of their teenage and early adult years away from society, in a medicated stupor?
There may also be other genetic side effects to increased intelligence, such as increased risk of alcohol dependence and substance abuse.
I think I remember a study saying that over an IQ of 130, there is no correlation between increased intelligence and success/happiness.
It would probably be far more worthwhile to focus on having children of moderate-to-high IQ score (120-130 range), and put more emphasis on better upbringing, instilling values such as the importance of socializing and putting effort into one’s goals. The focus that some transhumanists seem to have on raw intelligence seems a bit childish and naive.
What are you optimizing for?
The optimal mix of intelligence and ability to make use of intelligence.
You just shifted all the meaning to the word “optimal”.
Optimal when maximizing for what?
No, I did not.
If James_Miller meant ‘genetic basis of intelligence’ (and I think he did) then I am pointing out that that may not be predictive of actual intelligence when measured in the real world after development. You could just as well say I’m ‘optimizing for intelligence’. I am simply making it clear that I’m not optimizing for at-birth intelligence.
I still don’t understand you.
Is there any measurable value that you are optimizing for? What is it?
What do you mean specifically with that sentence?
Nutrition, intellectually stimulating environments, presence of both parents, and existence of other children to play with have all been shown to be positively correlated with doing better at school, for one. I’m sure there are many other factors.
Another point, not directly related to your question but related to the OP's question, is that an IQ of, say, 130 may not be that high (and definitely not that high compared to the LW average), but it is two standard deviations above the mean; if everyone reached that level, it would be a vast improvement in average intelligence over what it is now.
I agree, but this isn’t actionable information for transhumanists. In contrast, a few transhumanist couples could, perhaps, in a decade create a biological super-intelligence. I would love to get an 18-year-old reader of LW to start thinking about doing this.
It’s certainly possible to use simple selective breeding techniques to increase intelligence beyond what would ever likely occur naturally. Modern experience in selective breeding of, for example, cattle for milk production has resulted in herds of cows that produce far more milk than even the most extreme natural outlier ever produced. And furthermore there are statistical tools that can take as input various traits (various intelligence scores and also factors relating to general health and well-being) and produce, as outputs, pairings that would result in optimal intelligence increase. Going further, modern genomics techniques (like sperm sorting and prediction of traits from embryonic gene sequences) could make the process even more rapid.
But it could never be done in a decade. Modern techniques require a minimum of around five generations to properly maximize traits beyond what would be found in the natural population (this varies hugely depending on the trait, of course, but five generations is a commonly-used ballpark estimate). Assuming impregnation starts as soon as reproductive viability is achieved, at roughly 15 years per generation, that gives a figure of 75 years.
The only thing that could shorten this would be designer-baby technology. A simple method could be using embryonic stem cells to go directly to gametes without having to go through birth, development, and maturation. The downside to this is that prediction of intelligence based on just embryonic DNA is flimsy; many more generations would probably be required, and a few 'interim' individuals would probably have to at least reach school age for model calibration. Assuming, say, three interim stages of roughly eight years each, that gives 24 years. Even this would require a huge amount of resources, not to mention the sacrifice and enormous ethical issues involved.
I can’t see even modern genetics technology achieving biological superintelligence any shorter than that, unless you are willing to throw trillions of dollars at it.
We identify a bunch of genes that either increase or decrease intelligence and then use CRISPR to edit the genomes of embryos to create super-geniuses. Just eliminating mutational load from an embryo might do a lot.
The reason this approach won't work is that genes aren't linear factors that can be added up together in that way. Even in something as simple as milk production, you need to do selection over multiple generations and evaluate each generation separately, building up small genetic changes over time.
If you could construct an actual model relating various genes to intelligence, in a way that took into account genetic interactions, then you could do what you propose in a single generation, but we are very very far from being able to construct such a model at present.
As it stands today, if you just carried out that naive approach you would end up with a non-viable embryo or, in the best-case scenario, a slightly-higher-than-average intelligence person. Not a super-genius.
When researching my book I was told by experts that the intelligence genes which vary throughout the human population probably are linear. Consider President Obama, who has a very high IQ but whose parents are genetically very different from each other. If intelligence genes worked in a non-additive, complex way, people with such genetically diverse parents would almost always be very unintelligent. We don't observe this.
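To make the additive claim concrete, here is a minimal sketch in Python of what a purely linear (additive) genetic architecture looks like; the number of variants, effect sizes, and allele frequencies are made up for illustration, not real estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical purely additive architecture: each locus contributes a small,
# independent effect; the trait is just the sum of those effects.
n_variants = 10_000
effect_sizes = rng.normal(0.0, 0.01, n_variants)    # made-up per-allele effects
allele_freqs = rng.uniform(0.05, 0.95, n_variants)  # made-up allele frequencies

def genetic_score(genotype: np.ndarray) -> float:
    """Additive (linear) genetic score: a plain weighted sum over loci."""
    return float(genotype @ effect_sizes)

def random_genotype() -> np.ndarray:
    """0, 1, or 2 copies of the 'plus' allele at each locus."""
    return rng.binomial(2, allele_freqs)

def child_of(parent_a: np.ndarray, parent_b: np.ndarray) -> np.ndarray:
    """Each parent transmits one of their two alleles per locus (no linkage)."""
    return rng.binomial(1, parent_a / 2) + rng.binomial(1, parent_b / 2)

a, b = random_genotype(), random_genotype()
kids = [genetic_score(child_of(a, b)) for _ in range(200)]
print(np.mean([genetic_score(a), genetic_score(b)]), np.mean(kids))
# Under additivity the children's mean score tracks the parents' mean score,
# however genetically different the two parents are.
```

Under such a model a child of two genetically dissimilar parents is expected to land near the average of the parents' scores rather than collapse to a low value, which is the observation the comment appeals to.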
Evidence?
Harvard Law Review
Counter-evidence: affirmative action.
In any case, it’s interesting that Obama’s SAT (or ACT) scores are sealed as are his college grades, AFAIK.
HLS students of any skin color have high IQs as measured by standardized tests. The school’s 25th percentile LSAT score is 170, which is 97.5th percentile for the subset of college graduates who take the LSAT. 44% of HLS students are people of color.
When I see funny terms like “people of color” (or, say, “gun deaths”), I get suspicious. A little bit of digging, and...
Black students constitute 10-12% of HLS students. Most of the “people of color” are Asians.
No, actually, genetic studies of both milk production and IQ show them to be mainly linear.
That selective breeding has to be done slowly has nothing to do with genetic structure.
What kind of study do you think shows IQ to be mainly linear?
I would guess that you are confusing assumptions that the researchers behind a study make to reduce the number of factors with findings of the study.
There are decades of studies of the heritability of IQ. Some of them measure H², the full (broad-sense) heritability; some measure h², the narrow-sense heritability; and some measure both. Narrow-sense heritability is the linear part, a lower bound for the full broad-sense heritability. A typical estimate of the nonlinear contribution is H² - h² = 10%. In neither case do they make any assumptions about the genetic structure. Often they make assumptions about the relation between genes and environment, but they never assume linear genetics. Measuring h² is not assuming linearity, but measuring linearity.
This paper finds a lower bound for h² of 0.4 and 0.5 for crystallized and fluid intelligence, respectively, in childhood. I say lower bound because it only uses SNP data, not full genomes. It mentions earlier work giving a narrow sense heritability of 0.6 at that age. That earlier work probably has more problems disentangling genes from environment, but is unbiased given its assumptions.
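For reference, the standard quantitative-genetics notation behind these terms (this is the textbook decomposition, not anything specific to the linked paper):

```latex
\[
H^2 = \frac{V_G}{V_P}, \qquad
h^2 = \frac{V_A}{V_P}, \qquad
V_G = V_A + V_D + V_I, \qquad
V_P = V_G + V_E
\]
```

where V_A is additive genetic variance, V_D dominance variance, V_I epistatic (interaction) variance, and V_E environmental variance; the "nonlinear contribution" discussed above is H² - h².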
The linked paper says:
If you have 3511 individuals and 549,692 SNPs you won't find any nonlinear effects. 3511 observations of 549,692 SNPs is already overfitted; 3511 observations of 549,692 × 549,691 gene interactions is even more overfitted, and I wouldn't expect the four principal components they calculate to find an existing needle in that haystack.
Apart from that it’s worth noting that IQ is g fitted to a bell curve. You wouldn’t expect a variable that you fit to a bell curve to behave fully linearly.
No, they didn’t try to measure non-linear effects. Nor did they try to measure environment. That is all irrelevant to measuring linear effects, which was the main thing I wanted to convey. If you want to understand this, the key phrase is “narrow sense heritability.” Try a textbook. Hell, try wikipedia.
That it did well on held-back data should convince you that you don’t understand overfitting.
Actually, I would expect a bell curve transformation to be the most linear.
They didn’t do well on the gene level:
Analyses of individual SNPs and genes did not result in any replicable genome-wide significant association
No, the fact that you can calculate a linear model that predicts h² in a way that fits 0.4 or 0.5 of the variance doesn't mean that the underlying reality is structured in a way that genes have linear effects.
To make a causal statement that genes work in a linear way, a summary statistic of h² is not enough.
I would not recommend making confident pronouncements which make it evident you have no clue what you are talking about.
While I haven't worked with the underlying subjects in the last few years, I did take bioinformatics courses by people who had a clue what they were talking about, and the confident pronouncements I make are what I learned there.
OK, let’s try a simpler piece of advice: first, stop digging.
No, it was assumed that genes controlling milk production were linear because it was much easier to study them that way, and unfortunately over time many people came to simply accept that as true, when it has never been proven (in fact it's been proven conclusively otherwise).
Simply put it on OkCupid as an additional question that’s important to you.
Ahem. A transhumanist woman wanting to have a genetically engineered baby would do well to start with a sperm bank where she can screen many donors for a good genetic baseline.
Sorry, males :-/
In your scenario, a transhumanist man would do the same with egg banks, and then rent a healthy womb.
Also possible.
Actually, since we’re genetically engineering anyway, we should be able to combine genetic material from two males or two females (or just clone, of course). And once an artificial womb gets developed you won’t need to rent anything, um, living.
In any case, not too many prospects for dating :-/
You and me baby ain’t nothin’ but mammals, so let’s do it like they do on the Discovery Channel suddenly acquires a whole new meaning X-D
Why? People want intimacy for a thousand reasons other than breeding.
Which is precisely why “let’s genetically engineer our possible children” isn’t a great start.
Let's start thinking about the appropriate lines now so that in ten years' time we (or those of you young enough to still have children a decade hence) will have the skills to win over appropriate mates.
This should be a separate thread: Best Pickup Lines in a Transhumanist Bar :-)
Are you whole body or just head?
In a universe where you have people of both classifications, that could become mildly rude.
We do have both classifications. People who have whole body cryonics insurance and people who have head cryonics insurance.
I was picturing a universe in which the people were already unfrozen and healthy; it might be rude to ask things like “Is this your original set of limbs?”
But you are correct, and that didn’t occur to me.
My reading is heavily culture dependent. Presently many women object to their partners signing up for cryonics. For a transhumanist who signs up for cryonics it's valuable to screen for women who are okay with cryonics.
In a transhumanist bar that wouldn't be necessary. Asking "Are you whole body or just head?" with the goal of finding out someone's cryonics status presupposes that rejection of cryonics isn't a concern. That's what makes "Are you whole body or just head?" funny.
I want to grow old and not die with you.
/blinks
I don’t want to grow old.
Perhaps Calien takes “grow old” to mean “accumulate years and experience and memories” rather than “accumulate wear and tear and damage”.
That’s called “growing wise” :-P
gjm’s interpretation is what I was going for. Chronological age only! (Warning: link to TVTropes) I wasn’t sure how to keep the same form and still have it flow nicely.
Yes, you should do this.
First, you should establish a Transhumanist Bar :-)
Want children in maybe ten years, might work on me.
That line might actually work on some people. It might work on me if I were more inclined to parent.
If we’re assuming artificial wombs are widely used, humanity effectively becomes a eusocial species.
I don’t know about that. I suspect that at this point things get really interesting and probably really unstable for a while :-/
Let’s assume that she has the typical desire to be married to the child’s father.
And that her partner (if she is in a heterosexual relationship) wants children, or at least does not want to be cuckolded.
Really see no reason to assume that an avant-garde transhumanist woman would stick to such traditional trappings of the old patriarchy :-P
Since the ratio of women/men willing to do this will be low, willing women will have lots of dating market power. It would be silly for them to not use this power to get a high quality mate/provider.
Why do you think so?
Provided she needs or wants one. And provided she wants a male one. I know lesbian families with lots of children.
Men are greater risk takers and are far more likely to be transhumanists.
Sperm banks simply do not cater to transhumanists. They first and foremost screen donors for sperm count (to ensure that they can make the most money out of every stored sample). Sperm count isn't strongly correlated with intelligence.
After sperm count, important factors for sperm banks are: Physical health, height, and weight.
Plus, sperm donors are mostly a self-selected bunch, and I’d guess that men who are in no immediate need of money would not wake up in the morning thinking of donating sperm.
Finally, upbringing is probably a far more important factor than mere genetics; a wise mother would want to ensure availability of the father for childrearing.
There are men who want to "spread their DNA" and therefore donate sperm for reasons besides money.
Not that I've used them, but as far as I understand, sperm banks provide a fair amount of data on sperm donors, including education. If you stick with Ph.D.s, the baseline IQ level should be decent. Besides, sperm banks are a customer-oriented business. They will look for factors which women demand.
Exactly. Most women demand attributes that they themselves find attractive in mates e.g. height and other appearance-related factors. Transhumanists don’t make up most of the female population.
Sure, but the great advantage of sperm banks is that you can easily filter a large number of possibilities.
At the lets-genetically-engineer-super-IQ level you’d probably want to start by paying the sperm bank for whole genome scans of several likely candidates.
I think civilisation is in danger even disregarding the Fermi paradox.
There’s no need to wait five years to start a transhumanist dating service. Suppose you want to have genetically enhanced kids in ten years time, presumably you would still want to date now. If you are looking for a long-term relationship now, then you would want it to be with someone you could have kids with one day.
The biggest problem is that transhumanists are mostly male. I wonder if this will change, given that transhumanism is becoming increasingly mainstream?
Gwern has written an article for Wired, allegedly revealing the true identity of Satoshi Nakamoto:
http://www.wired.com/2015/12/bitcoins-creator-satoshi-nakamoto-is-probably-this-unknown-australian-genius/
Just a tangential question: am I the only one perfectly happy not to know who Satoshi really is?
I am perfectly happy not to know who Satoshi is, but I also have a well-developed curiosity :-)
I wouldn't mind knowing myself. However, I don't think having Satoshi's identity publicly known would be good for bitcoin.
An anonymous source supplies Gwern with juicy facts about this man Wright, they see print after a few weeks, and then within hours his home is raided by Australian federal police. I am reminded that the source of the Watergate leaks was in fact the deputy director of the FBI...
A friend of mine who knows much more about cryptography and computers than I do points me to evidence that documents using the same public keys as Satoshi were back-dated and faked:
https://www.reddit.com/r/Bitcoin/comments/3w027x/dr_craig_steven_wright_alleged_satoshi_by_wired/cxslii7
I hardly know what to make of this myself, but she seems convinced.
While we are on the topic of someone trying to fake Satoshi's identity, the LW account http://lesswrong.com/user/Satoshi_Nakamoto/overview/ is worth noting. It stopped posting after I linked it to an attempt to establish a fake identity for Satoshi. It might be useful to compare the stylometry of those posts with Wright's.
From the linked Wired article:
Gwern’s comment in the Reddit thread:
These comments seem to partly refer to the 2013 mass archive of Google Reader just before it was discontinued. For others who want to examine the data: the relevant WARC records for gse-compliance.blogspot.com are in line 110789824 to line 110796183 of greader_20130604001315.megawarc.warc, which is about three-quarters of the way into the file. I haven't checked the directory and stats grabs and don't plan to, as I don't want to spend any more time on this.

NB: As for any other large compressed archives, if you plan on saving the data, then I suggest decompressing the stream as you download it and recompressing into a seekable structure. Btrfs with compression works well, but blocked compression implementations like bgzip should also work in a pinch. If you leave the archive as a single compressed stream, then you'll pull all your hair out when you try to look through the data.

Gwern, what's your credence that Wright is Satoshi?
Follow-up—after we’ve all had some time to think about it, I think this is the best explanation for who this would-be SN is:
https://www.reddit.com/r/Bitcoin/comments/3w9xec/just_think_we_deserve_an_explanation_of_how_craig/cxuo6ac
Bug report: the antikibitzer’s toggle button (which appears at the top right of the browser window’s content area) doesn’t work correctly for me (on recent Firefox on Windows) because the loop that attempts to identify the antikibitzer stylesheet fails. It fails because an earlier stylesheet in the list (actually, the very first) has a null href.
A simple fix is to change the obvious line in antikibitzer.js to this:
but I make no guarantee that this is the fix the author of the code would prefer.
Some uncomfortable questions I’ve asked myself lately:
Could you go without intentionally listening to music for 30 days?
I recall being taught to argue towards a predetermined point of view in school and in extra-curricular activities like debating. Is that counterproductive or suboptimal?
Listening back to a recording I made of a therapy session when I was quite mentally ill, I feel amazed at just how much I have improved. I am appalled by the mode of thought of that young person. What impression do the people around me have that they won’t discuss openly?
Aren't storm water drain explorers potentially mapping out critical infrastructure which may then be targeted more easily by terrorists? One way I see these things going is commercial drain tours. That way there would be a legitimised presence there and perhaps enhanced security.
something to be asked of academia
Imagine a person who was abused for a large part of their childhood and is subsequently traumatised and mentally ill, and who, upon regaining greater functioning as an adult, decides to extort their abusive parents for money with the threat of exposing them while still counting on an inheritance, instead of simply going to the authorities and pursuing a legal settlement (expecting that would cut off any pleasant relations). Are their actions unconscionable? What would you do in their situation?
If I went straight to a family member without preparing them in advance, would they consent to my cryonics application? Would they support a cryonics application?
Do most people really think like this?
The rate at which I come up with ideas that I feel are worthwhile business ventures is unmanageable. So, I’ll take a leaf out of the EA Ventures method webpage by asking: what are three existing organizations that are doing similar things and why aren’t you joining them?
Upvoting for applied learning: Previously these would each be their own comment; you asked what you were doing wrong, somebody mentioned the number of comments, and you appear to have updated your behavior.
Extortion is by definition illegal; on the other hand, making an informal settlement is quite okay. It depends a lot on the details.
Withholding an inheritance from someone because you abused him and he dared to take you to court is unethical. So this “extortion” is being used as the only way to get compensation while avoiding being the victim of further unethical behavior.
So no, it’s not unconscionable, although whether it’s legally extortion would require asking a lawyer.
If you find yourself alternating between different psychological moods every few months, appalled by how you used to think, you may be suffering from bipolar disorder. Since you go to a therapist I assume that if you have it, it’s been diagnosed by now, so I’m mostly saying this for the benefit of people reading this.
This has been talked about before. One suggestion is to not make it a habit.
Can you rephrase this?
Depends on what you mean by "abuse". A lot of what's been called "child abuse", e.g., spanking, isn't. On the other hand, legitimate abuse happens as well.
What is an example of ‘legitimate abuse’?
This for example.
From Omnilibrium:
Should fundamental science be funded by international agencies?
Integration of Muslim Immigrants: US vs. Europe
What’s more important Laffer Curve or Income Inequality?
Should the US radically increase spending on Voice of America?
There are intelligent people speaking, without attacking each other. When they add facts, I am going to suppose those facts are likely true. That's already better than 99.99% of the internet.
Yet there seems to be no conclusion, and even the analysis seems rather shallow.
What do you mean when you say "conclusion"?
Well, currently it seems to me like this:
A new topic.
Person X says something smart.
Person Y says something smart.
Person Z says something smart.
Everyone moves to the next topic.
That’s okay if your goal is signalling smartness. It’s okay-ish if your goal is to have more information about the topic. I haven’t read much, but the debating style still feels adversarial—people are intelligent and polite, but they still give arguments for one side or for the other, so at the end of the day you still get Team Pro and Team Con.
The missing part is someone saying "these are arguments for, these are arguments against; after weighing them carefully, this seems like an optimal solution." (And in Aumann's ideal world, all participants of the debate would agree.)
Why? Because sometimes we ask questions when we need answers. If you ask “Is it better to do X, or to do Y?”, and you receive three smart answers supporting X, and three smart answers supporting Y, at the end of the day you still don’t know whether you should do X or Y. (Though if you make your decision, using whatever means, now you have three great arguments to support it. There is even a button on the website that will filter them for you.)
The goal of reading a site exploring a political question shouldn’t be that the reader comes away with: “I don’t need to think myself, the community decided that X is right, so I support X because I want to support what my tribe has chosen to support.”
Ideally the person leaves with a mind that’s more open than when they came.
This is how I accept 99% of information about the world. I have never seen an atom, never been in Paris, still believe they exist. A community I consider trustworthy about the topic has decided that they exist, and I don’t have time to personally verify everything.
Getting more inputs for your independent research is great if you do have time and other resources necessary to do the research. Making the inputs public is also good, because some of the participants may have the time. But inputs without conclusion is still an incomplete work.
Is adjusting probabilities towards 50% a good thing?
I don't think that openness is mainly about probabilities, but most people are heavily overconfident about most of their political positions, so moving the probabilities closer to 50% is a good thing.
The world would be a much better place if more people would respond to "Is policy A better than policy B?" with "I don't know" instead of "Policy A is better because my tribe says it's better."

Let's ask, instead of "are there atoms?", "is helium a molecule?". Thomas Kuhn wrote about the issue: you get a different answer to "is helium a molecule?" depending on who you ask. Does that mean that you should adjust probabilities towards 50% on the question of "is helium a molecule?"? No, it wouldn't make any sense to average the 100% certainty of the physicist that helium is not a molecule with the 100% certainty of the chemist that it is towards 50%.

I would want participants who read a political discussion to come away thinking that there are multiple ways of looking at the debate in question.
That’s basically rejecting skepticism. Skepticism is about being okay with the fact that you don’t have a conclusion to every question. Keeping questions open for years is important for understanding them better.
That’s a very special kind of question: one that’s almost entirely about definitions of words. It shouldn’t be a surprise to anyone here that different people or groups use words in different ways, and therefore that questions about definitions often don’t have a definite answer.
Many many questions have some element of this (e.g., if some etymology enthusiast insists that an “atom” must be indivisible then the things most people call atoms aren’t “atoms” for him, and for all we know there may actually be no “atoms”) and that’s important to know. But this doesn’t look to me like a good model for political disagreement; word definitions aren’t usually a big part of political disagreements.
(What is usually a big part of those disagreements is divergence between different people’s or groups’ values, which can also lead to situations where there’s no such thing as The Right Answer.)
Unless you allow the “conclusion” to be something like “We don’t yet have enough information to know whether A or B is the better course of action”, or “A is almost certainly better if what you mostly care about is X, and B is almost certainly better if what you mostly care about is Y”, or “The dispute between A and B is mostly terminological”. All of which I’m guessing Viliam would be fine with; it looks to me like what he’s unsatisfied with is debates that basically consist of some arguments for A and some arguments for B, with no attempt to figure out what conclusion—which might well be a conclusion with a lot of uncertainty to it—should follow from looking at all those arguments together.
On one level, yes, this is just a definition issue.
On a deeper level no, because particular answers to such questions place the phenomena into a specific framework. Notice that two answers to “is helium a molecule” arose not because two people consulted two different dictionaries. They arose because these two people are used to thinking about molecules in very different ways—both valid in their respective domains.
In that sense this "special kind of question" could be about defining terms, but it also could be about the context within which we examine the issue.
I agree that the disagreement about whether a helium atom should be considered a molecule is related to what mental framework one slots the question into. I don’t think this in any way stops it being a disagreement about definitions of words. (In particular, for the avoidance of doubt, I am not taking “X is a disagreement about definitions” to imply “X is trivial” or anything of the kind.)
The physicist and chemist in Kuhn’s story could—I don’t know whether they would if actually asked—both have said something like this: “It turns out that there are a few different notions close enough together that we use the word “molecule” for all of them, and they don’t all agree about what to call a helium atom”. Again, this is far from what happens in most political disagreements.
For the avoidance of doubt again, I am not denying that some political disagreements are like this. For instance, there are cases where two sides would both claim to be maximizing equality, but one side means “treat everyone exactly the same” and another means “treat everyone the same but compensate for inequalities X, Y, and Z elsewhere”. I suggest that this is actually best considered a disagreement about values rather than about definitions, though. (Each group prefers to define “equality” in a particular way because they think what-they-call-equality is more important than what-the-other-guys-call-equality.)
Well, yes, because in the political context “framework” very often means “value framework”. However both definitions and frameworks matter—it is still the case that the argument will get nowhere until people agree on the meaning of the words they are using.
It feels as if you may be trying to correct a mistake I’m not making. I agree that definitions matter. As I said two comments upthread:
Nope, you just have all your defensive shields up and at full power :-) I am agreeing with you here.
Full power is more dramatic than that :-).
My experience is that if someone begins “Well, yes” rather than, say, just “Yes”, their intention is generally something less positive than simply agreeing with you. (“Well, yes. What kind of idiot would need that to be said explicitly?” “Well, yes, but you’re forgetting about X, Y, and Z.” “Well, yes, I suppose so, but I don’t think that’s actually quite the right question.”)
I’ll work on augmenting my expressions of enthusiasm :-)
For Thomas Kuhn it's an issue of different paradigms.
When we look at the questions of atoms then saying: “Atoms exist.” likely means “Thinking of matter as being made up of atoms is a valuable paradigm.”
Lavoisier came up with describing oxygen as a new element. In doing so he rejected the paradigm that chemistry should analyse principles like phlogiston, and instead thought of matter as being made up of atoms.
Calling oxygen dephlogisticated air is more than just an issue of calling it a different name. It’s an issue at the heart of the conflict of two scientific paradigms.
Both the phlogiston theory and the oxygen theory successfully predict that if you put a glass over a candle, the candle will go out. The oxygen theory says that it's because there is no oxygen left in the air. The phlogiston theory says that it's because the air is so full of phlogiston that it can't take any additional phlogiston.
Phlogiston chemistry was a huge improvement over the chemistry of the four elements, which neither explained nor predicted that the candle would go out.
Understanding the different paradigms through which to look at a political issue is often an important part of having a political debate. It moves the issue beyond tribe A vs. tribe B. Of course you can have a tribe A vs. tribe B political discussion, but often that's not the kind of political debate that I like to have.
In reality, the kind of conclusions that parliaments draw from political debate are laws that fill hundreds of pages and specify all sorts of little details that happen to be important. If the GBS does policy documents, then specifying details and coming to a conclusion makes sense, but I don't think that's a good goal for a discussion on a forum like Omnilibrium.
A question about Omnilibrium. The FAQ states
So what beliefs generally cluster the optimates and populares? I’ve been wondering this, and it seems fairly opaque as an outside observer, but I’m sure that people who regularly use the site have picked up on it.
There are two noticeable differences between the optimate/populare and the traditional left-wing/right-wing politics:
1) Traditional politics is much better approximated by a binary. A person's views on one significant issue, such as feminism, pretty accurately predict positions on foreign policy, economics, and environmental issues. By comparison, optimate/populare labels have much less predictive power. While there is a significant correlation between populare (optimate) and left (right)-wing views on economics and foreign policy, both optimates and populares are much more likely to cross ideological lines on individual issues.
2) On average, both populares and optimates are more libertarian and less religious than the traditional left and right.
I’m not sure what VoA should do beyond what it already does. It already provides a wide range of free programming to the world in a bunch of different languages. The programming—so far as I’ve seen it—is terrible and completely unconvincing for foreigners. On the other hand, home-grown youtube networks like The Young Turks seem to already have a large following from non-American viewers, despite being targeted towards Americans, and seem to do a much more effective job in exporting Western values to people who don’t already believe in them.
Investigative journalism costs money. Even in the US it’s hard to fund it in a for-profit way as shown by outlets like the New York Times employing fewer investigative journalists. VoA should fund investigative journalists in other countries.
The Young Turks aren’t doing genuine news. They comment on what various other people report and do little research into the subjects they cover.
To the extent that what VoA is producing is terrible, they should produce better content. Focus on material that gets shared in the target nations via social media. I see stories from RussiaToday from time to time on my facebook feed. There's no good reason why VoA shouldn't be able to do the same thing in the countries it targets.
Pay local bloggers with regime critical views to write stories. If needed allow them to publish stories under a pseudonym if the story would get them thrown in prison otherwise.
We aren’t talking about journalism here. We are explicitly talking about propaganda. Or counter-propaganda, if you prefer.
Much like the rest of the media. And, again, genuine news is off-topic. Although they do tend to bring into focus some subjects that the rest of the media is hesitant to cover.
No use if they get blocked, thrown in prison, etc. And even if not, it would most likely turn out to be very counter-productive if it emerged that anti-government bloggers were paid off.
Not really. Alex Jones is speaking critically about the US system, but the factual background of what he says is poor. While he does have a relatively large audience, he doesn't strongly affect the political system.
To do effective propaganda you need to actually engage with the reality on the ground. Michael Hastings couldn't have written an article that forced Stanley McChrystal into resignation without doing investigative reporting.
Quite a lot of mainstream reporters do pick up the phone to call people to research a story. The Young Turks just seem to pick up news stories and then have a few people sit together to talk about what they think about them.
Specifically, Russian propaganda in my country (don’t know if it works the same everywhere) usually markets itself as “the news for people who are not satisfied with propaganda and censorship in mainstream media”. Obviously, any information inconvenient for Putin’s regime is called “propaganda”, and the fact that Putin’s propaganda is not published in our mainstream media is called “censorship”. The target group seems to be people believing in conspiracy theories, and young people.
Essentially, they are trying to role-play Assange, while publishing the same stuff you would find on official Russian TV. Plus the conspiracy theories, because everything that puts the West in a bad light is a bonus. (Yes, that includes even theories like "vaccination causes autism", because vaccination = big pharma = capitalism = the West.)
One tool in the toolset is providing a lot of links to "suppressed information" and encouraging readers to do their "research" for themselves. Basically, instead of one propaganda website, you have a dozen websites linking to each other, plus to some conspiracy theories written by third-party bloggers. And it works, because people who follow the links do have the feeling that they did research, that they are better informed than the rest of the population, and that there is a lot of important information that is censored from the official media. If you ever had an applause light of "the internet will bring freedom of speech and make the old media obsolete", it feels like you are in the middle of it when you read that stuff.
So, having a network of websites debunking Russian propaganda—using the engaging language of blogs, instead of the usual boring language of newspapers—would provide some balance. (Of course it would only take 10 seconds for all the “independent” websites to declare that all these websites are paid by evil Americans, but they already keep saying that about everything that opposes them.)
Paradox at the heart of mathematics makes physics problem unanswerable
Undecidability of the Spectral Gap (full version) by Toby Cubitt, David Perez-Garcia, Michael M. Wolf
Ask an unbounded question, get an uncomputable answer by Scott Aaronson
‘Outsiders’ Crack 50-Year-Old Math Problem
Extreme Self-Tracking
Man has himself MRIed twice a week for a year and a half, plus tracking a lot about his life. The data mining is still going on, but at least it’s been shown that (probably) people’s connectomes change pretty rapidly.
I’m also posting this to the media thread because I’m not sure where it’s more likely to be seen.
Calling what was mapped here a 'connectome' is REALLY stretching it. When they make those graphs of parcels connected to each other, what they're doing is just measuring the correlation between activity as revealed by an fMRI (which is itself removed from activity, measuring the short-term fluctuations in bloodflow that result from energy requirements) in different parcels of the brain and drawing a 'connection' when the coefficient is high enough. Correlation is not the same thing as connection.
I do note that there was diffusion tensor imaging (which shows you the average orientation of fibers in any given voxel [and showed an unusual crossing mixed fiber feature in a spot of his corpus callosum and will probably show similar oddities throughout the brain in any given human] ) and I will try to get at that information once I am past a paywall later on, but the repeated MRIs appear to be fMRIs.
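A rough sketch, in Python, of the "correlate, then threshold" construction being criticized above; the parcel count, time-series length, and threshold are placeholders, not values from the study:

```python
import numpy as np

def correlation_graph(timeseries: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Build a binary 'connectivity' matrix from parcel time series.

    timeseries: array of shape (n_parcels, n_timepoints), e.g. the mean BOLD
    signal per brain parcel. The 'connectome' here is just thresholded
    correlation, which is the point of the criticism above: it reflects
    co-activation, not anatomical wiring.
    """
    corr = np.corrcoef(timeseries)            # (n_parcels, n_parcels) Pearson r
    adjacency = np.abs(corr) >= threshold     # draw an edge when |r| is high enough
    np.fill_diagonal(adjacency, False)        # ignore self-correlation
    return adjacency

# Toy usage with random data standing in for real scans.
rng = np.random.default_rng(0)
fake_bold = rng.normal(size=(100, 300))       # 100 parcels, 300 timepoints
print(correlation_graph(fake_bold).sum(), "edges")
```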
I don’t think the paper is paywalled: link
MRIs: a lot of different scans including 15 T1 and T2 weighted structural scans; 19 diffusion-weighted scans. fMRI was mostly resting state (100), but also included various tasks such as n-back (15x), motion discrimination/stop signal (8x), object localiser (8x), verbal working memory localiser (5x), spatial WM (4x), breath holding (18x).
Oops! In whatever case I’m back from my situation-with-effectively-only-mobile-internet-access and have the paper now.
The repeated scans were indeed fMRIs measuring correlation of metabolic activity (a good proxy for activity) under various conditions. They made one diffusion tensor map from all their diffusion data (multiple scans). They saw correlations between a fiber-tract map they generated from the diffusion data (you plop down seed points in the cortex and other places and let fibers follow the main directions of diffusion) and their various activity-correlation maps, and the correlation was strongest for areas very close to each other on the brain and weak for longer fibers, especially inter-hemisphere fibers, quite possibly because the tractography has a harder time getting those. The diffusion data also tended to show denser connections for stronger functional correlations, though as we see, the instantaneous state can change the activity correlation quite a bit even though the white-matter fiber tracts are not going to change that much on fast timescales. The fact that correlations are different in different activities illustrates that day-to-day changes don't need to lie entirely in the gross physical structure that shows up on scans of this type.
The actual layout of fibers at this coarse layer of detail is one thing of several that would contribute to activity correlations, including chemistry and actual engagement of said tract fibers for that particular activity and in that particular state, and all the fine molecular twiddling and potentiation at synapse scales.
Not only is data mining still going on by the group who published the paper, but Russ Poldrack (first author and subject of the study) is a very vocal proponent of open science: data associated with this publication have been made freely available for anyone else as well: openfmri.org
Also see this blog post where he discusses the creation of an open analysis platform (and the challenges in setting up analysis pipelines that are reproducible by others).
Interesting article on vox (not a new one, but it’s the first time I’ve seen it and I thought I’d share; apologies if it’s been featured here before) on ‘how politics makes us stupid’: http://www.vox.com/2014/4/6/5556462/brain-dead-how-politics-makes-us-stupid
tl;dr: The smarter you are, the less likely you are to change your mind on certain issues when presented with new information, even when the new information is very clearly, simply, and unambiguously against your point of view.
In an adversarial setting—e.g. in the middle of culture warfare—this is an entirely valid response.
If you just blindly update on everything and I control what evidence you see, I can make you believe anything with arbitrarily high credence. Note that this does not necessarily involve any lying, just proper filtering.
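A toy illustration of that point with made-up numbers: every reported observation below is genuine, but only favourable ones are reported, and a naive updater who treats each report as an unfiltered random sample ends up arbitrarily confident in a false hypothesis.

```python
from fractions import Fraction

# Two hypotheses: the coin is fair (p=0.5) or heads-biased (p=0.8).
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(4, 5)}
posterior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}  # neutral prior

# The adversary flips a genuinely fair coin but only reports the heads.
# The naive updater applies Bayes' rule to each reported flip as if it
# were an unfiltered observation.
for _ in range(20):                     # 20 reported heads
    for h in posterior:
        posterior[h] *= p_heads[h]
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

print(float(posterior["biased"]))       # ~0.9999, and it keeps climbing with more reports
```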
That’s just rationalization. Again, even in the context of a simple hypothetical example with very clear and unfiltered evidence, participants were not willing to change their minds. I suggest you look at the actual study.
What is just rationalization? It seems pretty obvious to me that if your stream of evidence is filtered in some way, you should be very wary about updating on it.
Yes, but that does not apply to this study; the participants weren't even willing to acknowledge the statistical results when they disagreed with their point of view, let alone change their minds.
About your point about having evidence selectively presented, it’s easy to discard all information that disagrees with your worldview. What’s hard is actually changing your mind. If you believe that there is a ‘culture war’ going on with filtering of evidence and manipulation from all sides, the rational response would be to look at all evidence skeptically, not just the evidence that disagrees with you. Yet in that study, participants had no problem accepting statistical results that agreed with them or that they didn’t have strong political opinions about. And, importantly, this behaviour got worse for smarter people.
I’m not much interested in that particular study. I’m discussing your tl;dr which is
You, clearly, think this is bad. I, on the contrary, think that in certain situations—to wit, when your stream of evidence is filtered—NOT updating on new information is a good idea.
I feel this is a more interesting issue than going into the details of that study.
Also, as George Orwell said “There are some ideas so absurd that only an intellectual could believe them”.
While that is the way Ezra Klein is interpreting it, I don't think that's exactly right. It's not that smart people are less likely to change their mind; it's that smart people who are also partisan are less likely to change their mind. The combination of intelligence and closed-mindedness is dangerous; I would agree. But I believe intelligence is correlated with open-mindedness, so this is either a very narrow effect (which is what Ezra Klein seems to be suggesting) or an artifact of the study design.
Actually, never mind for part of this. I had assumed they were using the median to divide between conservative and liberal in which case people who identified as moderate would be thrown out, but they’re using the mean which is most likely a number in between the possible options, so everybody gets included. Moderates are included with either liberals or conservatives; I’m not sure which.
I don't think open-mindedness is the same as the ability to get the math right for emotionally charged topics. The ability to get the math right in contexts like that is part of what Keith Stanovich wants to measure with the rationality index.
Unfortunately in writing the article Vox themselves seem to have fallen prey to some of the same stupidity; if you’re familiar with Vox’s general left-wing sympathies you’ll be unsurprised that the examples of stupidity used in the article are overwhelmingly from right-wing sources. If you really want to improve people’s thinking, you need to focus on your own tribe at least as much as the enemy tribe.
I previously wrote about this here.
The example they give is actually anti gun control (it is a contrived example of course) and they repeatedly mention that the biases in question affect individuals who identify as left-wing as well as individuals who identify as right-wing.
Why? I looked at your linked article and the two articles it links to and I can’t find any proof that doing what you say would result in fewer disagreements than not doing that.
World’s first anti-ageing drug could see humans live to 120
Anyone know anything about this?
The drug is metformin, currently used for Type 2 diabetes.
It seems like the drug trial is funded by the American Federation for Aging Research (a nonprofit). The likelihood of success isn't high, but one of the core reasons for running the trial seems to be to have the first anti-aging drug trial and get the FDA to develop a framework for that purpose.
3000 patients who are between 70 and 80 years old at the start of the study. Metformin is cheap to produce, so the trial isn't too expensive for the nonprofit that funds it.
There is discussion on Hacker News. tl;dr: Don’t hold your breath.
Please, not another bias! An evolutionary take on behavioural economics by Jason Collins
Wake me up when evolutionary biology can predict all those 165 things from first principles and a very little input the way modern astronomy can predict the motion of planets.
I would agree that a collection of biases points to a need for a theory, but I don’t think such a theory is likely to be central to the economics model simply because those deviations are irrelevant in a large number of cases. Simple rational expectations can be quite predictive of human behavior in many cases even though it is clearly completely absurd. Think of the relationship between quantum mechanics and relativity. Relativity doesn’t seem to fit at all in any reasonable way into quantum mechanics, and yet relativity is quite useful and accurate for problems at the atomic level and above.
Jason Collins’ reasoning can be used for almost any scientific theory imaginable. If you examine any scientific discipline closely enough, you will find deviations which don’t fit the standard model. But the existence of deviations does not necessarily prove the need for a new model; particularly if those deviations do not appear to be central to the model’s primary predictions. I would say a rigorously tested theory of how cognitive biases develop and are maintained may provide some useful insights into economics, but that it’s unlikely that they will disprove the basic model of supply and demand.
There are other issues in that essay. Present bias isn't normally considered a bias; it's referred to in economics as temporal preference. Hyperbolic discounting has from its conception been considered an issue of preference, and only later as one of rationality. He then discusses conspicuous consumption in the context of mating signals, except that isn't a new idea either. Economics already has a theory of signalling that roughly matches what he is referring to, and it has already considered social status as a type of signal and conspicuous consumption as a way to signal social status. That also isn't an issue of rationality, but one of preference.
I don’t think that the fact that Wikipedia has a list of 165 cognitive biases says more about Wikipedia than it says about behavioural economics.
The core idea from Kahneman is that humans use heuristics to make decisions.
Evolution certainly affected human cognition, but most designs for intelligent agents don't produce intelligent agents. The space of sets of heuristics that produce intelligent agents is small. It's not clear that you can make an intelligent agent out of something like a neural net that doesn't engage in something like the availability heuristic. Confirmation bias isn't something substantially different from the availability heuristic in action.
When Google's dreaming neural nets reproduce quirks of the human brain, it's hard to argue that those quirks exist because they provide advantages in sexual competition.
That's basically saying Darwin was wrong and that his critics, who objected to organisms evolving without objectives, were right. Darwin wasn't controversial because he invented evolution; Lamarck had already done that decades before Darwin. Darwin was controversial because he proposed to get rid of teleology.
Economics makes errors because it assumes that humans have objectives. You don’t fix that by explaining how humans have different objectives. You fix it by looking at the heuristics of human beings and also studying heuristics of effective decision making in general.
Seems like some people replace the teleological model of “it evolved this way because the Spirit of Nature wanted it to evolve this way” by a simplistic pseudo-evolutionary model of “it evolved because it helps you to survive and get more sex”.
Nope. Some things evolve as side effects of the things that help us “survive and get more sex”; because they are cheaper solutions, or because the random algorithm found them first. There are historical coincidences and path-dependency.
For example, the fact that we have five fingers on each hand doesn't prove that having five fingers is inherently more sexy or more useful for survival than six or four. Instead, historically, the fish that were our ancestors had five bones in their fins (I hope I remember this correctly), and there was a series of mutations that transformed them into fingers. So, "having fingers" was an advantage over "having no fingers", but the number five got there by coincidence. Trying to prove that five is the perfect number of fingers would be trying to prove too much.
Analogously, having an imperfect brain was an advantage over having no brain. But many traits of the brain are similar historical artefacts, or design trade-offs, or even historical artefacts of the design trade-offs of our ancestors. A different history could lead to brains with different quirks. Using "neural nets" (as opposed to something else) already is a design decision that brings some artifacts. Having the brain divided into multiple components is another design decision; etc. Each path only proves that going this path was better than not going there; it doesn't prove that this path is better than all possible alternatives. Some paths could later turn out to be dead ends.
I agree that treating humans as “rational beings with objectives” can be a nice first approximation, but later it’s just adding more epicycles on a fundamentally wrong assumption.
Hey everyone,
This is my first post!
This is what I’ve been wondering lately:
Who is the best sales person in the world? Who knows?
‘Sales competitions’ generally refers to ‘in-house’ competitions established by managers to motivate their sales people to compete against one another.
Recently I began thinking about the prospects for a ‘world sales tournament’ of sorts:
Successful sales people have lots of money. But sales is derided, whether in real estate, ‘charity mugger’ fundraising, or even tendering for defence contracts.
What if we could take their money, convert it to prestige, and take a smooth commission on the whole thing?
Sales tournaments! The World Series of Sales! The Sales Olympics. Major League Sales. Who says sales people ought to pay an entry fee anyway (except, perhaps, to keep the number of entrants manageable)? If there are companies out there with products or services to sell, getting the best, most competitive sales people in the game to sell them is a highly desirable service in itself. In return for product to sell, those product and service companies could sponsor the competition.
Sales people have difficulty switching industries, despite their highly tuned sales skills. Product and industry knowledge is easy to pick up, but soft skills are tougher to gain. Nevertheless, recruiters are reluctant to pick up sales people from other industries—the numbers don’t always make sense. Someone working for a luxury car dealership may have huge sales numbers, but a luxury handbag dealer might not be able to translate those numbers over. Having a high sales ranking, in a similar way that programmers are ranked on coding competition websites, could make for a highly desirable piece of career capital.
An online ‘quick and dirty’ version could be coded for email marketers and telemarketers and conducted in a distributed fashion, in the likeness of coding competitions. But a large-scale, potentially TV-friendly version could be much more profitable.
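To make the ranking idea concrete, here is a rough sketch of how an Elo-style rating (the kind used in chess and on coding competition sites) could be adapted to head-to-head sales rounds. This is only an illustration under my own assumptions: the function names and the K-factor are made up, not part of any existing system.

```python
# Toy Elo-style rating update for a head-to-head sales round.
# Names and the K-factor are illustrative assumptions, not an existing system.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A outsells B, under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_ratings(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one sales round."""
    expected_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: a newcomer (1500) beats an established competitor (1700).
print(update_ratings(1500, 1700, a_won=True))  # -> roughly (1524.3, 1675.7)
```

Whether single head-to-head rounds are the right unit of competition is, of course, an open design question; team events or cumulative revenue targets would need a different scoring rule.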
There’s an EA aspect to this too. Rationalisation of sales human resources may make Effective Altruism fundraising more quantified. And, with the birth of this idea here, the earliest competitions, or the non-profit ones, could be ‘selling’ those GiveWell-recommended charities as options for prospective donors. Major League Fundraising/Philanthropy! The same could be said about politics—if this becomes ubiquitous, voters could try to adjust their interpretations of the policy offerings of politicians by their sales ability.
I reckon there could even be ‘team sales’. People might barrack for particular sales teams they’re affiliated with—say the Farmers Marketing Cooperative of California (made-up name) may consist of 10 members, but be supported by hundreds of farmers. Then, when it comes to a competition, say to raise money for Development Media International, one of GiveWell’s charities, people would support them like a sports team. Imagine that, people caring as much about charity as about sports teams, or their online gaming leagues! In a sense, this is the gamification of sales.
If you have read this far, you are the kind of person I want to help me build this. Please do get in touch, preferably both publicly as a comment here and privately (yes, twice! Once so people know who’s involved, and again for contact details which you may prefer not to disclose publicly), with a contact email address, so I can keep everyone involved in the loop and we can decide upon a work cycle.
Equity split for the founding team, including myself, will be by negotiation among all of us. I foresee an equal split of total equity—an equal partnership for those involved.
I think fundraising competitions for GiveWell recommended charities would be a valid activity.
I see two ideas here:
1) Create a mechanism for price discovery of standardized sales ability
Cool. I think the world would be a better place if a robust market existed for every good and service imaginable. Markets = better information = better decisions
2) Gamification of sales
Maybe I lack imagination, but I don’t see how this would be entertaining. Then again, there are a lot of successful reality shows based on pawn shops and real estate agents and other boring stuff, so...?
At the highest echelons of sales, relationships are more important than soft skills. Obviously, soft skills are highly related to a salesperson’s relationships, but in the same way capital is related to income. Soft skills determine how fast your relationship asset increases. While soft skills are transferable between industries, relationships, in general, are not. Relationships are also path-dependent in a way soft skills are not.
The best salespeople in the world are, depending on how much of a role you think luck plays into the equation, either heads of sales or CEOs at Fortune 500 companies, or simply highly-talented salespeople spread throughout high-level sales careers.
My guess is that a sales tournament would be a sufficiently simulated environment that it would train skills similar to, but not the same as, those used in actual sales. It would also be optimized for dramatic contests, which isn’t quite the same thing as real world sales.
I’m from Baltimore, MD. We have a Baltimore meetup coming up Jan 3 and a Washington DC meetup this Sun Dec 13. So why do the two meetups listed in my “Nearest Meetups” sidebar include only a meetup in San Antonio for Dec 13 and a meetup in Durham NC for Sep 17 2026 (!)?
Whoever is running the meetup needs to make Meetup Posts for each meeting before they show up on the sidebar. IIRC regular meetups are often not posted there if the creator forgets about it. You can ask the person who runs the meetups to post them on LW more often or ask them if you can post them in their stead.
I run the San Antonio meetup and you are very welcome to attend here if it’s the nearest one to you!
Not sure what you mean by this. I actually posted the meeting for the Baltimore area myself.
The Baltimore and Washington DC meetups do show up if I click on “Nearest Meetups”, just that they appear in the 5th and 8th spots. That list appears to be sorted first by date and then alphabetically. The San Antonio meetup appears at the #4 slot, and the Durham meetup does not appear at all.
Basically the “nearest” part of nearest meetups seems to be completely broken.
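For what it’s worth, making “nearest” mean something would only take sorting by great-circle distance before sorting by date. Here’s a minimal sketch, assuming each meetup record carries a latitude and longitude; the field names and the example records are made up for illustration, not how LW actually stores meetups.

```python
# Sketch of how "Nearest Meetups" could actually sort by distance.
# The meetup dicts and their field names are hypothetical, for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_meetups(meetups, user_lat, user_lon, limit=2):
    """Sort upcoming meetups by distance from the user, breaking ties by date."""
    return sorted(
        meetups,
        key=lambda m: (haversine_km(user_lat, user_lon, m["lat"], m["lon"]), m["date"]),
    )[:limit]

# Example: a user in Baltimore should see Baltimore and DC before San Antonio.
meetups = [
    {"city": "San Antonio", "lat": 29.42, "lon": -98.49, "date": "2015-12-13"},
    {"city": "Washington DC", "lat": 38.91, "lon": -77.04, "date": "2015-12-13"},
    {"city": "Baltimore", "lat": 39.29, "lon": -76.61, "date": "2016-01-03"},
]
print([m["city"] for m in nearest_meetups(meetups, 39.29, -76.61)])
# -> ['Baltimore', 'Washington DC']
```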
For me, at the moment, it shows:
It doesn’t show the Berlin Meetup which is the city where I live and which I have put into my LW profile.
Man, that Durham date sure disconfirms the idea that your meetup isn’t soon enough :)
And hmm, just having one far-future meetup post is a clever way to just keep your meetup in the list permanently, like how meetup.com groups have permanent pages, with the actual meetup schedule being a part of that group page.
If the usual short, intermittent gazes of a conversation are replaced by gazes of longer duration, the target interprets this as meaning that the communication is less important than the personal relationship between two people...Traffic police wear reflecting, mirrored glasses to help reduce the possibility of an argument; irate or nervous drivers can be put off a confrontation if they not only cannot see the eyes of the policeman but are also forced to see their own eyes. They experience objective self-awareness, seeing themselves as objects and not seeing those they are engaging with.
I’m thinking about people’s capacity for emotional healing—I believe this is possible because people have a base state to aim at, even if it’s a slow and somewhat indirect process. My question is whether it would be possible to build something like this into an AI, since I assume an AI (even if not in a society of AIs) could either have mistakes built into its structure or make mistakes when changing itself.
I am not very well versed in AI at all. But reading this, my automatic response is to question how an emotional response is different from any other response, for an AI.
I understand that emotional responses are different in complexity than trivial responses. But I think of emotional responses (for an AI) as fitting somewhere in the fairly straightforward continuum between “what color should my desktop be” and “how do I judge the validity of a moral structure to apply to humans when they cannot agree on any meaningful criteria themselves”. I would assume that even AI emotional ecology is closer to the complex side of the spectrum, but it seems like a problem that should be fully open to internal inspection and modification by the AI—or if it is limited, at least no more difficult to adjust than any equally important calculation.
Building an AI with a hidden subconscious seems like an unfortunate combination of stupid and malicious. The most likely reason for such a thing to exist that I can think of is as a hidden backdoor to allow humans to manipulate the AI without it knowing what is going on, but inducing schizophrenia-like symptoms is probably not the sane way to control our constructs.
But I may be under-applying important concepts—particularly, I may be underestimating the importance of emergent properties, especially in a hard takeoff scenario.
I brought up emotional healing because I’d recently read a strong example of it, and because, as I said, people seem to have a base state of emotional health—a system 1 which is compatible with living well. (You raise a bunch of interesting points, by the way.) It seems as though people have some capacity for improving their system 1 reactions, though it tends to be slow and difficult.
Let’s see if I can generalize healing for AIs. I’m inclined to think that AIs will have something like a system 1 / system 2 distinction—subsystems for faster/lower cost reactions and ways of combining subsystems for slower/higher cost reactions. This will presumably be more complex than the human system, but I’m not sure the difference matters for this discussion.
I think an AI wouldn’t need to have emotions, but if it’s going to be useful, it needs to have drives—to take care of itself, to take care of people (if FAI), to explore not-obviously useful information.
It wouldn’t exactly have a subconscious in the human sense, but I don’t think it can completely keep track of itself—that would take its whole capacity and then some.
What is a good balance between the drives? To analogize a human problem, suppose that an FAI starts out having to fend off capable UFAIs. It’s going to have to do extensive surveillance, which may be too much under other circumstances—a waste of resources. How does it decide how much is too much?
This one isn’t so much about emotional healing, though emotions are part of how people tell how their lives are going. Suppose it makes a large increase in its capacity. How can it tell whether or not it’s made an improvement? Or a mistake? How does it choose what to go back to, when it may have changed its standards for what counts as an improvement?
I have a different view of AI (I do not know if it is better or more likely). I would see the AI as a system almost entirely devoted to keeping track of itself. The theory behind a hard takeoff is that we already have pretty much all the resources to do the tasks required for a functional AI; all that is missing is the AI itself. The AI is the entity that organizes and develops the existing resources into a more useful structure. This is not a trivial task, but it is founded on drives and goals. Assuming that we aren’t talking about a paperclip maximizer, the AI must have an active and self-modifying sense of purpose.
Humans got here the messy way—we started out as wiggly blobs wanting various critical things (light/food/sex), and it made sense to be barely better than paperclip maximizers. In the last million years we started developing systems in which maximizing the satisfaction of drives stopped being an effective strategy. We have a lot of problems with mental ecology that probably derive from that.
It’s not obvious what the fundamental drives of an AI would be—it is arguable that ‘fundamental’ just doesn’t mean the same thing to an AI as it does to a biological being… except in the unlucky case that AIs are essentially an advanced form of computer virus, gobbling up all the processor time they can. But it seems that any useful AI—those AI in which we care about mental/emotional healing—would have to be first and foremost a drive/goal tuning agent, and only after that a resource management agent.
This almost has to be the case, because the set of AIs that are driven first by output and second by goal-tuning are going to be either paperclip maximizers (mental economy may be complex, but conflict will almost always be solved by the simple question “what makes more paperclips?”), insane (the state of having multiple conflicting primary drives each more compelling than the drive to correct the conflict seems to fall entirely within the set that we would consider insane, even for particularly strict definitions of insane), or below the threshold for general AI (although I admit this depends on how pessimistic your view of humans is).
These are complex decisions, but not particularly damaging ones. I can’t think of any problem in this area that an AI should find inherently unhealthy. Some matters may be hard, or indeterminate, or undetermined, but it is simply a fact about living in the universe that an effective agent will have to have the mental framework for making educated guesses (and sometimes uneducated guesses), and processing the consequences without a mental breakdown.
The simple case would be having an AI predict the outcome of a coin flip without going insane—too little information, a high failure rate, and no improvement over time could drive a person insane, if they did not have the mental capacity to understand that this is simply a situation that is not under their control. Any functional AI has to have the ability to judge when a guess is necessary and to deal with that. Likewise, it has to be able to know its capability to process outcomes, and not break down when faced with an outcome that is not what it wanted, or that requires a change in thought processes, or simply cannot be interpreted with the current information.
There are certainly examples of hard problems (most of Asimov’s stories about robots involve questions that are hard to resolve under a simple rule system), and his robots do have nervous breakdowns… but you and I would have no trouble giving rules that would prevent a nervous breakdown. In fact, usually the rule is something simple like “if you can’t make a decision that is clearly best, rank the tied options as equal, and choose randomly”. We just don’t want to recommend that rule to beings that have the power to randomly ruin our lives—but that only becomes a problem if we are the ones setting the rules. If the AI has power over its own rule set, the problem disappears.
This is a complex question, but it is also the sort of question that breaks down nicely.
How big a threat is this? (Best guess may be not-so-good, but if AI cannot handle not-so-good guesses, AI will have a massive nervous breakdown early on, and will no longer concern us).
How much resources should I devote to a problem that big?
What is the most effective way(s) to apply those resources to that problem?
Do that thing.
Move on to the next problem.
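In toy code, that loop might look like the sketch below. Every function, problem, and number here is a made-up placeholder chosen purely to illustrate the breakdown above; it is not a claim about real AI design.

```python
# Toy sketch of the triage loop described above.
# All problems, estimates, and functions are illustrative placeholders.

def estimate_threat(problem: str) -> float:
    """Best guess of how big a threat this is (placeholder numbers)."""
    guesses = {"hostile UFAI": 0.9, "hardware failure": 0.3, "coin flip": 0.01}
    return guesses.get(problem, 0.1)

def handle(problem: str, budget: float) -> None:
    """Apply the budgeted resources in the most effective known way (placeholder)."""
    print(f"spending {budget:.2f} units on: {problem}")

def triage(problems, total_budget=1.0):
    threats = {p: estimate_threat(p) for p in problems}
    total = sum(threats.values())
    # Devote resources in proportion to estimated threat, worst first, then move on.
    for problem in sorted(problems, key=threats.get, reverse=True):
        handle(problem, total_budget * threats[problem] / total)

triage(["coin flip", "hostile UFAI", "hardware failure"])
```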
As I write this out, I see that a large part of my argument is that AIs that do not have good mental ecology with a foundation of self-monitoring and goal/drive analysis will simply die out or go insane (or go paperclip) rather than become a healthy, interesting, and useful agent. So really, I agree that mental health is critically important, I just think that it is either in place from the start, or we have an unfriendly AI on our hands.
I realize that I may be shifting the goal posts by focusing on general AI. Please shift them back as appropriate.
You’re probably right about the trend. I’ve heard that lizards do a lot less processing of their sensory information than we do. It’s amusing that people put in so much work through meditation to be more like lizards. This is not to deny that meditation is frequently good for people.
However, an AI using a high proportion of its resources to keep track of itself is not the same thing as it being able to keep complete track of itself.
In re the possibly over-vigilant/overreactive AI: my notion is that its ability to decide how big a threat is will be affected by its early history.
That’s where I started. We have an evolved capacity to heal. Designing the capacity to heal for a very different sort of system is going to be hard if it’s possible at all.
Facebook has open-sourced its AI hardware.
When I go to CFAR web page, my browser complains about the certificate. Anyone else having this problem?
It looks like you’re going to https://rationality.org rather than http://rationality.org. CFAR doesn’t have an SSL certificate (but maybe should get one through Let’s Encrypt).
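If anyone wants to check this sort of thing directly, here is a minimal sketch using Python’s standard library. It simply attempts a verified TLS handshake with the host and reports whether it succeeds; the function name is my own, and what it prints for a given site naturally depends on that site’s certificate at the time you run it.

```python
# Minimal sketch: does a host present a certificate that verifies for HTTPS?
import socket
import ssl

def https_cert_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()  # verifies the chain and the hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, ssl.CertificateError, OSError):
        return False

print(https_cert_ok("rationality.org"))  # False if the handshake or verification fails
```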
You’re right, but now I wonder how it happened (going to HTTPS). I would guess that I googled the address somehow or followed someone’s link, but I don’t remember anymore.
I think that by now both Chrome and Firefox will, by default, attempt an HTTPS connection if HTTP is not specified explicitly.
That is, going to “rationality.org” will by default do “https://rationality.org”. You can manually override that by specifying “http://rationality.org”, but, of course, few people bother.
Curious: Are there any (currently active) readers who are in Idaho, Eastern Washington, or Eastern Oregon?
This article discusses FAI, mentioning Bostrom, EY etc. It’s interesting to see how the problem is approached as it goes more mainstream, and in this particular case a novel approach to FAI is articulated: whole brain emulation (or biologically inspired neural nets) … on acid!
The idea is that the WBE will be too at-one-with-the-universe to want to harm anyone.
It’s easy to laugh at this. But I think there’s also a real worry that someone might actually try to build an AI with hopelessly inadequate guarantees of safety.
Having said that, perhaps the idea is not quite as crazy as it sounds. If WBE comes first, then some form of pseudo-drug-based behavioural conditioning is better than nothing for AI control, although I would have thought that modifying oxytocin to increase empathy would be the obvious strategy: digital MDMA, not LSD.
Tangentially, there seems to be a perception some people have that taking LSD causes prosocial values (or, at least, what they believe to be prosocial values), but there seems to be a real danger of confusing the direction of causality here—hippies do acid, and hippies hold certain values, but the causal direction is surely: hippy values → become hippy → take acid, not take acid → become hippy. Of course, perhaps acid might make the hippy values stronger, but that could be because the experience is interpreted within the structure of your pre-existing values. I have heard some (atypical?) neoreactionaries plan an acid trip, for spiritual reasons, and their values certainly appear different from hippy values. Of course, both the neoreactionaries and the hippies believe that they hold prosocial values, they just differ on what prosocial values are. Perhaps their terminal values are not so different, but they have very different models of the world?
To briefly go back to the original point, I think the author is conflating two things—just because ‘can we program an AI to hallucinate?’ is an interesting question (at least to some people), does not mean that it is an actually sensible proposition for FAI control. Conversely, just because this idea can trigger the absurdity heuristic, does not mean that ‘AI behavioural modification with drugs’ is an entirely useless idea.
So I’m attempting to adopt practices that will bring me closer to generally strategic behavior. I am also interested specifically in strategic/efficient studying. To that end I would like as much of an info dump as possible on the topic of failure.
This can include avoiding failure, preparing for failure even when avoiding it, how to notice when you are failing, and perhaps how to fail as gracefully as possible. I realize there is overlap/confusion here; I was simply rattling off primers for you to consider.
Please err on the side of inclusivity. I am not starting from a state of complete ignorance (the sequences can be turned towards my concerns handily), but best be safe.
Thank you for your help! :)
Edited for clarification, I hope.
Also for starts: What sorts of questions could someone ask to learn the most about failure?
I spent a long time coming up with theories about how I work and why, and it was a great waste of time. I now find it a lot more reliable to base my actions on generalizations about most or all humans, rather than coming up with idiosyncratic theories about myself. Idiosyncratic theories are likely to be based in introspection, which is notoriously unreliable and which humans are known for systematically overvaluing. (See introspection illusion.) I’ve found that a good rule of thumb is: Don’t use an idiosyncratic theory unless you also would’ve generated that theory about someone else by observing their current behavior and having knowledge of their past behavior. And even when idiosyncratic theories seem to work, they more likely work because they’re also explainable using the aforementioned generalizations.
This was a very useful topic to bring to the conversation, but I think I may have framed what I had in mind poorly. Did the edit clarify?
Your post contains no question at the moment. Specifying questions is useful for having discussions.
Thank you for pointing out my error. Did my editing clear up said issue?
There’s still no question in the original post. Questions are quite useful for exploring a new topic of interest. You might get some answers by seeking an info dump, but a concrete question would likely produce better discussion. It would also help you focus yourself.
I like this prompt, and it so happens I have a proper response that fits.
I’ve seen people talk of noticing failure, but, it thankfully having been a gentle one, they managed to make something of it. Sometimes people speak or write as if there may be some underlying method that can be separated out from luck.
While planning actions, is it a good heuristic to act in such a way that a fall would not break your legs, so to speak?
Well, look at that: you’ve helped me dissolve a question into a form that has an obvious answer. This is both nice (less clutter) and partly the reason I was asking for a dump. I’m trying to stumble across gaps in my understanding, not necessarily tangles (although again, thank you).
I suppose I expect to de-tangle my knowledge of this subject as I review anything possibly relevant. I just thought to ask here in conjunction with said review.
I’m trying to be as comprehensive as possible, which means I should ask the obvious first. Is the question now posed in the main post a respectable start?
The post as it now stands needs some serious proofreading.
LessWrong is likely to focus on cognitive biases, and this is a good place to start. I assume that you have already read some on the subject, but if not, we have a lot on site, and there are some good books—for example, The Invisible Gorilla and Mistakes Were Made. Everyone will have a different list of recommended reading, but I don’t know if that is the sort of info dump you are looking for.
I think that your question may be too general. Being more specific will almost surely give you more useful responses.
I suspect that your best bet would be to notice specific sub-optimal outcomes in your life, and then ask knowledgeable people (which may include us) for thoughts and information. If you have access to a trustworthy person who will give honest and detailed feedback, you might ask them to observe you in completing some process (or better, multiple processes) and take notes on any thoughts they have regarding your actions—things you do differently, things you do wrong, things that you do slower than most people, etc. They will probably notice some things that you do not. They may not know how to help you change, but that doesn’t make their information any less valuable.
Thank you for the feedback. This was a surprisingly useful line of interaction.
The first thing it did was make me remember that inferential gaps take caution, at the very least, to cross. Another way I failed was in not running my empathic models of people far enough; I knew people would realize what I was after was large and vague, but then trailed off into assuming people would actually want to rattle off in some randomly chosen direction available to them. Taken one iota more of a step, and I can feel how annoying such a prompt is.
And then I recalled something about A.I. safety; something along the lines of not being able to specify all the ways we don’t want an AI (genie?) to act; the nature of value or goal specification is too exclusive to approach from that direction efficiently. Reflection to see if I can be coherent about this will have to happen later.
As of this moment (2 am) it is unattractive to see if I am on to something or not. Thank you once more for the feedback. It feels like I’ve gained valuable responses.
Is anyone aware of a healthy diet plan that comes with pre-packaged meals?
http://www.mealsquares.com/
Thanks, I found Soylent-equivalent available in Europe.
I heard there are more options, but I only tried Joylent.
How was your experience?
I have tried it for a week (eating only Joylent, nothing else), and I was completely satisfied.
The taste was okay. Not great—but that is meta-great, because I was not tempted to overeat (which is usually my problem). What I was supposed to eat during one day was exactly what I ate during the day, and I didn’t feel hungry at any point. It was like: I am eating as much as I want to, whenever I want to; it’s okay but nothing special, merely fuel for my body. Perfect.
And it saved a lot of time. All the thinking about what I should cook, plus buying, cooking, and cleaning dishes—easily an hour a day—didn’t exist anymore. Convenient.
Then I stopped it mostly because my girlfriend didn’t want to join me, and cooking for one person is almost as much work as cooking for two people. (During that one week she was away, so I had a chance to try what it is like when no one at home is eating normal food.) So now I mostly use Joylent as a backup option, for example when I wake up late in the morning and I have to hurry to my job, so I don’t have to skip my breakfast completely.
My girlfriend and I had a debate about whether such food can be healthy. There are a few objections I consider reasonable, even assuming the food contains exactly what is advertised:
Just because it contains “100% of the recommended daily intake of everything”, it doesn’t obviously follow that your body needs everything on the same day. Hypothetically, what if your body processing some X prevents it from processing some other Y at the same time? You could have a balanced diet by eating 2X on one day and 2Y on the other day, but if you eat 1X+1Y every day, you may get Y-deficient.
What if there are molecules your body needs that medicine still does not know about? They can occur in some meals you would randomly eat once in a while, but they may be absent from the artificial food.
Unprocessed food or dairy products contain some friendly microorganisms, which will be missing from the artificial food.
But in my opinion, if you eat normal food once in a while, that should be safe enough. My opinion is that normal food should be something to enjoy, not a boring duty. If you don’t enjoy every single meal, you might as well replace the ones you didn’t enjoy with quick artificial food.
Part 2: ‘Which experts to trust’, ‘Limitations’ and ‘Practice’
Part 1: IS EXPERT OPINION A WASTE OF TIME? is available here
Which experts to trust
Now for an application:
An obvious example of a category of hedgehog that springs to mind is ideologues – everyone from anarcho-capitalists to Bayesians to ‘materialists’ to ‘ordinary people’ to ‘agnostics’ (just trying my best to insult the greatest number of people here, because not enough people recognise they’re as guilty of the things they might be looking for in others). The motivated reasoning bias springs to mind (after prompting by the ACERA paper).
Commiserate:
Important limitations to this research line
Practice
For any smart cookies out there, you must wonder – well, what can you do to get the most out of experts? It’s too much information to tell you about here, but I recommend ACERA’s
Elicitation tool, user manual and
Elicitation tool, process manual
Academic Delphi-style groups outperform baseline groups by around 50%. Could professional Delphi groups be formed to profit from stock and prediction markets? (A toy sketch of a single Delphi round follows the links below.)
and
UPenn’s Delphi decision aid, and this real-time Delphi tool
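And here is the toy sketch promised above of a single Delphi-style estimate-and-revise cycle. The pull-toward-the-median revision rule is my own simplification chosen for illustration; it is not the ACERA or UPenn protocol, and the numbers are hypothetical.

```python
# Toy sketch of Delphi-style aggregation: estimate, see the group median, revise, repeat.
# The pull-toward-median revision rule is an illustrative simplification, not a real protocol.
from statistics import median

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Run a few rounds in which each expert moves part-way toward the group median."""
    for _ in range(rounds):
        group_median = median(estimates)
        estimates = [e + pull * (group_median - e) for e in estimates]
    return median(estimates), estimates

# Hypothetical expert estimates of some quantity (say, a probability in percent).
final, revised = delphi_rounds([10, 35, 40, 55, 90])
print(final, [round(e, 1) for e in revised])
```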
Sincerely, Carlos Larity
Self-appointed claimant as the resident expert expert
*Penned in order to get my karma back at around 11. I aimed to have a net zero karma – balancing more controversial stuff with popular content, just enough to not discredit myself. Though, I just found out that I actually can’t upvote when my karma is zero, and I need ’11 more’ to do so.
And, just a fun additional link on reddit for you to check out. Remember not to trust experts, particularly not experts in experts...or hedgehogs!
Confused about agnostics and ordinary people here.
Part 1: IS EXPERT OPINION A WASTE OF TIME?
Part 2 on ‘Which experts to trust’, ‘Limitations’ and ‘Practice’ available here
To the lay person, graphs are intimidating. Atmospheric science is notoriously complex. Expert judgement is a ‘next best’ option, then perhaps what is socially normative and marketable.
To the lay person, how can expert judgement be interpreted? Who even gets counted as an expert? We frequently hear about a ‘scientific consensus’ but also hear from seemingly erudite ‘skeptics’, who use graphs that are compelling but uninterpretable in the broader context of all the information around.
It looks like the naive algorithm for evaluating evidence, while a naive Bayesian conclusion, is not particularly efficient in some important cases:
This has real world consequences:
I feel climate change is a good example of a thing which is allegedly highly important but extremely complex, where deferral to experts is probably prudent. However, knowing how to relate to expert evidence is then important, particularly if you, like me, are unsure about what to do with all the expertise floating around while action is sluggish, prompting me to wonder – what’s going on here?
So why is a structured approach to interpreting experts useful in general? Let my friends from ACERA tell it:
After doing some research, I’ve come up with a few notes. They are not in my own words, because they are written about adequately by others:
Who is an expert?
-ACERA
Should we trust experts?
Among experts in ecology: ‘No consistent relationships were observed between performance and years of experience, publication record or self-assessment of expertise.’.
-ACERA
My impression is that we shouldn’t naively take the judgements of experts to be simply superior to amateurs/lay people. As counterintuitive as it is: experts are more accurate than amateurs, but amateurs are more precise.
Expert overconfidence is one thing. Expert underperformance relative to simple equations is another:
Given the length, it would make more sense to move this to its own post in Discussion instead of having it in the open thread.
My impression after interviewing dozens of academics from various health-related fields is that career advancement among these researchers pertains more to signalling that the work is being done than actually doing the work. Thiel discredits this arrangement in Zero to One as dysfunctional.
Which academics?
They’re all from the same institution. It’s one of the Group of Eight universities in Australia. I interviewed just 10 academics for around 30 minutes each. Small sample size, and just one uni, so not necessarily generalisable.
Why can’t seasoned politicians handle a windbag millionaire?
Is it because he is a caricature of them?
A lot of problems that the establishment has been ignoring for decades, e.g., illegal immigration, out-of-control PC policing, pensions for government employees crowding out other spending, are starting to become critical, and the seasoned politicians don’t know how to address these problems. In fact they probably can’t be addressed without upsetting established interests to whom the seasoned politicians are beholden.
So we are locked into a stable, nowhere-near-optimum equilibrium. :(
It’s not stable. The problems I mentioned are getting worse.
Good point.
I don’t think it makes sense to call Trump a ‘windbag millionaire’. Trump is a billionaire because he’s good at dealmaking, which is a relevant skill for political campaigning.
Going on the offensive against Fox News and then having Fox News fold is an example of a political move that was likely calculated, and that no other Republican candidate could have pulled off the same way.
Trump and the other politicians of this election season show that the model of political electioneering that is about polling and then saying whatever polls best isn’t the only one that works.
By calling for the US to bomb Daesh’s oil industry, Trump has already achieved concrete changes in US policy. Trump isn’t stupid or uncalculated, even when his public persona gives the impression that he’s uncalculated.
It’s not clear that that’s true. E.g., earlier this year someone asked the question: If Trump had just taken all his money in 1987 and put it in index funds rather than trying to grow it himself, what would have happened? The answer, apparently, is that he’d be about 3x richer than he is now.
The reason for picking 1987 is less than fully convincing, though, so it’s possible that there’s some misleading cherry-picking going on. This article, as well as quoting the other one, says that if he’d put his money in index funds in 1978 instead of 1987 he’d now be about twice as rich as he actually is. Trump apparently disputes both the “before” and “after” figures, but if instead we use his own figures for 1976 (why 1976? because that’s when we have figures from) he still ends up having underperformed the market.
So, I dunno, maybe he’s good at dealmaking, but it seems like the main reason he’s rich is that he inherited a fortune from his family. Everything he’s done since to grow his wealth could have been done at least about as well (and maybe much better) by just putting the money into the stock market.
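For anyone wanting to redo comparisons like this themselves, the arithmetic is just compound growth. Here is a minimal sketch; the starting wealth, annualized return, and current net worth below are hypothetical placeholder figures chosen for illustration, not the numbers from either article.

```python
# Sketch of the "what if he'd just bought index funds" comparison.
# The figures below are hypothetical placeholders, not the articles' actual numbers.

def index_fund_counterfactual(starting_wealth, annual_return, years):
    """Wealth after compounding at an assumed annualized total return."""
    return starting_wealth * (1 + annual_return) ** years

# Hypothetical: $500M in 1987, ~10% annualized total return, measured in 2015.
counterfactual = index_fund_counterfactual(500e6, 0.10, 2015 - 1987)
actual_net_worth = 4.5e9  # hypothetical current figure, for illustration only

print(f"counterfactual: ${counterfactual/1e9:.1f}B, "
      f"ratio vs actual: {counterfactual/actual_net_worth:.1f}x")
```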
That’s not a good comparison. There are many more people who inherited as much money and who haven’t increased their wealth.
But apart from pure numbers, if I observe Trump’s style of interaction with journalists who interview him, it looks to me a lot like Carl Icahn’s style. In both cases there’s an extreme amount of frame control that comes across as pretty rude.
That’s the kind of stuff that gets Scott Adams to write gushing articles about Trump. I think Scott Adams’s predictions are wildly overconfident, but I see what he means when he speaks about Trump’s language usage. Then again, I have had hypnosis training just as Scott Adams has, so it may be that the patterns are otherwise hard to spot.
It suggests that Trump has been less successful, in terms of turning money into more money, than the average business in one of those index funds. The fact that individual investors often do even worse than that by investing badly is neither here nor there.
Most businesses do worse than the average business in one of those index funds. Individuals don’t maximize for wealth. Trump has bought a big yacht. He doesn’t live the frugal life. Comparing apples to oranges makes no sense. You would have to compare Trump to similar people.
This is true, and a good point. But your original point was:
Based on this discussion, it looks like Trump is not a billionaire because he is good at dealmaking. He had multiple ways to become a billionaire (investing in index funds, investing in gold, dealmaking, inventing Facebook).
And as you now point out, he completely failed in utilizing the money; Trump is a billionaire because he was inefficient in finding good ways of exchanging his money for utility. Another area in which I would have outperformed him :-)
Given the path he has chosen, he wouldn’t be a billionaire if he were bad at dealmaking. It’s a relevant skill for winning political fights.
If you think that Trump just blabbers around what’s on his mind, you are massively misreading the situation.
It doesn’t follow from his doing worse than the other strategy that his strategy was the inferior one. It could be that the strategy he followed is just as good in expectation, but is subject to chance and he got unlucky.
You can’t say “strategy A produced a better result than strategy B, therefore strategy A is a better strategy” based on a single example of someone using strategy A.
The real point, for me, is not so much “Trump could have done better by investing in index funds”, it’s “Trump’s business underperformed the market”.
And, yes, underperforming the market over 30 years or so isn’t proof of anything much; he could just have been unlucky. But for the exact same reasons, the fact that Trump’s a billionaire isn’t proof of anything much; he could just have been lucky. (He was: he inherited a lot.)
The only point I’m making is this: the fact that Trump is rich is not very good evidence that he’s a great deal-maker. He’s rich mostly because he inherited a fortune; someone who had inherited the same fortune and just put it into the stock market would now be richer than he is; what (admittedly limited) evidence we have of his business skill is that he’s done worse than the market over the last few decades.
He might still be a great deal-maker. Or he might be a pretty terrible one. All I’m saying is that I don’t see evidence that he’s particularly good.
You have your example backwards.
We have a case of many many people using strategy A (index funds), and a single example of strategy B (Trump). And you can say that the strategy that worked lots of times is a better bet than the one that failed once. Strategy A is better in the limited sense that given our current information, it looks safer.
Is that true? Are there many billionaires who became billionaires through having most of their money in index funds?
I am assuming that investment in index funds is scalable and was therefore including in my sample all long term investors in index funds. If this strategy is not scalable, I withdraw my analysis.
The issue isn’t whether investment in index funds is scalable, but whether actually leaving a fortune sitting in index funds is something a billionaire would do.
I can’t imagine someone who has 1 billion dollars simply putting it into an index fund, doing nothing with it, and living life as if he didn’t have any money.
If you look at lottery winners, most of them are broke relatively soon and without any money. If a lottery winner has the same amount of money ten years later, that shows good financial skills relative to other lottery winners.
There aren’t many billionaires, full stop. (A little under 2000.) And if someone (say) inherited $500M or earned it by building and selling a very successful business, and then put the money into index funds and waited for it to grow to $1B … I don’t think we’d say they became billionaires through index funds, we’d say they became billionaires by inheritance or by growing a business.
Trump became a billionaire by inheriting a fortune. The fortune he inherited was less than $1B, and it happens that the path he took from <$1B to >$1B involved running a business, but he could have invested his money conventionally and done just as well.
I would expect that few billionaires have their wealth invested conventionally, though. To get really rich you generally either need to do something exceptionally lucrative or inherit from someone who did. In the first case, (1) the chances are that you have a drive to keep doing exceptionally lucrative things and (2) you’re likely to think—with some justification—that having done so well for yourself you can do better by carrying on than by just investing and relying on other people’s success.
In the second case, the chances are that a lot of that inheritance is in the business that made your family rich. Again, if it’s been so successful you’re likely to think it better to keep most of your wealth in that.
For how many billionaires does that happen to be true?
If that’s true for nobody, then comparing Trump to nobody doesn’t make much sense.
For how many billionaires does what happen to be true?
Became billionaires through having most of their assets in index funds.
That’s the same question you asked 5 comments upthread from this one. Apparently you found my answer unsatisfactory since you just asked (what I now understand to be) the exact same question again, but unless you care to indicate what was unsatisfactory about it I’m not sure what to say that I haven’t already.
Because a lot of people are tired of and disillusioned with seasoned politicians who they see as windbags on their way to becoming millionaires.
The usual explanation for Trump is that it’s a Rage Against the Machine thing.
Trump is a new problem. It can take time to figure out how to solve a new problem.
How were you expecting them to handle him?
You’re actually asking why they couldn’t handle him quickly. He may end up handled in the sense of not getting the nomination. It also seems as though it took Trump’s proposal to not let Muslims enter the US to really motivate the Republican heavyweights.
I don’t know how to reply to this thread as a whole, so I defaulted to this.
Like the Veiled Statue at Sais, I’m thinking this drama is revealing some truth about the US society and the US government. Some people recoil and want the veil restored, some want to see more and some don’t know what to do, but no one is neutral.
What does Game Theory suggest in this situation? Is a tie the best that can be done? I don’t think the “have you no decency?” retort will work here.
Also see DSM-IV, Narcissistic Personality Disorder, the first choice for any world leader according to Jerrold Post.
It’s a rerun.
Game Theory assumes a defined set of players. In politics there are many different players, each with different incentives and agendas.
It reveals that the US government is weak. Both on the Republican side and on the Democratic side with Bernie Sanders.
We had three presidents from Yale in a row. Then in 2004 two people from the same Yale fraternity ran against each other. In 2008 Obama, from the University of Chicago, was elected president. With Hillary the presidency might go back to Yale, but otherwise there are many people with really different backgrounds.
People seem to be fed up with politics as usual.
Yeah, I imagine having to choose between two former classmates, who are probably long-term buddies, can be demotivating. Even worse than the usual knowledge that most American presidents actually come from only a few “royal” families.
Voting for Trump or Sanders is another way to express “I want someone who does not belong to the old aristocracy”. The way the system is designed, if you don’t vote, it doesn’t matter; if you vote for a third party, unless you succeed in coordinating half of the population (rather difficult if the media push in the opposite direction), it still doesn’t matter… so the only way you are realistically allowed to rebel is to vote for the most eccentric candidate in the primaries.
Of recent presidents, only the Bushes were an established political family. Before the Bushes the most recent time a scion of a political family was in the White House was Kennedy.