Open Thread Feb 22 - Feb 28, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I found this paper interesting. At the start, the author tells an anecdote about existential risk:
This is a good example of optimizing for the wrong goal.
+1
LW might find that interesting:
Just by reading this phrase, I can conclude that everything else is probably useless.
Here is a shortened version:
Darwin’s grandfather believed in something similar to abiogenesis. Later in Darwin’s life, scientists found something that appeared to be the first proto-cell, but later they found a progenitor to this in oceanic mud. Darwin believed that the discovery of the first life form would occur soon, but it didn’t happen. Organisms that reproduce, metabolize energy, and create a cell wall require at least a hundred proteins, each of which has approximately 300 amino acids, and all need to be able to work with each other.
To reach this level of sophistication via chemical evolution defies explanation. The experiments in 1953 created some of the amino acids found in all life forms, but this is a far cry from creating proteins. The origin of life is one of those puzzles that has been right around the corner, for the past two centuries. Imagining something is not a scientific argument, but simply speculation. Many present all evolution as similar to how wolves changed to sheepdogs, or the way in which bacteria develop resistance to penicillin, but such change will not create radically new protein complexes or new species.
If evolution is untrue, it changes everything. After I accepted that a creator exists, I found myself attending church, engaging in Bible study, and reading Christian authors. Various facts all began to make much more sense.
But, apparently, it’s not a far cry for a supernatural person to create all universe(s).
My two cents, extrapolating from this and other converts (e.g. UnequallyYoked, which I still follow): there’s a certain tendency of the brain to want to believe in the supernatural. In those people who have this urge at a stronger level, but who come in contact with rationality, a sort of cognitive conflict is formed, like an addict trying to fight the urge to use drugs.
As soon as a hole in rationality is perceived, this gives the brain the excuse to switch to a preferred mode of thinking, whether the hole is real or not, and whether there exist more probable alternatives.
Admittedly, this is just a “neuronification” of a psychological phenomenon, and it does lower believers’ status by comparing them to drug addicts...
I’m a big Eliezer fan, and like reading this blog on occasion. I consider myself rational, Dunning-Kruger effect notwithstanding (ie, I’m too dumb or biased to know I’m not dumb or biased, trapped!). In any case, I think the above is pretty good, but I would stress the ID portion of my paper, which is in the PDF not the post, is that the evolutionary mechanism as observed empirically scales O(2^n), not O(n), generally, where n is the number of mutations needed to create a new function. Someday we may see evolution that scales, at which point I will change my mind, but thus far, I think Behe is correct in his ‘edge of evolution’ argument (eg, certain things, like anti-freeze in fish, are evolutionarily possible, others, like creating a flagellum, are not). As per the Christianity part, the emphasis on the will over reason gives a sustainable, evolutionarily stable ‘why’ to habits of character and thought that are salubrious, stoicism with real inspiration. Christianity also is the foundation for individualism and bourgeois morality that has generated flourishing societies, so, it works personally and for society.
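Eric’s O(2^n)-vs-O(n) point can be made concrete with a toy waiting-time calculation. This is my own illustrative sketch, not from his paper; the per-replication probability p = 0.5 is an arbitrary, unrealistically high number chosen only to keep the arithmetic visible.

```python
# Toy model of the claimed scaling difference. If all n required
# mutations must be present at once before selection can act, the
# chance of the full set in one genome is p**n, so the expected number
# of trials is (1/p)**n -- exponential in n. If each mutation is
# individually beneficial and fixes before the next is needed, the
# expected number of trials is roughly n * (1/p) -- linear in n.

def trials_all_at_once(n, p=0.5):
    """Expected trials when the n mutations only pay off together."""
    return (1 / p) ** n

def trials_stepwise(n, p=0.5):
    """Expected trials when each mutation is separately selectable."""
    return n * (1 / p)

for n in (5, 10, 20):
    print(n, trials_all_at_once(n), trials_stepwise(n))
```

The whole debate below is about which of these two regimes real protein evolution falls into: whether selectable intermediate steps exist between, say, a T3SS and a flagellum.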
My younger self disagreed with my current self, so I can empathize with and respect those who find my reasoning unconvincing, but I don’t think it’s useful in figuring things out to simply attribute my belief to bias or insecurity.
This is the part I cannot wrap my mind around: let’s say that evolution, as it’s presently understood, cannot explain the totality or even the birth of the life that evolved on this planet. How can one jump from “not explained by present understanding of evolution” to “explained by a deity”? I mean, why should the supernatural be more probable than, say, intelligent alien intervention, panspermia, or a passing black hole that happens to create a violation of the laws of biochemistry?
Have I understood correctly that, once it’s established for whatever reason that a deity exists, you choose which deity exactly based on your historical and moral preferences?
It is a serious mistake to assume that because something could happen by natural laws, it is automatically more probable than something which would be a violation of natural laws.
For example, suppose I flipped a coin 10,000 times in a row and always got heads. In theory there are many possible explanations for this. But suppose by careful investigation we had reduced it to two possibilities:
It was a fair coin, and this happened by pure luck.
God made it happen through a miraculous intervention.
Number 1 could theoretically happen by natural laws, number 2 could not. But number 2 is more probable anyway.
The same thing might well be true about explanations such as “a passing black hole that happens to create a violation of the laws of biochemistry.” I see no reason to think that such things are more probable than the supernatural.
(That said, I agree that Eric is mistaken about this.)
Just to be clear, this is obviously not what is happening with Eric. But let’s run with the scenario:
I would contend that this is not the case. If you think that n° 2 is more probable, I would say it just shows that the probability you assign to the supernatural is higher than 1 in 2^10,000 (besides, this is exactly Jaynes’ suggested way to numerically estimate intuitive probabilities).
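The arithmetic behind this estimate, as my own sketch: 10,000 heads from a fair coin has probability 2^−10,000, so judging the miracle hypothesis more probable commits you to a prior on miracles of at least roughly that size.

```python
from math import log10

# 10,000 heads from a fair coin: probability 2**-10000.
# In base 10 this is about 10**-3010, i.e. a 1 followed by
# three thousand zeros in the denominator.
n_flips = 10_000
log10_p_fair = n_flips * log10(0.5)  # log10 of the fair-coin probability
print(round(log10_p_fair, 1))        # roughly -3010.3
```

Whether a prior that small for a specific miraculous intervention is defensible is exactly the point in dispute.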
But your probability is just a prior: while n° 1 is justifiable by appealing to group invariance or symmetric ignorance, n° 2 just pops out of nowhere.
It certainly feels that n° 2 should be more probable, but the wrong answer also feels right in the Wason selection task.
This is what I was asking Eric: by what process were you able to eliminate every other possible explanation, so that the supernatural is the only remaining one?
I suspect also that, in your hypothetical scenario, this would be the same process hidden in the sentence “by careful investigation”.
The evolution of the flagellum works because proteins used in it are useful in other contexts:
Of the millions of proteins on the planet it is unremarkable most exist elsewhere, just as it’s likely most parts in a car can be found in other machines. Further, these aren’t identical proteins, merely ‘homologous’ ones, where there’s a stretch of, say, a 70% match over 40% of the protein, so that makes this finding not surprising (lug nut in engine A like fastener in engine B). A Type 3 Secretory System has about 1⁄3 of the proteins in a flagellum (depends which T3SS, which flagellum), but to get from one to the other needs probably ten thousand new nucleotides in a specific constellation, and nothing close to that kind of change has been observed in the lab w/ fruit flies or E. coli. So, it’s possible, but still improbable, like turning one of my programs into another via random change and selection; there are many similarities in all my programs, but it would just take too long. Possible is not probable, and unlike cosmological improbabilities, there’s no anthropic principle to save this. Pointing out homologs still leaves the problem of traversing a highly spiked fitness landscape, but if this is ever demonstrated on, say, a new protein complex in E. coli, I’d say, you win (but more complex than moving a gene closer to a promoter region as in citT).
Any present version of a protein that evolved >1,000,000,000 years ago is only homologous and not identical to its predecessor.
A billion years does happen to be really long, especially if you have very many tiny spots of life all around the planet that evolve on their own.
What works in a lab in a few years is radically different than what works in billions of billions of parallel experiments done for billions of years.
How do you judge something to be a new protein complex? Bacteria pass their plasmids around.
E. coli likely hasn’t gotten radically new protein complexes in the last millions of years, so anything it does presently is highly optimized and proteins are only homologous to their original functions.
I think you are more likely to find new things in bacteria that actually adapt to radically new environments.
The years thing seems to make everything probable, because we have basically 600 MM years of evolution from something simple to everything today, and that’s a lot of time. But it is not infinite. When we look at what evolution actually accomplishes in 10k generations, it is basically a handful of point mutations, frameshifts, and transpositions. Consider humans have 50MM new functioning nucleotides developed over 6 million years from our ‘common ape’ ancestor: where are the new unique functioning nucleotides (say, 1000) in the various human haplogroups? Evolution in humans seems to have stopped. Dawkins has said given enough time ‘anything’ can happen. True, but in finite time a lot less happens.
They’ve been looking at E. coli for 64,000+ generations. That’s where we should see something, and instead all we get is turning a gene that is sometimes on, to always on (citT), via a mutation that put it near a different promoter. That’s kinda cool, and I admit there’s some evolution, but it seems to have limits.
But, thanks for the respectful tone. I think it’s important to remember that people who disagree with you can be neither stupid nor disingenuous (there’s a flaw in the Milgrom-Stokey no-trade theorem, and I think it’s related to the ‘Fact-Free Learning’ paper of Aragones et al.)
There’s your flaw in reasoning. 64,000 generations is relatively tiny. But more importantly, bacteria today are highly optimized, while bacteria 2 billion years ago, when the flagellum evolved, weren’t. I would expect more innovation back then.
One example of that optimization is that humans carry around a lot of pseudogenes. Those are sequences that were genes and stopped being genes when a few mutations happened.
Carrying those sequences around is good for innovation as far as producing new proteins that serve new functions.
The strong evolutionary pressure that exists on E. coli today results in E. coli not carrying around a lot of pseudogenes. Generally, being near strong local maxima also reduces innovation.
If you want to look at new bacteria with radical innovations, look at the ones in Three Mile Island.
No, evolution in humans hasn’t stopped.
It is strong enough that natives’ skin color strongly correlates with their local sunlight patterns. We don’t only have black native people at the equator in Africa but also in South America. Vitamin D3 seems to be important enough to exert enough evolutionary pressure.
In areas with high malaria density in West Africa, 25% have the sickle cell trait. It has much lower prevalence in Western Europe, where there’s less malaria.
Western Europe has much lower rates of lactose intolerance than other human populations.
Those are the examples I can bring off the top of my head. There are likely other differences. Due to the current academic climate, the reasons for the genetic differences between different human haplogroups happen to be under-researched. I would predict that this changes in the next ten years, but you might have to read the relevant papers in Chinese ;)
The 10,000 Year Explosion disagrees; to quote my own earlier summary of it:
Allele variation that generates different heights or melanin within various races, point mutations like sickle cell, the mutations that generate lactose tolerance in adults, or that affect our ability to process alcohol, are micro-evolution. They do not extrapolate to new tissues and proteins that define different species. I accept that polar bears descended from a brown bear, that the short-limb, heat-conserving body of an Eskimo was the result of the standard evolutionary scenario. I have no reason to doubt the Earth existed for billions of years.
Humans have hundreds of orphan genes unique among mammals. To say this is just an extension of micro-evolution relies on the possibility it could happen, but you need 50MM new nucleotides that work to arise within 500k generations. Genetic drift could generate that many mutations, but the chance these would be functional assumes proteins are extremely promiscuous. When you look at what it takes to make a functioning protein within the state-space of all amino acid sequences, and how proteins work in concert with promoter genes, RNA editing, and connecting to other proteins, the probability this happened via mutation and selection is like a monkey typing a couple pages of Shakespeare: possible, but not probable.
This all argues for a Creator, who could be an alien, or an adolescent Sim City programmer in a different dimension, or a really smart and powerful guy that looks like Charlton Heston. The argument for a Christian God relies on issues outside of argument by design.
You claimed that evolution in humans seems to have stopped. Kaj_Sotala gave you evidence that it hasn’t. Of course the examples he gave were of “micro-evolution”; what else would you expect when the question is about what’s happened in recent evolution, within a particular species?
There’s some reason to think that most human “orphan genes” are actually just, so to speak, random noise. Do you have good evidence for hundreds of actually useful orphan genes?
I’m curious what you think the earth looked like during those billions of years. Scientists have pretty concrete ideas of what things were like over time: where the continents were, which species existed at which times, and so on. Do you think they are right about these things, or is it all just guesswork?
When I was younger I thought that evolution was false, but I started to change my mind once I started to think about that kind of concrete question. If the dating methods are generally accurate (and I am very sure that they are), it follows that most of that scientific picture is going to be true.
This wouldn’t be inconsistent with the kind of design that you are talking about, but it strongly suggests that if you had watched the world from an external, large scale, point of view, it would look pretty much like evolution, even if on a micro level God was inserting genes etc.
White skin, blue eyes, lactose digestion in adulthood… (some people say even consciousness)… are relatively recent adaptations.
What did you expect, tentacles? ;)
The question is: would somebody who builds his argument on one more missing step reverse his stance when that missing step is also found, or would he just point out the next currently missing bit?
I think he’s equivocating on “if evolution is untrue, it changes everything”. That statement is literally true in the same sense that “if I’m a brain in a jar, it changes everything” or “if the world was created by Zeus, it changes everything” are true. But that’s not what he’s using it to mean.
Of course; “it changes everything” doesn’t mean “I can stop using logic and just take my preferred fairy tale”. There are many possible changes.
Also by “evolution” he means “creating a life from non-life, in the lab, today”, because that’s the part he is unsatisfied with.
So, more or less: “if you cannot show me how to create life from non-life, then Santa must be real”.
Here is a great video that explains how abiogenesis happened.
Here is another great video on the evolution of the flagellum.
All of his videos are fantastic. And there is a great deal more stuff like that on youtube if you search around. It’s really inexcusable for an intelligent person to doubt evolution these days. The evidence is vast and overwhelming.
I’m not sure Eric is denying common descent (the subject of your last link). My impression is that he’s some sort of theistic evolutionist, is happy with the idea that all today’s life on earth is descended from a common ancestor[1] but thinks that where the common ancestor came from, and how it was able to give rise to the living things we see today given “only” a few billion years and “only” the size of the earth’s biosphere, are questions with no good naturalistic answer, and that God is the answer to both.
[1] Or something very similar; perhaps there are scenarios with a lot of “horizontal transfer” near the beginning, in which the question “one common ancestor or several?” might not even have a clear meaning.
[EDITED because I wrote “Erik” instead of “Eric”; my brain was probably misled by the “k” in the surname. Sorry, Eric.]
Well, he says:
If he doesn’t believe that species can become other species, he can’t believe in common descent (unless he believes that the changes in species happen when scientists say they happen, but he attributes this to God).
This is approximately what many Christians believe. (The idea being that the broad contours of the history of life on earth are the way the scientific consensus says, but that various genetic novelties were introduced as a result of divine guidance of some kind.)
I’m not sure whether this is Eric’s position. He denies being a young-earth creationist, but he does also make at least one argument against “universal common descent”. Eric, if you’re reading this, would you care to say a bit more about what you think did happen in the history of life on earth? What did the scientists get more or less right and what did they get terribly wrong?
Only the last link is about common descent. And it isn’t agnostic on theistic evolution; there’s a whole section on experiments for testing evolution through Random Mutation and Natural Selection. The first link covers abiogenesis, and the second the evolution of complicated structures like the flagellum.
I don’t think theistic evolution is that much more rational than standard creationism. It’s like someone realized the evidence for evolution was overwhelming, but was unable to completely update their beliefs.
That would be why I called it “the subject of your last link” rather than, say, “the subject of all your links”.
I do not think anything on that page says very much about whether the evolution of life on earth (including in particular human life) has benefited from occasional tinkering by a god or gods. (For the avoidance of doubt: I am very confident it hasn’t.)
I think it’s quite a bit better—the inconsistencies with other things we have excellent evidence for are subtler—but that wasn’t my point. I was just trying to avoid arguments with strawmen. If Eric accepts common descent, there is little point directing him to a page listing evidence for common descent as if that refutes his position.
In reality, of course, they don’t need any proteins, and it’s quite possible that the first cells were simply RNA-based.
Well, to understand rationality we should read about both successes and failures of human reasoning.
The equivocation of ‘created’ in those four points is enough to ignore it entirely.
It does happen to be a bit frightening to see an economics PhD doubt evolution. I think it would be good if someone like Scott Alexander wrote a basic “here’s why evolution is true” post.
I don’t think such a thing is possible. There are too many bad objections to evolution floating around in the environment.
The goal doesn’t have to be to address every bad objection. Addressing objections strong enough to convince an economics PhD, while also providing the positive reasons that make us believe in evolution, would be valuable.
Dawkins’ Greatest Show on Earth is pretty comprehensive. The shorter the work as compared to that, the more you risk missing widely held misconceptions people have.
I wouldn’t expect an economics PhD to give people better insights into biology. (Only indirectly, as a PhD in economics is a signal of high IQ.) A biology PhD would be more scary.
An economics PhD should understand that markets with decentralized decision making often beat intelligent design.
As an econ PhD, I’m theoretically amazed that multicellular organisms could overcome all of the prisoners’ dilemma type situations they must face. You don’t get large corporations without some kind of state, so why does decentralized evolution allow for people-states? I’ve also wondered, given how much faster bacteria and viruses evolve compared to multicellular organisms, why aren’t the viruses and bacteria winning by taking all of the free energy in people? Yes, I understand some are in a symbiotic relationship with us, but shouldn’t competition among microorganisms cause us to get nothing? If one type of firm innovated much faster than another type, the second type would be outcompeted in the marketplace. (I do believe in evolution, of course, in the same way I accept relativity is correct even though I don’t understand the theory behind relativity.)
In the absence of any state holding the monopoly of power, a large corporation automatically grows into a de facto state, as the British East India Company did in India. Big mafia organisations spring up even when the state doesn’t want them to exist. The same is true for various terrorist groups.
From here I could argue that the economics establishment seems to fail at its job when it fails to understand how cooperation can in fact arise, but I think there is good work on cooperation, such as that of Sveriges Riksbank Prize winner Elinor Ostrom.
If I understand her right, then the important thing for solving tragedy-of-the-commons issues isn’t centralized decision making but good local decision making by people on the ground.
The British East India company and the mafia were/are able to use the threat of force to protect their property rights. Tragedy of the commons problems get much harder to solve the more people there are who can defect. I have a limited understanding of mathematical models of evolution, but it feels like the ways that people escape Moloch would not work for billions of competing microorganisms. I can see why studying economics would cause someone to be skeptical of evolution.
Microorganisms can make collective decisions via quorum sensing. Shared DNA works as a commitment device.
Interesting. Given that your field seems to be about understanding game theory and exactly how to escape Moloch, have you thought about looking deeper into the subject to see whether the microorganisms do something useful on a wider scale that could advance economists’ understanding of cooperation?
Beliefs have to pay rent ;)
I have thought about studying in more depth the math of evolutionary biology.
The British East India Company was a state-supported group, so it doesn’t count. But you’re right that in most cases there is a winner-take-all dynamic to coercive power, so we’re going to find a monopoly of force and a de-facto state. This is not inevitable though; for instance, forager tribes in general manage to do without, as did some historical stateless societies, e.g. in medieval Iceland. Loose federation of well-defended city states is an intermediate possibility that’s quite well attested historically.
That wasn’t the argument I was making. The argument I was making was that, in the absence of a state that holds the monopoly of force, any organisation that grows really big is going to use coercive power and become state-like.
Sure, but that’s just what a winner-takes-all dynamic looks like in this case.
The argument is about explaining why we don’t see corporations in the absence of states. It’s not about explaining that there are societies that have no corporations, or that there are societies that have no states.
Large companies can definitely coexist with small states, though. For instance, medieval Italy was largely dominated by small, independent city-states (Germany was rather similar), but it also saw the emergence of large banking companies (though these were not actual corporations) such as the Lombards, Bardi and Peruzzi. Those companies were definitely powerful enough to finance actual governments, e.g. in England, and yet the small city states endured for many centuries; they finally declined as a result of aggression from large foreign monarchies.
You mean competition between cells in a multi-cellular organism? They don’t compete, they come from the same DNA and they “win” by perpetuating that DNA, not their own self. Your cells are not subject to evolution—you are, as a whole.
In the long term, no, because a symbiotic system (as a whole) outcompetes greedy microorganisms and it’s surviving that matters, not short-term gains. If you depend on your host and you kill your host, you die yourself.
Doesn’t this line of reasoning prove the non-existence of cancer?
No, I don’t think so. Cancerous cells don’t win at evolution. In fact, if they manage to kill the host, they explicitly lose.
Survival of the fittest doesn’t prove the non-existence of broken bones, either.
It seems to me that the better argument is more along the lines of “bodies put a lot of effort into policing competition among their constituent parts” and “bodies put a lot of effort into repelling invaders.” It is actually amazing that multicellular organisms overcome the prisoners’ dilemma type situations, and there are lots of mechanisms that work on that problem, and amazing that pathogens don’t kill more of us than they already do.
And when those mechanisms fail, the problems are just as dire as one would expect. Consider something like Tasmanian Devil Facial Tumor Disease, a communicable cancer which killed roughly half of all Tasmanian devils (and, more importantly, would kill every devil in a high-density environment). Consider that about 4% of all humans were killed by influenza in 1918-1920. So it’s no surprise that the surviving life we see around us today is life that puts a bunch of effort into preventing runaway cell growth and runaway pathogen growth.
I just don’t see those “prisoners’ dilemma type situations”. Can you illustrate? What will cells of my body win by defecting and how can they defect?
Cancer is not successful competition, it’s breakage.
That’s anthropics for you :-)
Consider something like Aubrey de Grey’s “survival of the slowest” theory of mitochondrial mutation. The “point” of mitochondria is to do work involving ATP that slowly degrades them, they eventually die, and are replaced by new mitochondria. But it’s possible for several different mutations to make a mitochondrion much slower at doing its job—which is bad news for the cell, since it has access to less energy, but good news for that individual mitochondrion, because less pollution builds up and it survives longer.
But because it survives longer, it’s proportionally more likely to split to replace any other mitochondrion that works itself to death. And so eventually every mitochondrion in the cell becomes a descendant of the mutant malfunctioning mitochondrion and the cell becomes less functional.
(I believe, if things are working correctly the cell realizes that it is now a literal communist cell, and self-destructs, and is replaced by another cell with functional mitochondria. If you didn’t have this process, many more cells would be non-functional. But I’m not a biologist and I’m not certain about this bit.)
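The “survival of the slowest” dynamic above can be sketched as a toy simulation. This is my own illustration, not de Grey’s model; the damage limit, work rates, and population size are all arbitrary numbers chosen to make the effect visible.

```python
import random

random.seed(0)

DAMAGE_LIMIT = 10.0
WORK = {"normal": 1.0, "mutant": 0.2}  # damage accrued per time step

# 90 normal mitochondria plus 10 slow mutants, with staggered initial wear.
pop = [("normal", random.uniform(0, 9)) for _ in range(90)] + \
      [("mutant", random.uniform(0, 9)) for _ in range(10)]

for _ in range(2000):
    # Every mitochondrion works and accumulates damage.
    pop = [(kind, dmg + WORK[kind]) for kind, dmg in pop]
    # Worn-out mitochondria die...
    survivors = [(k, d) for k, d in pop if d < DAMAGE_LIMIT]
    # ...and are replaced by fresh copies of randomly chosen survivors.
    while len(survivors) < 100:
        survivors.append((random.choice(survivors)[0], 0.0))
    pop = survivors

mutant_share = sum(1 for kind, _ in pop if kind == "mutant") / len(pop)
print(mutant_share)
```

Because the mutants die less often, they are overrepresented among the survivors at each replacement step, and their share of the population climbs toward 1.0: selection inside the cell favors exactly the variant that is worst for the cell.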
Recall that we are talking about evolution. Taking the Selfish Gene approach, it’s all about genes making copies of themselves. Only the germ-line cells matter, the rest of the cells in your body are irrelevant to evolution except for their assistance to sperm and eggs. The somatic cells never survive past the current generation, they do not replicate across generations.
Your mitochondrion might well live longer, but it still won’t make it to the next generation. The only way for it to propagate itself is to propagate its DNA, and that involves being as helpful to the host as possible, even at the cost of “personal sacrifice”. Greedy mitochondria, just like greedy somatic cells, will simply be washed out by evolution. They do not win.
I’m well aware. If you don’t think that evolution describes the changes in the population of mitochondria in a cell, then I think you’re taking an overly narrow view of evolution!
I happen to be male; none of my mitochondria will make it to the next human generation anyway. (You… did know that mitochondrial lines have different DNA than their human hosts, right?)
But for the relevant population—mitochondria within a single cell—these mutants do actually win and take over the population of the cell, because they’re reproductively favored over the previous strain. And if we go up a level to cells, if that cell divides, both of its descendants will have those new mitochondria along for the ride. (At this level, those cells are reproductively disfavored, and thus we wouldn’t expect this to spread.)
That is, evolution on the lower level does work against evolution on the upper level, because the incentives of the two systems are misaligned. Since the lower level has much faster generations, you’ll get many more cycles of evolution on the lower level, and thus we would naively expect the lower level to dominate. If a bacterial infection can go through a thousand generations, why can’t it evolve past the defenses of a host going through a single generation? If the cell population of a tumor can go through a thousand generations, why can’t it evolve past the defenses of a host going through a single generation?
The answer is twofold: 1) it can, and when it does that typically leads to the death of the host, and 2) because it can, the host puts in a lot of effort to make that not happen. (You can use evolution on the upper level to explain why these mechanisms exist, but not how they operate. That is, you can make statements like “I expect there to be an immune system” and some broad properties of it but may have difficulty predicting how those properties are achieved.)
(That is, the lower level gets both the forces leading to ‘disorder’ from the perspective of the upper system, and corrective forces leading to order. This can lead to spectacular booms and busts in ways that you don’t see with normal selective gradients.)
That may well be so, but still in the context of this discussion I don’t think that it’s useful to describe the changes in the population of mitochondria in an evolutionary framework (your lower level, that is).
Unless you have a sister :-) Yes, I know that mDNA is special.
There is also the third option: symbiosis. If you managed to get your hooks into a nice and juicy host, it might be wise to set up house instead of doing the slash-and-burn.
Since this started connected to economics, there are probably parallels with roving bandits and stationary bandits.
In the long term, cancer cells die with the organism that hosts them. Viruses also kill people regularly and die with their hosts.
Sure. The impression one gets from this is that an answer to James_Miller’s question is that they frequently fail to solve that problem, and then die.
Individual people die but the species doesn’t die.
https://en.wikipedia.org/wiki/Extinction
OK, but I have lots of different types of bacteria in me. If one type of bacteria doubled the amount of energy it consumed, and this slightly reduced my reproductive fitness, then this type of bacteria would be better off. If all types of bacteria in me do this, however, I die. It’s analogous to how no one company would pollute so much so as to poison the atmosphere and kill everyone, but absent regulation the combined effect of all companies would be to do (or almost do) this.
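The pollution analogy has the structure of a multi-player prisoners’ dilemma, which a toy payoff function makes explicit. This is my own sketch; the budget and consumption numbers are arbitrary.

```python
# Toy commons game: each of k bacterial strains picks a consumption
# level, "restrained" (1 unit) or "greedy" (2 units).  The host survives
# only if total consumption stays within a budget; a dead host yields
# nothing to anyone.

def payoffs(choices, budget=12):
    """Return each strain's energy haul; zero for all if the host dies."""
    total = sum(choices)
    if total > budget:
        return [0] * len(choices)  # host dies, everyone loses
    return list(choices)

k = 10
all_restrained = payoffs([1] * k)        # total 10 <= 12: host lives
one_greedy     = payoffs([2] + [1] * 9)  # total 11 <= 12: defector gains
all_greedy     = payoffs([2] * k)        # total 20 >  12: host dies

print(all_restrained[0], one_greedy[0], all_greedy[0])  # 1 2 0
```

A single defector does strictly better (2 instead of 1), but universal defection kills the host and leaves every strain with nothing, which is exactly the structure of the comment above.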
It’s not obvious to me that it will be better off. There is a clear trade-off here: the microorganisms want to “steal” some energy from the host to live, but not too much, or the host will die and so will they. I am sure evolution fine-tunes this trade-off in order to maximize survival, as usual.
The process, of course, is noisy. Bacteria mutate and occasionally develop high virulence which can kill large parts of host population (see e.g. the Black Plague). But those high-virulence strains do not survive for long, precisely because they are so “greedy”.
YSITTBIDWTCIYSTEIWEWITTAW is a little long for an acronym, but ADBOC for “Agree Denotationally But Object Connotationally”
“Your statement is technically true but I disagree with the connotations if you’re suggesting that …”, I guess. I’m hampered by not being sure whether you’re objecting connotationally to (1) the idea that having an economics PhD is a guarantee of understanding the fundamentals of economics, or (2) the analogy between markets / command economies and unguided evolution / intelligent design, or (3) something else.
“… that economics is why evolution wins in the actual world”?
Even if ‘markets with decentralized decision making often beat intelligent design’, that doesn’t mean decentralized decision making dominates centralized planning (which is what I assume he means by intelligent design).
But ChristianKI is neither claiming nor implying that (so far as I can see); his point is that Eric is arguing “look at these amazing things; they’re far too amazing to have been done without a guiding intelligence” but his experience in economics should show him that actually often (and “often” is all that’s needed here) distributed systems of fairly stupid agents can do better than centralized guiding intelligences.
(I don’t find that convincing, but for what I think are different reasons from yours. Eric’s hypothetical centralized guiding intelligence is much, much smarter than (e.g.) the Soviet central planners.)
An economics PhD who works in academia will meet colleagues from the biology department. They have plenty of opportunities to clarify their misconceptions if they are curious and actually want to learn something.
Falkenstein does not work in academia.
My understanding is that most biologists don’t work on evolution and know little about the mathematical theories of evolution.
Reading this evokes in me physical sensations of discomfort, although it shouldn’t. As others have said, it’s important to study the failures as well as the successes.
My first conclusion is that “rationality” can become an applause light or a part of one’s identity just as easily as anything else. Maybe there are deeper lessons here, though.
Just stopping by to chuckle at the phrase “evokes in me physical sensations of discomfort” from someone whose forum name is that of a worm that crawls into people’s ears and mind-controls them in order to destroy the human race.
It was originally a pun on “Master Mind”, only later I discovered that ridiculous, half-forgotten DC villain… It was nonetheless fitting :)
Anyway, how would you feel if you had to crawl inside the ear of a giant alien? ;)
Well, if nothing else, this is a good reminder that rationality has nothing to do with articulacy.
I created an easy explanation of Bayes’ theorem as a small map: http://immortality-roadmap.com/bayesl.jpg (can’t insert the jpg into a comment)
It is based on the following short story: all men in a port city are either sailors or librarians, and the total male population is 10,000 people, of whom 100 are librarians. 80 per cent of the librarians wear glasses, and 9.6 per cent of the sailors wear glasses. If you meet a man wearing glasses, which is more probable: that he is a librarian or a sailor? The true answer is sailor. The probability that he is a librarian results from the ratio of the two green boxes:
80 / (80 + 950) = 80 / 1030 ≈ 7.8%.
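The same arithmetic can be sketched in a few lines of code (the numbers are taken from the story above; the variable names are my own):

```python
# Bayes' theorem with the librarians-and-sailors numbers from the story.
librarians = 100
sailors = 10_000 - librarians                  # 9,900 sailors

librarians_with_glasses = 0.80 * librarians    # 80 glasses-wearing librarians
sailors_with_glasses = 0.096 * sailors         # ~950 glasses-wearing sailors

# P(librarian | glasses) = glasses-wearing librarians / all glasses-wearers
p_librarian = librarians_with_glasses / (librarians_with_glasses + sailors_with_glasses)
print(round(p_librarian, 3))  # 0.078 — so "sailor" is the better guess
```

The point the map makes visually is the same one the division makes numerically: even though librarians are far more likely to wear glasses, there are so many more sailors that glasses-wearing sailors outnumber glasses-wearing librarians by more than ten to one.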
The digits are the same as in the original EY explanation, but I excluded the story about mammography, as it may not be easy to understand two complex scientific topics simultaneously. http://www.yudkowsky.net/rational/bayes
But you can—see doku
Also: The linked image is very small.
Thanks! I uploaded a higher-resolution image and inserted it into the comment.
You should have done only one of those actions, not both.
When I was thinking about “quantum immortality”, I realized that I am bad at guessing the most likely very unlikely outcomes.
I don’t mean the kind of “quantum immortality” when you throw a tantrum and kill yourself whenever you don’t win a lottery, but rather the one that happens spontaneously, even if you are not aware of the concept. With sufficiently small probability (quantum amplitude) “miracles” happen, and in a thousand years all your surviving copies will be alive only thanks to some “miracle”.
But of course, not all “miracles” are equally unlikely; even if all of them have microscopic probability, still some of them are relatively more likely than others, sometimes by several orders of magnitude. We should expect to find ourselves in the futures that required relatively more likely “miracles”. If you can survive thousand years by two strategies, A and B, where A has a probability of 1/10^50, and B has a probability of 1/10^60, then in a thousand years, if you exist, you should expect to find yourself in the situation A. But I have problem finding out the possible ways and estimating their probabilities. I mean, even if I find some, there is still a chance that I missed something else that could be relatively more likely than the variants I have considered, which means that all my results are unlikely even in the “quantum immortality” future.
(This is also an objection against the constructions where people plan to kill themselves if they don’t win a lottery. Push the probability too far, and your suicide mechanism will “miraculously” fail, because some “miracle” had greater probability than the outcome you wanted to achieve. The mechanism will fail, or some intruders will break into your quantum-suicide lab, or an alien attack will disable all quantum detectors on the Earth.)
I cannot even answer a simpler question: In the “quantum immortality” future, should you expect to find yourself more exceptional than other people, or not?
The first idea was that the older you get, the more lucky you are. Even now, think about all those people who died before for various reasons—you were not one of them! Also a “miracle” saving your life seems more likely than a lot of “miracles” saving lives of many people, because the improbability would grow exponentially with the number of people saved. Thus you should expect yourself to be saved by “miracles” while observing other people not having the same luck.
Or maybe I’m wrong. My crazy example was a nuclear war starting and a bomb dropping on your city, just above your head. How can you survive that? Well, maybe an angel can randomly generate from particles in the sky, and descend to protect you with its wings. This scenario is more likely than a scenario where thousands of angels generate the same way and protect everyone. So a personal “miracle” seems more likely than a “miracle” for everyone. -- But of course now I am privileging an unlikely solution, where much more likely solutions exist; namely, the bomb could malfunction, and the whole town could be saved. So maybe the “miracle” saving everyone is actually more likely than a “miracle” saving only me.
Similarly, a situation a thousand years later where I live because cheap immortality for everyone was invented in 20?? seems more likely than a situation a thousand years later where only I lived for a thousand years because of a series of “miracles” that happened specially to me (or maybe just one huge “miracle”, such as me randomly changing into a vampire). So maybe it is possible to experience “quantum immortality” and still be just an average muggle.
Returning to the original question, what is the most likely way to survive thousand years? Million years? 10^100 years?
(EDIT: There is also a chance that the whole concept is completely confused; for example that “the average living copy of yourself in a thousand years” is a wrong way to predict your most likely personal experience in the future. Instead, the correct approach may be to talk about your typical person-moment in space-time, because there is no reason to privilege the future. I mean, you already know that many of your copies will not have person-moments in far future. And maybe your expected person-moment is plus or minus what you are experiencing now, precisely because you are not immortal.)
One should look at the most probable outcomes:
You are cryopreserved and resurrected later - 1 per cent (in my case)
You are resurrected by a future AI based on your digital footprint - also around 1 per cent
You live in a simulation with an afterlife. The probability can’t be estimated but may be very high, like 90 per cent.
These three outcomes are the most probable futures if you survive death, and they will look almost identical: you are resurrected in the future by a strong AI. Only outcomes with even higher probability could change the situation, but they are unlikely given that the probability of these is already high.
All other outcomes are much less probable given known priors about the state of tech progress.
Using wording like “blobs of amplitude” makes the whole story overcomplicated.
Why do you give this such a low estimate? Because you’re not signed up?
I signed up with the Russian Cryorus. My doubts are mostly about whether I will be cryopreserved if I die (I live alone, and if I suddenly die, nobody will know for days) and about the ability of Cryorus to keep existing for the next 30 years. If I lived in Phoenix and had a family dedicated to cryonics, I would raise the probability of success to 10 per cent.
This sounds interesting, but I don’t quite get what you mean by saying that many copies won’t have person-moments in the future or how this leads to non-immortality. Can you elaborate?
In general, I agree that estimating these probabilities is very difficult. I suppose the likeliest ways may, in any case, be orders of magnitude more likely than others; meaning that if QI works and, say, resurrection by a future AI or hypercivilization is the likeliest way to live for a hundred million years, the other alternatives may not matter much. But it’s hard to say anything even remotely definite about it.
I am confused a lot about this, so maybe what I write here doesn’t make sense at all.
I’m trying to take a “timeless view” instead of taking time as something granted that keeps flowing linearly. Why? Essentially, because if you just take time as something that keeps flowing linearly, then in most Everett branches you die, end of story. Talking about “quantum immortality” already means selectively picking the moments in time-space-branches where you exist. So I feel like perhaps we need to pick randomly from all such moments, not merely from the moments in the future.
To simplify the situation, let’s assume a simpler universe—not the one we live in, but the one we once believed we lived in—a universe without branches, with a single timeline. Suppose this classical universe is deterministic, and that at some moment you die.
The first-person view of your life in the classical universe would be “you are born, you live for a few years, then you die, end of story”. The third-person / timeless / god’s view would be “here is a timeline of your person-moments; within those moments you live, outside of them you don’t”. The god could watch your life as a movie in random order, because every sequence would make sense, it would follow the laws of physics and every person-moment would be surrounded by your experience.
From god’s point of view, it could make sense to take a random person-moment of your life, and examine what you experience there. Random choice always needs some metric over the set we choose from, but with a single timeline this is simple: just give each time interval the same weight. For example, if you live 80 years, and spend 20 years in education, it would make sense to say “if we pick a random moment of your life, with probability 25% you are in education at that moment”.
(Because that’s the topic I am interested in: what a typical random moment looks like.)
Okay, now instead of the classical universe let’s think about a straw-quantum universe, where the universe only splits when you flip the magical quantum coin, and otherwise remains classical. (Yes, this is complete bullshit. A simple model to explain what I mean.) Let’s assume that you live 80 years, and when you are 40, you flip the coin and make an important decision based on the outcome, one that will dramatically change your life. During the first 40 years you only had one history, and during the second 40 years you had two histories. In first-person view, it was 80 years either way: 1⁄2 the first half, 1⁄2 the second half.
Now let’s again try the god’s view. The god sees your life as a movie containing 120 years in total: 40 years of the first half, and 2× 40 years of the second half. If the god is trying to pick a random moment in your life, does it mean she is twice as likely to pick a moment from your second half than from your first half? -- This is only my intuition speaking, but I believe that this would be a wrong metric. In a correct metric, when the god chooses a “random moment of your life”, it should have a 50% probability of being before you flipped the magical coin, and a 50% probability of being after you flipped it. As if somehow having two second halves of your life only made each of them half as thick.
Now a variation of the straw-quantum model, where after 40 years you flip a magical coin, and depending on the outcome you either die immediately, or live for another 40 years. In this situation I believe from the god’s view, a random moment of your life has 2⁄3 probability to be during the first 40 years, and 1⁄3 during the second 40 years.
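The toy coin-flip model can be put in a few lines of code (this is my own formalization of the “god’s view” metric described above, not anything from the original comment): weight each branch by its probability, and weight each moment by its branch’s weight times its duration.

```python
# Toy "god's view" measure over person-moments.
# Branch A: the coin says "die at 40"  -> 40 years of moments, weight 0.5
# Branch B: the coin says "live to 80" -> 80 years of moments, weight 0.5
branches = [
    {"weight": 0.5, "years": 40},   # dies at the coin flip
    {"weight": 0.5, "years": 80},   # lives another 40 years
]

# Measure of a random moment falling in the first 40 years:
# both branches contribute their full weight for years 0-40.
first_40 = sum(b["weight"] * min(40, b["years"]) for b in branches)
# Only the surviving branch contributes moments after year 40.
after_40 = sum(b["weight"] * max(0, b["years"] - 40) for b in branches)

total = first_40 + after_40
print(first_40 / total, after_40 / total)  # 2/3 vs 1/3, as in the text
```

With zillions of branches instead of two, the same weighting is what makes thousand-year-old moments carry almost no measure even though some branch always contains them.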
If you agree with this, then the real quantum universe is the same thing, except that branching happens all the time, and there are zillions of the craziest branches. (And I am ignoring the problem of defining what exactly “you” means, especially in some sufficiently weird branches.) It could be, from god’s point of view, that although in some tiny branches you live forever, still a typical random you-moment would be e.g. during your first 70 years (or whatever is the average lifespan in your reference group).
And… this is quite a confusing part here… I suspect that in some sense the god’s view may be the correct way to look at oneself, especially when thinking about anthropic problems. That the typical random you-moment, as seen by the god, is the typical experience you have. So despite “quantum immortality” being real, if the typical random you-moment happens in the ordinary boring places, within the 80 years after your birth, with a sufficiently high probability, then you will simply subjectively not experience the “quantum immortality”. Because in a typical moment of life, you are not there yet. But you will… kinda… never be there, because the typical moment of your life is what is real. You will always be in a situation where you are potentially immortal, but in reality too young to perceive any benefits from that.
In other words, if someone asks “so, if I attach myself to a perfectly safe quantum-suicide machine that will immediately kill me unless I win the lottery, and then I turn it on, what will be my typical subjective experience?” then the completely disappointing (but potentially correct) answer is: “your typical subjective experience will always be that you didn’t do the experiment yet”. Instead of enjoying the winnings of the lottery, you (from the god’s perspective, which is potentially the correct one) will only experience getting ready to do the experiment, not the outcomes of it. If you are the kind of person who would seriously perform such experiment, from your subjective point of view, the moment of the experiment is always in the future. (It doesn’t mean you can never do it. If you want, you can try it tomorrow. It only means that the tomorrow is always tomorrow, never yesterday.)
Or, using Nietzsche’s metaphor of “eternal recurrence”: whenever you perform the quantum-suicide experiment, your life will restart. Thus your life will only include the moments before the experiment, not after.
For the “spontaneous” version of the experiment, you are simply more likely to be young than old, and you will never be a thousand years old. Not necessarily because there is any specific line you could not cross, but simply because in a typical moment of your life, you are not a thousand years old yet. (From god’s point of view, your thousand-years-old moments are so rare that they are practically never picked at random, therefore your typical subjective experience is not being a thousand years old.)
Of course on a larger scale (god sees all those alternative histories and alternative universes where life itself never happened), your subjective measure is almost zero. But that’s okay, because the important things are ratios of your subjective experience. I’m just thinking that comparing “different you-moments thousand years in the future” is somehow a wrong operation, something that doesn’t cut the possibility-space naturally; and that the natural operation would be comparing “different you-moments” regardless of the time, because from the timeless view there is nothing special about “the time thousand years from now”.
Or maybe this is all completely wrong...
You may be getting at the truth here, but there is a simpler way to think about it.
Quotation from Epicurus:
“Why should I fear death? If I am, then death is not. If death is, then I am not. Why should I fear that which can only exist when I do not?”
Whether you pick your point of view, or a divine point of view, if you pick any moment in your life, random or not, you are not dead yet. So basically you have a kind of personal plot armor: your life is finite in duration but is an open set of moments, in each of which you are alive, and which does not have an end point. Of course the set has a limit, but the limit is not part of the set. So subjectively, you will always be alive, but you will also always be within that finite period.
Because there are other things associated with death, such as suffering from a painful terminal illness, where the excuse of Epicurus does not apply. With things like this, “quantum immortality” could potentially be the worst nightmare; maybe it means that after a thousand years, in the Everett branches where you are still alive, in most of them you are in a condition where you would prefer to be dead.
I agree, except that you are not actually refuting Epicurus: you are not saying that death should be feared, but that we should fear not dying soon enough, especially if we end up not dying at all.
Maybe I’m misunderstanding something. How do we know this?
Either it is finite in duration, or mostly finite in duration, as Viliam said. These come approximately to the same thing, even if they are not exactly the same.
Very interesting insight. It does feel like it solves the problem in some way, and yet in a quantum version as specified, it seems there must be a 1000-year-old Viliam out there going “huh, I guess I was wrong back on Less Wrong that one time...” Can we really say he doesn’t count, even if his measure is small?
He certainly counts for himself, but probably doesn’t for Viliam2016.
Viliam2016 is probably relatively young, healthy and living in a country with a fairly high quality of life, meaning that he can expect to live for several decades more at least. But as humans, our measure diminishes fairly slowly at first, but then starts diminishing much faster. For Viliam age 90, Viliam age 95 may seem like he doesn’t have that much measure; and for Viliam age 100, Viliam 101 may look like an unlikely freak of nature. But there’s only a few months difference there. So at which point do the unlikely future selves start to matter? (The same applies to younger, terminally ill Viliams as well.)
Behold and despair.
People optimizing for popularity and fame are most famous. What a surprise.
It holds before us a mirror of the ‘progression’ of civilization.
ADDED: A friend mentioned that today’s predominance of sports (and actors before that) is more an indication of contemporary interests that will be forgotten before long. Probably the Athenians also had sports stars in their time who were talked about a lot.
Amazingly depressing! Still, amazing.
Compelling evidence that one should name one’s son ‘Stephen’.
Not really surprising. The most popular anything will always be whatever pleases the lowest common denominator.
I have large PR problems when talking about rationality with others unfamiliar with it, with the Straw Vulcan being the most common trap conversation will fall into.
Are there any guides out there in the vein of the EA Pitch Wiki that could help someone avoid these traps and portray rationality in a more positive light? If not, would it be worth creating one?
So far I’ve found: how rationality can make your life more awesome, rationality for curiosity’s sake, rationality as winning, PR problems, and the contrary view that rationality isn’t all that great.
Not a guide, but I think the vocab you use matters a lot. Try tabooing ‘rationality’, the word itself mindkills some people straight to straw vulcan etc. Do the same with any other words that have the same effect.
Revisiting past conversations I think this is exactly what has been happening. When I mention rationality, reason, logic it becomes a logic v. emotion discussion. I’ll taboo in future, thanks!
What exactly are you doing that you have PR problems?
Are you simply relabeling normal conversations with friends as PR?
Something like,
A: I’ve been reading a lot about rationality in the last year or two. It’s pretty great.
B: What’s that?
A: Explanation of instrumental + epistemic OR Biases a la Kahneman
B: Sounds dumb. I do that already.
A: I’ve found it great because X, Y, Z.
B: I think emotion is much more important than rationality. I don’t want to be a robot.
Yes. Sorry for the lack of clarity.
The problem isn’t simply clarity. The frame of mind of treating a conversation with your friends as PR is not useful for getting your friends to trust you and positively respond to what you are saying. If you do that, it’s no wonder that someone thinks you are a Straw Vulcan because that mindset is communicating that vibe.
That said, let’s focus on your message. You aren’t telling people that you are using rationality to make your life better. You are telling people that you read about rationality. That doesn’t show a person the value of rationality.
If I want to talk about the value of rationality, I could talk about how I’m making predictions in my daily life and the value that brings me. I can talk about how great it is to play double crux with other rationalists and actually have them change their mind.
If I want to talk about the effect it has on friends, I can talk about how a fellow rationalist who thought he only cared about the people he’s interacting with used rationality techniques to discover that he actually cares about rescuing children in the third world from dying from malaria.
If I want to talk about society, then I can talk about how the Good Judgement Project outperforms CIA analysts who have access to classified information by 30%. I can talk about how better predictions by the CIA before the Iraq war might have stopped the war, and therefore really matter a great deal. Superforecasting is a great book for having those war stories.
In this case it is. I believe I have been less than clear again.
Agreed—but I’ve never done that. The conversations are ordinary in that I share rationality in the same way I would share a book or movie I’ve enjoyed. It is “I enjoy X, you should try it; I bet you would enjoy it too”, as opposed to “I want to spread X and my friends are good targets for that.” I literally meant I relabeled an ordinary conversation as PR, not that I was in the spread-rationality mindset. My brain did a thing where,
‘I’m having trouble sharing rationality with friends in a way that doesn’t happen with my other interests. I bet other rationalists have similar problems. I wonder if there is any PR material on LW that might help with this.’
… and boom my brain labels it as a PR problem. I’m trying to not get caught up in the words here, do you follow my meaning?
Your recommendations on talking about the value rationality brings me look good. Thank you for them.
We don’t enjoy a topic as diverse as rationality in the same way we enjoy a book or movie. A book or movie is a much more concrete experience.
You could speak about individual books like Kahneman’s instead of using the label rationality.
There’s actually a yearly cryonics festival held in Nederland, a little town near Boulder, Colorado. However, according to a relative who’s been to the festival, the whole event is really just an excuse to drink beer, listen to music, and participate in silly cryonics-themed events like ice turkey bowling.
So, it seems like the type of people who attend this festival aren’t quite the type who would be likely to sign up for cryonics after all, though I might be able to further evaluate this next year if I’m able to attend the festival while visiting a relative in the area. (This could be relevant to LW in theory if the festival ends up being a good place to advertise cryonics, though I’m not suggesting that this is likely to be the case).
For almost all subjects X, an X festival is an excuse to drink beer, hang out, and do silly X-themed stuff.
This should not be taken to mean that it has nothing to do with X, or that it adds no value toward it. What you’re really getting out of it is an opportunity to meet other people who’re into the subject, or at least well-disposed enough to show up to a festival advertised as such.
Meta: I messed up the dates for the last two OT, but it’s back to normal now.
Anyone know of the best vitamins/minerals/supplements to take for promoting bone growth and healing?
It seems calcium and vitamin D are pretty commonly recommended. Any rational dissent on this? Any others that should be taken?
Theoretically, you should actually measure your blood levels of vitamin D and then get them to optimum levels and maintain them there. If you don’t know what your baseline is, you don’t know how much to take.
Other than that, vitamins A and K are synergistic with D.
What do you think is the relative advantage of LessWrong in nutritional advice for a specific medical problem over generic net info or your doctor? Not criticizing the choice, this isn’t a rhetorical question and I might have done something similar, I’m genuinely curious.
The signal to noise ratio on some advice topics is absurdly bad on the internet overall for vitamins/minerals and general health topics (exercise, diet, etc.). A few people around here have actually made the effort to study them in-depth and have gotten much better information than would be readily available otherwise. I’m primarily thinking of Scott’s wheat, fish, and Vitamin D posts at this point, though I’ve seen others around here in the past.
The relative advantage of LessWrong is that it is free, and contains many smart people with variable knowledge bases. There is no reason to believe that it will always (or often) be better than your doctor, but there very little cost to asking and the potential gain outweighs the minimal cost.
Scott Alexander wrote a long post about how his clinic fails to stock Melatonin because there are no drug reps encouraging them to stock any Melatonin.
If you ask most doctors for Vitamin D3 supplementation, I don’t think they give the correct answer of 2000+ IU per day taken in the morning.
People on LW might be both better at reading studies and evaluating the statistics, and have spent more time researching a particular issue than the average doctor.
As was pointed out to me recently: (general practitioner) doctors are very good at general health; specialists are very good at specific medicines; but if you want to spend 200 hours reading up on everything about one molecule, you can probably overtake their knowledge.
What does “general health” mean? Doctors are not good at keeping people healthy and each failure of health is specific, not “general”.
A General Practitioner doctor deals with all health ailments. As a consequence, they are not trained to be experts in all health ailments; they are trained in the first steps of dealing with all ailments (which is a difficult endeavour).
I made up the 200-hour figure, but consider that one subject for one semester of university is expected to cost 150-250 hours, depending on the details. Let’s say I underestimated at 200, and it’s actually more like 400-500 hours of reading up and understanding everything about one molecule to overtake the knowledge of health professionals.
That’s a defense of the claim that doctors aren’t experts at everything. It’s not evidence for the claim that doctors are very good at general health.
Alright; I can wear that. I think I meant to say “general practitioner doctors are not very good at oddly specific health”; I have adjusted the post above. I did not mean to make that claim.
“General health” is a health intervention which would, if followed, mean that a relatively large number of patients’ lives would be improved over how they are now.
8-0 I am more confused...
I don’t know how good this holds up, but: I noticed that in conversations I tend to barge out questions like an inquisitor, while some of my friends are more like story tellers. Some, I can’t really talk with, and when we get together, it’s more like ‘just chilling out’.
I was wondering (a) if people tend to fall into some category more than others, (b) if there are more such categories, (c) if overemphasis on one behavior is a significant factor in my (and presumably others’) social skill deficit.
If the last is true, I would like to diversify this portfolio..
Is there some kind of psychological theory I should be aware of?
In my search for underutilized venues, where should I go?
Where could I find a large corpus of people having real conversations, preferably followed over a long term?
I found something a long time ago in some PUA materials, but unfortunately I don’t remember the source anymore. The central idea was this:
People don’t tell random stories. (At least the socially savvy don’t.) People, consciously or not, select the stories that support the persona they want to project. So the rational approach would start by making a list of attributes you want to associate with yourself, and then selecting / modifying / inventing the stories that provide fictional evidence that you have these attributes.
A typical PUA advice would probably recommend this set of attributes for a heterosexual man:
your life is full of adventures;
you are able to overcome problems (you have the skills, and you stay mentally stable in adversity);
you have loyal friends, who consider you their natural leader;
women want you (this should not be a focus of the story, merely a background assumption).
Now your task is to create a story that is interesting to listen to and contains all these attributes. For example:
“A few years ago you did something adventurous with your charming girlfriend (tried to travel across the desert in a car; or took a hike through an exotic jungle). Then something dangerous happened (your car hit a landmine that destroyed its motor; in a supposedly safe part of jungle you met a tiger). You were smart and quick enough to avoid the immediate danger (you catapulted yourself and your girlfriend from the car; you took the girlfriend and pushed her up on a tree, then you climbed up too). Your girlfriend was super scared, but you remained cool and said “honey, I don’t know how, but trust me, we are going to solve this, and it will be a cool story afterwards”. You demonstrated some more skills (built a guitar from the remains of the car; killed a squirrel on the tree and cooked it for a dinner). Then you called your good friends, who owe you for saving their lives in the past—but that’s another story, you could offer to tell her tonight at your own place, if she is interested—and they immediately went there to help you, because you are a very high priority for them. Then you spent the rest of the day partying together and having a lot of fun.” (Also you need some good explanation for why you are not with the amazing girlfriend anymore. She was a student from an exotic country, and she returned home to follow her career.)
If you are too honest to invent stories, just filter your own experience and find situations where you exhibited the desired traits. Feel free to slightly exaggerate your role; most people do.
In the context of communication categories (a, b, and others) it may be particularly useful to view conversations as persona building (as above), because there is a subset of people who do not tell stories about what they have done, but tell you about what they are doing—or simply do it. The person who shows up with Google Cardboard or TARDIS nail polish is signaling strongly without telling any stories. Depending on your goals, this may be a more effective way of persona building than learning to tell stories.
On the other hand, if you want to improve conversational skills, you might instead focus on finding productive questions to ask—it is very hard to determine what stories people will enjoy, but most people will enjoy telling you about themselves, and this appears to be true even if you ask very simple questions.
This seems pretty good.
It’s probably not that useful to think about this in terms of categories. It would be better to think about what makes a conversation great and to find out what is missing when you end up ‘just chilling out’.
Let me know what you perceive to be the difference between your conversations that work and the ones in which you end up just chilling out.
Here’s some background information to help you out with that. Conversations are a type of speech exchange system that involves turn taking. When you are having your turn, i.e. speaking, this is referred to as holding the conversational floor. A conversation that progresses past the initial stage, referred to as small talk, will have longer turns in which the content is free flowing and natural. One of the main things that differentiate conversation from other speech systems like interviews is that the turns are best when they are somewhat balanced. Conversations thrive when the turns are natural, build on previous turns and allow multiple avenues for future turns.
Based on what you have said, I would presume that your conversations that don’t work tend to involve short turns as you keep asking them questions and they give short answers. When conversations sag and die, it will most likely be because of minimal responses, i.e. short turns, and no free information that the other person can use to take a future turn. In fact, this is how almost all conversations end. That is, with the exchange of ritualistic small turns, e.g. “Ok, cya” → “Yeh, bye”
In general, I think that a good conversationalist is someone who is good at doing conversational work, which is all about ensuring that the conversation will continue and that the turns will become more expansive and natural. Some aspects of conversational work include:
Asking questions (preferably open ones which lead to longer turns or follow up questions which show that you’re listening and care)
Providing answers
Introducing new topics
Picking up topics
Telling good stories
Helping others tell good stories
Helping others to be able to ask you questions, i.e. offering lots of free information. For example, if asked what you do, it is good to provide enough information to allow them to expand on what you have said. Don’t just tell them your role; tell them what you do day to day and why you love it, or don’t.
Conversation is a bit of a chaotic act. People sometimes cling to the ritual of small talk. It seems like you’ve done without that. Why frantically skate back to the norm? Are you afraid? Rewrite the rules, be a leader, be the inquisitor.
I figure this is a long shot, but: I’m looking for a therapist and possibly a relationship counselor. I’d like one who’s in some way adjacent to the LW community. The idea is to have less inferential distance to cross when describing the shit that goes through my head. Finding someone with whom there isn’t a large IQ gap is also a concern.
Can anyone give me a lead?
(also, the anonymous Username account appears to have had its password changed at some point. Hence this one.)
I can recommend Shannon Avana. Don’t mind the horrible web page, she’s great. Not the archetypal LW rationalist, but she is familiar with the community and culture. Very smart, too.
I remember a Shannon Friedman. Same person, new name?
(thanks for the disclaimer about the web page...it really is horrible)
Yes. She also used to have a nicer web page previously.
Is this some sort of counter-signaling then?
I would expect that LW readers are not the target group.
Yeah, you need to use Username2 nowadays.
What is your preferred backup strategy for your digital life?
Before reading the responses, I thought this comment meant “how are you preserving information about yourself so that an upload copy of you could eventually be constructed”.
An online automatic backup service (www.code42.com/crashplan)
I use mega.nz to back up files on my computer. I use the service because it has client-side encryption and provides 50GB of free storage.
I use Evernote for all information like notes or articles I read that I want to remember.
Anki has its own web server where automatic syncs happen. Gmail also automatically keeps its data in the cloud.
External HDD
“Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it”—Linus Torvalds
I just keep anything I couldn’t re-download or re-generate on a couple days’ notice in my Dropbox folder.
What are examples of complex systems people tend to ignore, even though they interact with them every day? I am thinking of stuff like your body, the local infrastructure, your computer, your car: stuff which you just assume works, and which one could probably gain from trying to understand.
What I am going for here is a full list of things people actually interact with, hoping to have some sort of exhaustive guide for ‘Elohim’s Game of Life’ and its mechanisms, like one would have on a game’s wikia.
The construction of materials used to build the buildings you spend time in.
Governments and large organizations that require lots of resources, jobs, and work done just to do things like make sure you have a street in front of your house that is relatively clean.
Water processing.
Waste disposal.
The advanced nature of the basic chemicals you gain everyday use from. This includes soaps, detergents, food, water purification, refrigerator materials, internal cooling, internal heating, bleaches, and all sorts of other things that were tested and developed in labs.
Preservatives in all your food.
We still live in a society strongly maintained by paper. (We’ve digitized some of it, but are still heavily reliant on paper.) So the entire paper industry and the infrastructure involved are important to you even if all the paper you see on a regular basis is your mail.
More complex:
Religion, both modern and past. We are all strongly influenced by the religious dogmas of the past.
Widely shared social structures (past and present)
Norms, mores, etc. (past and present)
Popular philosophies (past and present)
Popular ethical systems (past and present)
Memetics, and especially the major scaffolding upon which our memes reside (which is partly part of reality and partly part of our brain structure, etc.)
A few more: (I’m just having fun trying to figure more out at this point)
Language
Mathematics
Long term medical advances and study that influence what food people are allowed to sell you (the F in FDA)
Commercial art and aesthetics, which influence the literal shape of all products that surround you every day (from the curves on the edges of your monitor to the grooves and overall shape of the water bottle you drink from).
Humankind’s overall attempt at dealing with gravity (which defines the way we walk, create chairs and desks, build objects like toasters and fridges, fortify buildings, etc.)
The above could likely fall under an umbrella of something like “the way in which we design our world around human limitations, enabling ourselves to accomplish goals while limited to human movement and shape.” I’m imagining the creation and shape of hammers, screwdrivers, cups, and pretty much everything that could have an alternate shape if our biology (talons instead of hands?) were different.
Atmosphere and nature in general.
On the obsession with dead ideas.
I’m really skeptical of the idea of “dead ideas”. This was recently discussed on the IRC, in the context of Eugenics. Embryo selection and soon direct genetic modification will be possible. But eugenics has a really bad reputation because it’s associated with Nazism. “It’s been tried!” It’s now one of those dead ideas.
Another example might be messing with the environment, including eliminating mosquitos or engineering the climate to lessen the damage of climate change. “It’s been tried!” After all, there are many examples of humans interfering with the environment and screwing it up.
Nuclear power is basically a taboo issue because a few previous generation plants have failed in the past. New ones might be much safer, but it doesn’t matter, the subject is off the table.
In general there are very few ideas which “have been proven false by incontrovertible evidence”. The real world is complex and has a lot of variables, and drawing strong conclusions from single historical examples is not rational. Having “dead ideas” is not a desirable thing.
The article’s main beef seems to be with populist politicians. Which I think we can all agree are terrible. Whether or not they happen to support dead ideas is really irrelevant.
Then when the article gets to politicians supporting dead ideas, the examples they give are not dead at all. Not that they are correct, just that they are far from “definitely proven wrong” like nuclear power. E.g. the example of a temporary ban on Muslim immigrants. Now I think you can dispute this morally, but how is this a thing that has been tried and failed before? When has it been tried before, and how did it fail? The article certainly doesn’t provide any citations. The other examples are similarly weak and weakly argued.
Right, the author makes no effort whatsoever to actually argue these points—he only needs to call them “dead” i.e. unfashionable.
Um. Deliberate climate engineering hasn’t been tried, to the best of my knowledge. As to eliminating malaria mosquitos, yes, it has been tried and it was very successful. That’s why Florida is America’s resort and not a tropical swamp unfit for human habitation.
The examples usually given are more general “messing with the environment”, e.g. Australia’s introduction of Cane Toads, China’s campaigns to eradicate certain pests, or various silly things the Soviet Union did.
As for mosquitos, I don’t mean just controlling their populations. I’m talking about eradicating them to extinction, e.g. by spreading engineered self-destruct genes through the population. That is extremely controversial for some reason.
You know what’s the most radical “messing with the environment” thing that humans ever did?
It’s called agriculture.
I think it turned out pretty well.
And that reason is unclear to you?
Well, that remains to be seen.
No, I don’t think it remains to be seen.
How large a human population can Earth support without agriculture, do you think?
That’s the point of the article: agriculture allowed the Earth to support a vastly larger human population than it could have otherwise, but at a cost.
Personally I’m more optimistic than the author of the article I linked that the median quality of life of a human on Planet Earth will ultimately exceed the median quality of life of a human on an Earth where agriculture had never been developed—in fact I think there’s a good chance that that’s already the case. But I don’t think it’s completely obvious, for reasons the author describes in detail.
Your claim was that it “remains to be seen” (whether agriculture turned out pretty well). I don’t think it stands. Everything has a cost.
I am aware of the Jared Diamond arguments, but note that they are based on comparison between ancient hunter-gatherers and ancient farmers. Contemporary agriculture is a wee bit different—in particular, note the diversity of food it provides, as well as its ability to deliver food out of local season.
What are, if any, the contemporary names for each of Francis Bacon’s idola and/or Roger Bacon’s offendicula?
His famous 4-part division of “idols” divides them according to their origins rather than what they actually are. That particular division hasn’t been found terribly useful, and I don’t think there are contemporary names for his classes.
(They are: “idols of the tribe”, meaning errors common to humankind, which is what he means by the “tribe”; “idols of the cave”, meaning errors idiosyncratic to particular individuals, each conceived as inhabiting his own private cave; “idols of the marketplace”, meaning errors that result from interactions between people—conceived as meeting in the marketplace—rather than within a single individual working in isolation; and “idols of the theatre”, meaning errors we inherit from incorrect philosophical theories, which Bacon thought were like stage plays representing worlds that differ from the real world.)
The offendicula were Roger Bacon’s, not Francis Bacon’s. (Deference to authority; custom and convention; popular prejudice; covering up our ignorance with a show of wisdom.) They are specific failure modes rather than broad classes of error. I don’t think they have particular standard contemporary names.
[EDITED to do a bit more indicating of where Francis Bacon’s odd names come from.]
Sorry. Fixed the Bacons.
Know about geospatial analysis?
I’m building a story involving Jupiter’s moon, Io. There’s a geological map of it on the bottom half of this image. What I’d like to figure out is which sites have the widest variety of terrain types within the shortest distances? Or, put another way, which sites would be most worth dropping an automated factory down on, as they’d require the minimal amount of road-building to get to a useful variety of resources? Or, put a third way, what’s the fewest number of sites that would be required to have a complete set of the terrain-types within, say, a hundred kilometres of each site?
I can sort of see how a computer program might run some colour-detection on that image to figure out a map, and then run some algorithms about the value of each pixel—but that’s a notch or two above my programming skill, and I don’t think I have the time to both improve my programming skill and keep working on the story.
Any suggestions?
Thank you for your time.
https://dl.dropboxusercontent.com/u/67168735/io%20geo.pdf gives a more detailed and thorough map.
You don’t need to do any figuring out—you already have an image. The image is a bitmap, just directly read the value of each pixel.
Your three questions are asking for different things and (1) and (2) are underspecified—you will be trading off “variety” against closeness and you have to get precise about that.
For a not-too-large map, it’s probably easiest to just brute-force it. You can be sophisticated about it, but CPU time is much, much cheaper than human time.
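The brute-force approach can be sketched in a few lines. This is an illustrative sketch only: it assumes you have already reduced the map image to a grid of terrain codes (e.g. by matching each pixel’s colour against the map legend), and the function name and the toy map below are made up for the example.

```python
# Sketch: score every candidate site by how many distinct terrain
# types fall within a given radius. grid[y][x] holds a terrain code
# (obtained, e.g., by matching pixel colours against the map legend).

def variety_scores(grid, radius):
    """Return [((x, y), n_distinct_terrain_types), ...] for every
    grid cell, sorted with the highest-variety sites first."""
    h, w = len(grid), len(grid[0])
    scores = {}
    for cy in range(h):
        for cx in range(w):
            seen = set()
            # Scan the bounding box of the circle around (cx, cy).
            for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
                for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                    if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                        seen.add(grid[y][x])
            scores[(cx, cy)] = len(seen)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: a 4x4 map with three terrain types.
demo = [
    ["lava", "lava", "plains", "plains"],
    ["lava", "mountain", "plains", "plains"],
    ["lava", "mountain", "mountain", "plains"],
    ["lava", "lava", "mountain", "plains"],
]
best_site, n_types = variety_scores(demo, radius=1)[0]
```

The map’s scale bar would give you kilometres per pixel, turning `radius` into the “within a hundred kilometres” question; and the third question (the fewest sites covering all terrain types) is a set-cover problem, which you could likewise brute-force over a shortlist of high-variety candidate sites.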
I’m still trying to figure out which questions would provide the most useful answers for my purposes. If you don’t mind my asking, what questions do you think are the ones I should be trying to get answers to?
I don’t know your purpose. What are you trying to achieve?
A reasonably realistic set of worldbuilding and background details and constraints, within which to build a story according to the principles of “hard SF”. While I could get away with simply not getting into specific details about what locations on Io are industrialized, the more details I have to work with, the more interesting plot points I can build based on those details. Put another way, I find it less important to know what a technology or character can do than to know what they can’t do.
In the story, a seed-factory was dropped on Io from another of Jupiter’s moons, because Io’s geology allows for easy access to a number of different minerals, which allow for the development of a number of industrial processes that would have been economically infeasible on the other moon, thus allowing for more rapid production of certain useful tools and products, such as computer chips. In my current story draft, at a certain point, there will be an ‘Io Site 1’ which has spread into ‘Io Site 1b’, ‘Io Site 1c’, and the like; and an ‘Io Site 2’ elsewhere to take advantage of a different collection of resources. Building entirely new sites is more expensive than expanding pre-existing ones, and maintaining long roads is moderately expensive, so it would be useful for my story-building to know how few sites would be necessary to make all of Io’s various kinds of resources available, and/or where some good sites with an unusual variety of resources are located.
A resource I’m not entirely sure is sufficiently relevant suggests that Gish Bar Mons, https://tools.wmflabs.org/geohack/geohack.php?pagename=Gish_Bar_Patera&params=16.18_N_90.26_W_globe:io_type:landmark_source:gpn , might be a site for Io Site 1, and Loki Patera, https://tools.wmflabs.org/geohack/geohack.php?pagename=Loki_Patera&params=13_N_308.8_W_globe:io_type:mountain , for Io Site 2; but Io’s got over 40 million square kilometres to consider, and I’m more than willing to use other sites, if I can figure any out.
So it’s basically economics?
Then you need to assign values to resources, costs to factories and roads, and optimize for profit :-)
You can put aaronsw into the LessWrong karma tool to see Aaron Swartz’s post history, and read his most highly rated comments. I bet some of them would be good to spread more widely.
http://www.ibiblio.org/weidai/lesswrong_user.php allows you to sort comments by karma points.
I didn’t find any results for that username, but I did get some for aaronsw.
Fixed typo, thanks!
idea log
If you want a corporate job and don’t know what kind, try going to your CBD (central business district) at around 6pm and watching who looks the happiest. Ask them what they do, and maybe even for an interview, or to take your CV if you’re a job searcher.
a commenter on 7chan concisely making a very epistemically and instrumentally complex claim: >‘you only look for advice that confirms what you were going to do anyway’
cold reading of LessWrongers
How can more valuable social contributions capture that value in the form of economic rent, income or other forms of non-psychological receipts? By spreading the Hindu karma economy?
Is there a yelp or ‘rate my lawyer/teacher/politician’ for psychological therapists?
Ask people around you this question. The responses I’ve been getting are fascinating and have been a boon to my friendships: ‘what’s something you don’t get enough of from people around you?’
When you’re not sure if a certain job, training or qualification will be good for your career capital, shift the dates in your CV back by the number of years the thing takes, then change your number, and do a randomised control trial of your CV with and without that qualification. You can even test multiple alternatives in one sitting. Perhaps an agency could be set up to market-test career paths on behalf of clients. Of course, it might be short-sighted to changes in market conditions, but it’s a far more realistic insight into the career capital of a certain qualification than the bullshit hearsay or prestige which seems to be most people’s default.
Julian Blanc from RSD points out that he doesn’t scan (‘check out’) people, because he attracts rather than pursues, and is confident that if anything is going to happen, he doesn’t have to play some kind of flirty-eye or physical-distance game. I imagine it conserves a lot of willpower too.
Teach for >insert poor foreign country here< programs are really poorly paid: $6000 AUD max incl. provisions for rent for one I looked at. I doubt it’s an effective career path for givers.
Can someone point me to that program that prints every LessWrong comment you’ve made on one page? It’s Wei Dai’s creation, IIRC.
The self-interest rationale for effective altruism probably dips into self efficacy, compassion, gratitude, meaning and community.
If you’re ever interested in doing something random with a long time commitment that you might come to dislike part of the way through, like joining the French Foreign Legion, consider applying for an anthropology PhD first to study >insert random thing here<. You may hate the thing, but you could get a doctorate out of it!
Then again, I can find danger like that on my own, like my time in Colombia where I was too chicken to go into FARC and Venezuelan territories!
Why isn’t there more support for a parliamentary budget office? Get your shit together, Ricky Muir!
Tyler from RSD supports Peter Thiel’s thesis that non-deterministic attitudes to the future are instrumentally catastrophic, irrespective of their empirical truth value. Any variance that can’t be explained by known natural phenomena? Close your eyes to it! This may not make sense if you haven’t read Zero to One. Tyler’s video here. It’s always fascinating to watch very separate premises arrive at the same instrumental conclusions, independent of robust mechanisms promoting or identifying that idea in advance, let alone compatible systems of worldviews.
Cognitive reframes log
All conditions can be reframed to appear as if they were of service to you
Look at yesterday’s goals as an opportunity highlighted by your past self, not an obligation you’re tied to
Optimalism is hard but worth it
If I have been depressed from late primary school onwards because of the perceived pressure for high achievement and the teasing from family and friends over the prospect of failing to achieve, that doesn’t have to be attributed to a weakness of my volition. At the end of the day, if I’m getting teased, that person is being mean, and it’s not going to motivate me or get me to work more effectively; it’s just going to lower my self-esteem.
The things you are doing today are bringing you closer to tomorrow
Infographics
While Regulatory Spending and Output Increase, Economic Analysis of Regulations Is Often Incomplete
Regulation and Productivity
Did Deregulation Cause the Financial Crisis? Examining a Common Justification for Dodd-Frank
The Code of Federal Regulations: The Ultimate Longread
Why We Need Regulatory Reform, in Two Charts
How the Top Ten Regulators of 2012 Changed over Ten Years
...
US spending on science, space, and technology correlates with suicides by hanging, strangulation and suffocation
Juvenile arrests for pot possession (US) correlate with total US crude oil imports
Well, no. You can look for things you don’t know; or, hey, crazy idea: does it work?
it’s against Australian law to rate medical professionals
That’s true for some people, mostly in social environments where people aren’t good at giving advice.
[cross-posted from LW FB page. Seeking mentor or study pal for SICP]
Hello everyone,
I have decided to start learning programming, and I am beginning my journey with SICP—I’m just a few weeks in.
I am looking for a study partner or someone experienced to chat for around 1 hour a week on Skype about the topics covered in the book to verify what I know and hopefully speed up the learning process.
Would anyone be willing to take on the above?
Thank you!
From http://lesswrong.com/lw/js/the_bottom_line/
I remember a similar quotation regarding actions as opposed to thoughts. Does anyone remember how it went?
From: https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification
In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals.
What would a man/woman focused solely on their biological fitness do in the modern world? In animals, males procreate with as many females as they can, with the best scenario being that they have different males raise their offspring. Today, with morning-after pills and abortions, it is no longer enough for Pan to pinky-swear he will stay around. How does he alter his strategy? One I can quickly think of is sperm donation, but would that be his optimal strategy? I am certain that the hypothetical sultan of the old days could produce more children, but how do their relative fitnesses compare, taking into account that in the western world most places’ population growth has slowed, plateaued, or is even falling?
For females, egg donation seems like it should beat older methods hands down.
Would these really be the optimal strategies? In most cases, successful reproduction requires that both sides desire it. I am not sure that the level of attractiveness exists at which one could simply put their genes on offer, without a bank as an intermediary. On the other hand, I have heard tales of same-sex couples organizing such endeavors.
Consider Cecil Jacobson, though that trick probably only works once.
Cecil Jacobson lied and did not conform to standard practices, but the standard practice at the time was for the physician to conscript an arbitrary medical student. Aside from the times that he substituted his sperm for the husband’s, he just grabbed control of a variable that no one else cared enough to steer. The difference today is not so much that people worry about fraud, but that the patient exhibits control (and is allowed control by the establishment). Caring about fraud is a consequence of caring at all.
If Jacobson and a colleague had swapped sperm, it would have come to the same thing, while providing the claimed distance between the donor and the recipient. And it would have been close to the standard procedure, except that the donors would have been older and thus less fertile.
Maybe a possible strategy for a man could be to become a popular actor or singer, then become a famous advocate for gay/lesbian rights, and then publicly offer his sperm to lesbian pairs.
(The idea is that lesbian pairs need a sperm donor anyway, and why not choose someone who is both popular and sympathetic to their cause?)
Apparently being a postman in the 60s and having a good Johnny Cash impression worked out well …
http://infamoustribune.com/dna-tests-prove-retired-postman-1300-illegimitate-children/
Or, alternatively, not.
Let’s look at empirical evidence X-)
The most successful women are probably the primordial Eves. To emulate them, as a woman, you’ll have to build a time machine, travel to their time and their pocket of (kinda) humanity, and screw your brains out.
For a male, it looks likely that Genghis Khan was very, very successful at leaving a lot of offspring. So as a male your best bet is to gather a Horde and invade China and Russia.
You might want to read again the last four words in the first sentence of the comment you’re replying to. :-)
Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa’s stability and economic performance offer any instrumental benefit to a US citizen that does not care about the welfare of Africans terminally?
If you are talking about egoistic in the sense that, as a US citizen, you want outcomes that are generally good for US citizens:
Government consultant Simon Anholt argues in his TED talk that if a country does a lot of good in the world, that results in a positive national brand. The better reputation then makes a lot of things easier.
You are treated better when you travel in foreign countries. A lot of positive economic trade happens on the back of good brand reputations. Good reputations reduce war and terrorism.
Spending money on EA interventions likely has better returns for US citizens than spending money on waging wars like the Iraq war on a per-dollar basis.
I’m curious why this was downvoted. The last statement, which has political context?
I downvoted this because it was content-free bullshit. You asked :-/
Simon Anholt is someone who’s paid to consult governments, both Western governments and countries like Sierra Leone and Saudi Arabia. If he’s simply talking bullshit, why do governments seek him out as a highly paid advisor?
I guess the rejection is based more on the fact that his message seems to violate deep-seated values on your end about how reality should work than on his work being bullshit.
Because the official who made the proposal gets to look good for consulting with someone high status. There’s a reason consultants have the reputation they do in the business world and governments have even worse internal incentive problems.
My main point is that Simon Anholt is a high-status consultant and not a hippy. Lumifer rejects him because he thinks Simon Anholt is simply a person who isn’t serious but a hippy. He’s paid by governments to advise them on how to achieve foreign policy objectives.
The solution he proposes does happen to be more effective than the status quo of achieving foreign policy objectives.
He also gives data-driven advice in a field where most other consultants aren’t.
How about you let Lumifer speak for Lumifer’s rejection, rather than tilting at straw windmills?
I think it’s valuable for discussion to make clear statements. That allows other people to either agree with them or reject them. Both of those move the discussion forward. Being too vague to be wrong is bad.
There’s nothing straw about the analysis of how most people who are not aware of who Simon Anholt happens to be will pattern-match the argument. Simon Anholt makes his case based on non-trivial empirical research that Lumifer is very likely unaware of. If he were aware of the research, I don’t think he would have voted down the post and called it bullshit. I even believe that’s a charitable interpretation of Lumifer’s writing.
I didn’t downvote because it was already at minus one, but it seemed to apply mainly to government policies rather than private donations and be missing the point because of it, and “miss the point so as to bring up politics in your response” is not good.
I’m not exactly sure. My first guess would be karma-slash damage from other conversations.
There are definitely social benefits to being seen as generous. Also, a lot of infectious diseases originate in Africa, which might eventually spread into other countries if we don’t help control them in Africa. Overall I doubt the selfish benefits are sufficient to make it a good deal for a typical person.
Anti-polyamory propaganda which clearly had some thought put into constructing a persuasive argument, while doing lots of subtle or not-so-subtle manipulations. Always interesting to observe which emotional/psychological threads this kind of thing tries to pull on.
Well, given how ridiculously niche polyamory actually is in the real world, maybe we should be thinking of this as pro-polyamory propaganda instead. Or rather, I think this video is using polyamory as a stand-in/steelman for mere casual relationships with no formalized boundaries, which is what actually tends to happen IRL. But then most polyamorists would agree that these are generally a bad idea.
In a time where every link is a vote for Google’s rankings and people care about high click rates, why link to content like this? Why do you think it’s worth our attention?
Good point about Google. I’ve asked a question on stackexchange about how to avoid promoting a thing I’ve linked to. I’ll switch it over as soon as I know how.
And, for the reasons in my reply to gjm. I think it’s both interesting and useful rationality training to expose yourself to and analyze the psychological tools used in something you can easily pick out as propaganda. Here your brain will raise nice big red warning flags when it hits a trick, and you’ll be more able to notice similar things which may have been used to reinforce false beliefs by your own side’s propaganda. It’s also a good idea to have accurate models of why people come to the views they do, and what reinforces their norms.
(I don’t think this is super important at all, but I noticed a few tricks which I had not specifically thought about before, and figured other people may get something similar out of it.)
If that’s your goal, read a book like Cialdini’s Influence. That’s time much better invested in understanding tricks than watching propaganda directly yourself, especially propaganda that isn’t annotated.
If you noticed tricks you hadn’t thought about before, why not write about them when directing people to the propaganda piece? Written reflection is quite a useful tool for building mental models of concepts.
That way we would have something to talk about here, and I wouldn’t object to having the link as an illustration.
I figured anti-polyamory propaganda did not need annotations on LessWrong. It’s telling that all but one reply took it as something which needs to be suppressed/counterargued, despite me calling it propaganda and saying it was interesting as an example of psychological tricks people pull. No one here is going to be taken in by this. I would not have posted this on facebook or another more general audience site.
I did not feel like writing it up in any detail would be a good use of my time; the examples to use for future pattern-matching are pretty obvious in the video and don’t need spelling out. I just wanted to drop the link here because I’d found it mildly enlightening, and figured others might have a similar experience. I take the negative feedback as meaning that content people are politically sensitive to is not welcome, even if it’s rationality-relevant (resistance to manipulation tricks) and explicitly non-endorsed. That’s unfortunate, but okay.
What does it tell?
Why do you see “suppressed” and “counterargued” as similar?
Who suggested it needs to be suppressed?
You had four replies. One, from Elo, queries a claim made in a document from the Austin Institute on the same topic as the video. One, from bogus, suggests that although the video is meant to be anti-poly, maybe it’s effectively pro-poly because it’s consciousness-raising. One, from ChristianKl, suggests that you’re giving googlejuice to an unpleasant piece of propaganda and asks why you posted it. One, from me, gives readers some information about the likely motivations of the organization that put out the video. No one tried to suppress it; no one said it should be suppressed. No one offered counterarguments, though one person questioned a claim. No one said it needs to be argued against.
So your description of what happened doesn’t seem to me to match reality.
That seems to me like an overgeneralization and an overreaction.
If you choose so. You could also choose to take it as feedback that linking to a 15-minute-long video will mostly annoy people.
If a trick is trivially seen, there is likely no update made by seeing the trick in action, and I don’t see the argument for the value of seeing it in action. To the extent you claim you saw new tricks that you weren’t aware of before, that raises the question of how you conceptualize those newly seen tricks.
Political sensitivity has nothing to do with my assessment.
You could label any bad source on the internet rationality-relevant by saying that it serves to show bad reasoning in action. You haven’t provided any argument for why this particular piece of propaganda is more worthy of attention than other pieces of propaganda.
Apart from that I’m doubtful that the mechanism you propose actually leads to resistance to manipulation tricks. Adopting new habits is hard.
If the video led you to see manipulation attempts, ones you previously hadn’t seen, in content that supports your own position, that would be interesting information to talk about. So far I haven’t seen that the video had that effect on you, let alone that it would have that effect on other potential viewers.
It’s easily seen in this context, because of the material covered and the fact that they don’t try very hard to be subtle about it. In other contexts the same set of tricks may slip past, unless you have an example to pattern match to (not a whole new habit). Immunization using a weak form of memetic attack you’re primed to defend against.
The literature on people’s ability to learn about tricks and then resist them suggests that it’s hard. Transfer is hard.
The organization that put this out has a pretty clear sociopolitical agenda.
(The second and third links there are from sites with a definite leftish tilt. It doesn’t look to me as if they’re telling any lies about the Austin Institute, but they’re unlikely to be sympathetically disposed.)
Of course, they’re very clearly trying to push a right wing traditional morals agenda, with a bit of dressing up to make it appear balanced to the unobservant. Their other major video is even more overtly propaganda.
I just find it fascinating to watch this kind of attempt at manipulating people’s views, especially when a bunch of smart people have clearly tried to work out how to get their message across as effectively as possible. Being aware of those tricks seems likely to offer some protection against them being used to push me in ways to which I may be more susceptible, and knowing the details of what has been used to shape certain opinions means I am better prepared if I get into a debate with people who have been persuaded by them.
From http://www.austin-institute.org/wp-content/uploads/2016/02/On-Nonmonogamy.docx (the source behind the video). [citation needed]
I have a question: It seems to me that Friendliness is a function of more than just an AI. To determine whether an AI is Friendly, it would seem necessary to answer the question: Friendly to whom? If that question is unanswered, then “Friendly” seems like an unsaturated function like “2+”. In the LW context, the answer to that question is probably something along the lines of “humanity”. However, wouldn’t a mathematical definition of “humanity” be too complex to let us prove that some particular AI is Friendly to humanity? Even if the answer to “To whom?” is “Eliezer Yudkowsky”, even that seems like it would be a rather complicated proof to say the least.
Any proofs will be like this: assuming that certain laws of aerodynamics hold and conditions stay within some range, prove that a certain plane design will fly. Which of course runs into trouble, because we don’t know the equivalent of aerodynamics either.
That would seem to be the best possible solution, but I have never heard aeroplane engineers claim that their designs are “provably airworthy”. If you take the aeroplane design approach, then isn’t “provably Friendly” a somewhat misleading claim to make, especially when you’re talking about pushing conditions to the extreme that you yourself admit are beyond your powers of prediction? The aeroplane equivalent would be like designing a plane so powerful that its flight changes the atmospheric conditions of the entire planet, but then the plane uses a complicated assembly of gyroscopes or something to continue flying in a straight line. However, if you yourself cannot predict which specific changes the flight of the plane will make, then how can you claim that you can prove that particular assembly of gyroscopes is sufficient to keep the plane on the preplanned path? On the other hand, if you can prove which specific changes the plane’s flight will make that are relevant to its flight, then you have a mathematical definition of the target atmosphere at a sufficient depth of resolution to design such an assembly. Does MIRI think it can come up with an equivalent mathematical model of humanity with respect to AI?
That’s the reason EY came up with the concept of CEV—Coherent Extrapolated Volition.
The SEP says that preferences cannot be aggregated without additional constraints on how the aggregation is to be done, and the end result changes depending on things like the order of aggregation, so these additional constraints take on the quality of arbitrariness. How does CEV get around that problem?
From the CEV paper:
Which you could sum up as “CEV doesn’t get around that problem, it treats it as irrelevant—the point isn’t to find a particular good solution that’s unique and totally non-arbitrary, it’s just to find even one of the good solutions. If arbitrary reasons shift us from Good World #4 to Good World #36, who cares as long as they both really are good worlds”.
The real difficulty is that when you combine two sets of preferences, each of which makes sense on its own, you get a set of preferences that makes no sense whatsoever: http://plato.stanford.edu/entries/economics/#5.2 https://www.google.com/search?q=site%3Aplato.stanford.edu+social+choice&ie=utf-8&oe=utf-8
There is no easy way to resolve this problem. There is also no known method that takes such an inconsistent set of preferences as input and gives a consistent set of preferences as output, such that the output would be recognizable to either party who contributed an original set of preferences as furthering any of their original goals. These random decisions are required so often in cases where there isn’t unanimous agreement that, in practice, there would be a large component of arbitrariness every single time CEV tries to arrive at a uniform set of preferences by extrapolating the volitions of multiple agents into the future.
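The inconsistency can be seen in miniature with the classic Condorcet profile (a hypothetical three-agent example, not taken from the CEV paper): each agent holds a perfectly transitive ranking over three options, yet aggregating them by pairwise majority vote produces a cyclic, and therefore unusable, collective preference.

```python
# Three agents, each with an individually transitive ranking (best first).
# This is the standard Condorcet profile; option names are purely illustrative.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of agents rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majority yields a cycle: A beats B, B beats C, C beats A.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Any method that breaks such a cycle must add some constraint beyond the agents’ stated preferences, which is exactly where the arbitrariness comes in.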
This doesn’t mean the problem is unresolvable, just that it’s an AI problem in its own right, but given these problems, wouldn’t it be better to pick whichever Nice Place to Live is the safest to reach instead of bothering with CEV? I say this because I’m not sure Nice Place to Live can be defined in terms of CEV, as any CEV-approved output. Because of the preference aggregation problem, I’m not certain that a world that is provably CEV-abiding also provably avoids flagrant immorality. Two moral frameworks when aggregated by a non-smart algorithm might give rise to an immoral framework, so I’m not sure the essence of the problem is resolved just by CEV as explained in the paper.
Although what if we told each party to submit goals rather than non-goal preferences? If the AI has access to a model specifying which actions lead to which consequences, then it can search for those actions that maximize the number of goals fulfilled regardless of which party submitted them, or perhaps take a Rawlsian approach of trying to maximize the number of fulfilled goals submitted by whichever party will have the fewest goals fulfilled if that sequence of actions were taken, etc. That seems very imaginable to me. You can then have heuristics that constrain the search space and stuff. You can also have non-goal preferences in addition to goals if the parties have any of those.
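The Rawlsian variant of that search could be sketched as follows; the plans, parties, and goal names here are all invented for illustration, and a real system would consult a world model rather than a hard-coded table of consequences.

```python
# Hypothetical toy model: each candidate plan maps to the set of goals it fulfils.
plans = {
    "plan1": {"peace", "wealth", "chocolate"},
    "plan2": {"peace", "longevity", "chocolate"},
    "plan3": {"wealth", "longevity"},
}

# Goals submitted by each party (names invented for illustration).
parties = {
    "alice": {"peace", "chocolate"},
    "bob": {"longevity", "peace"},
}

def worst_off_score(fulfilled):
    """Rawlsian criterion: count of fulfilled goals for the worst-off party."""
    return min(len(goals & fulfilled) for goals in parties.values())

# Pick the plan that maximizes the worst-off party's fulfilled-goal count.
best = max(plans, key=lambda p: worst_off_score(plans[p]))
print(best)  # plan2: both parties get 2 of their goals fulfilled
```

Note that this sidesteps the cycle problem only because goals are treated as an unordered set to be counted, not ranked; the hard part remains extracting true goals from stated preferences.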
In that light, it seems to me that the problem was inferring goals from a set of preferences which were not purely non-goal preferences but were actually presented with some unspecified goals in mind. E.g. one party wanted chocolate, but said, “I want to go to the store” instead. If that was the source of the original problem, then we can see why we might need an AI to solve it, since it calls for some lightweight mind reading. Of course, a CEV-implementing AI would have to be a mind reader anyway, since we don’t really know what our goals ultimately are given everything we could know about reality.
This still does not guarantee basic morality, but parties should at least recognize some of their ultimate goals in the end result. They might still grumble about the result not being exactly what they wanted, but we can at least scold them for lacking a spirit of compromise.
All this presupposes that enough of our actions can be reduced to ultimate goals that can be discovered, and I don’t think this process guarantees we will be satisfied with the results. For example, this might erode personal freedom to an unpleasant degree. If we would choose to live in some world X if we were wiser and nicer than we are, then it doesn’t necessarily follow that X is a Nice Place to Live as we are now. Changing ourselves to reach that level of niceness and wisdom might require unacceptably extensive modifications to our actual selves.
My recent paper touches upon preference aggregation a bit in section 8, BTW, though it’s mostly focused on the question of figuring out a single individual’s values. (Not sure how relevant that is for your comments, but thought maybe a little.)
Thanks, I’ll look into it.
(And all my ranting still didn’t address the fundamental difficulty: There is no rational way to choose from among different projections of values held by multiple agents, projections such as Rawlsianism and utilitarianism.)
I think that’s on the list of MIRI open research problems.
Interesting. In that case, would you say an AI that provably implements CEV’s replacement is, for that reason, provably Friendly? That is, AIs implementing CEV’s replacement form an analytical subset of Friendly AIs? What is the current replacement for CEV anyway? Having some technical material would be even better. If it’s open to the public, then I’d like to understand how EY proposes to install a general framework similar to CEV at the “initial dynamic” stage that can predictably generate a provably Friendly AI without explicitly modeling the target of its Friendliness.
There isn’t really one as far as I know; “The Value Learning Problem” discusses some of the questions involved, but seems to be mostly at the point of defining the problem rather than trying to answer it. (This seems appropriate to me; trying to answer the problem at this point seems premature.)
Thanks. That makes sense to me.
I think that’s MIRI’s usage of the term friendly.
He’s not proposing a mechanism as far as I know. That’s another open problem.
See MIRI’s research for details.
The reality of market failures that some ‘contrarians’ like to ignore
The moral high ground of free markets is that those who produce what others consume are rewarded with greater power, since they have proved their worth by serving or producing for others. However, in a morally optimal economic system, the movement of wealth shouldn’t enrich those with an antisocial agenda. This is allowed to happen because of the simplistic understanding of preferences and consumption held by small-game fallacists, who conjure up an evolution meant to serve you and then, once they’ve locked into their economic reasoning, become non-empirical ideologues rather than treating truth-seeking as an optimisation process. It could probably be solved by adopting the method of competing hypotheses: if you are one of these permissive free-marketeers, try observing any non-simulated market in operation.
The upside of markets for recreational drug abuse and other self-harmful products is that they redistribute money away from self-destructive, short-sighted people toward long-sighted, other-destructive people... what a waste of human capital!
Next time you see some people light cigarettes, bring out the Nelson in you and point and laugh. Sure, it might hurt their feelings in the short term, but you’re practically saving the lives of countless future generations by influencing a culture change. Drug abuse isn’t cool, and taking pussyfooting approaches won’t help. Your local school bully can make My Little Pony uncool by beating up the MLP kid, but your government doesn’t care enough to do the same.
-Tom Frieden
There’s a sense in which drug abusers have a right to be teased or bullied out of their behaviour! It’s been said best by others:
-Jeremy Waldron, professor of law at New York University School of Law, writing in response to critics of second-generation rights: