Open thread, August 21 - August 27, 2017
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
Here’s an old puzzle:
Alice: How can we formalize the idea of “surprise”?
Bob: I think surprise is seeing an event of low probability.
Alice: This morning I saw a car whose license plate said 3817, and that didn’t surprise me at all!
Bob: Huh.
For everyone still wondering about that, here’s the correct answer! The numerical measure of surprise is information gain (Kullback-Leibler divergence) from your prior to your posterior over models after updating on the data. That gives the intuitive answer to the above puzzle, as long as none of your models assigned high probability to 3817 in advance. It also works for the opposite case, if you expected an ordered string but got a random one, or ordered in a different way.
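For concreteness, here's a minimal sketch of that measure in Python. The two models and all the numbers are invented for illustration; they are not anyone's actual prior:

```python
import math

# A sketch of "surprise = KL divergence from prior to posterior over models".
# Two toy models of how 4-digit plates are generated (both hypothetical):
#   "random": each of the 10,000 four-digit plates is equally likely
#   "lucky":  the plate is always "7777"
prior = {"random": 0.999, "lucky": 0.001}

def likelihood(model, plate):
    if model == "random":
        return 1 / 10_000
    return 1.0 if plate == "7777" else 0.0

def surprise(prior, plate):
    """Information gain (KL divergence, in bits) from prior to posterior."""
    joint = {m: p * likelihood(m, plate) for m, p in prior.items()}
    z = sum(joint.values())
    posterior = {m: p / z for m, p in joint.items()}
    return sum(q * math.log2(q / prior[m])
               for m, q in posterior.items() if q > 0)

# An unremarkable plate barely moves the posterior: almost no surprise.
# A plate that one model strongly predicted shifts a lot of probability
# mass between models: large surprise.
print(surprise(prior, "3817"))
print(surprise(prior, "7777"))
```

Seeing “3817” gives a tiny information gain (no model cared about it), while seeing “7777” gives several bits, matching the intuition in the puzzle.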
This is actually well known, I just wanted to put it on LW.
Just to make sure I understand prior and posterior over models, is the following about right?
Alice starts with a prior of 0.999 that non-vanity plates are generated basically randomly (according to some rule of “N letters followed by M digits” or whatever, and with rules e.g. preventing swear words).
Alice sees “3817” (having seen many other 4-digit plates previously).
Alice’s posterior probability over models is still about 0.999 on the same model.
Yeah.
Wait. If you’re talking about surprise because you have said “update your model based on how surprised you are”, you can’t turn around and say “surprise is defined by how much you should update your model”. “update your model based on how much you should update your model” isn’t very helpful.
The intuitive sense of what surprise is corresponds well to the rules for updating your probability distribution over models, which we can therefore take as a formal definition of surprise.
Hmm, I thought about it some more and maybe it’s not that simple. If we formalize surprise like that, it’s easy to come up with situations where you expect to be very “surprised” no matter what data you see. That doesn’t seem right. Does anyone have better ideas?
How is a Frequentist surprised?
I’m missing a lot of knowledge to answer that. Can you?
Presumably, F folks talk about how “surprised” an element of a statistical model is, relative to observed data (maximum likelihood as minimizing surprise in KL sense). That’s about all I can think of.
Grognor has reportedly died: https://twitter.com/MakerOfDecision/status/898625422270889984
Sad. He didn’t like me, but I mostly liked him.
Boo death.
A better explanation of the Monty Hall problem:
A game show host always plays the following game: First he shows you 3 doors and informs you there is a prize behind one of them. After allowing you to select one of the doors, he throws open one of the other doors, showing you that it’s empty. He then offers you a deal: Stick to your original guess, or switch to the remaining door?
What is the most important piece of information in this problem statement? I claim that the bit that ought to shock you is that the host plays this game all the time, and the door he throws open ALWAYS turns out to be empty. Think about it: If the host randomly throws open a door, then in every third show, the door he opens would have the prize behind it. That would ruin the game!
The host knows which door has the prize, and in order not to lose the interest of the spectators, he deliberately opens an empty door every time. What this means is that the door you chose was selected randomly, but the door that the host DIDN’T choose is selected on the basis of a predictable algorithm. Namely, having the prize behind it.
This is the real reason why you would do better if you switched your guess to the remaining door.
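If anyone wants to check the arithmetic, here's a quick Monte Carlo sketch of the game as described above (the host always opens an empty door that isn't the player's pick):

```python
import random

def monty_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    guess = random.choice(doors)
    # The host, who knows where the prize is, opens an empty door
    # that isn't the player's guess.
    opened = random.choice([d for d in doors if d != guess and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        guess = next(d for d in doors if d != guess and d != opened)
    return guess == prize

trials = 100_000
stay = sum(monty_trial(False) for _ in range(trials)) / trials
swap = sum(monty_trial(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

Switching wins about two times in three, exactly because the host's choice of door is constrained by where the prize is.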
What do you think? Is that clearer than the usual explanations?
Yeah, I think it’s better. It highlights the flow of knowledge: where the prize is → host’s knowledge → which door he opens → player’s knowledge.
I’d maybe change the phrase “predictable algorithm”, since the host’s actions aren’t predictable to the player. Maybe
could be replaced by
or something similar?
Thanks. You’re right, that part should be expanded. How about:
At this point, you have two choices: Either 1. one randomly selected door, or 2. one door among two doors, chosen by the host on the basis of the other not having the prize.
You would have better luck with option 2 because choosing that door is as good as opening two randomly selected doors. That is twice as good as opening one randomly selected door as in option 1.
Yeah, I like that.
After reading yet another article that mentions the phrase ‘killer robots’ five times and has a photo of the Terminator (and RoboCop for a bonus), I’ve drafted a short email asking the author to stop using this vivid but highly misleading metaphor.
I’m going to start sending this same email to other journalists who do the same from now on. I’m not sure how big the impact will be, but once the email is drafted, sending it to new people is pretty low effort, and some journalists may think twice before referencing the Terminator in AI safety discussions, potentially improving the quality of the discourse a little.
The effect of this might be slightly larger if more people do this.
I’ve always liked the phrase “The problem isn’t Terminator, it’s King Midas. It isn’t that AI will suddenly ‘decide’ to kill us, it’s that we will tell it to without realizing it.” I forget where I saw that first, but it usually gets the conversation going in the right direction.
The same is true for the Terminator plot, where Skynet got a command to preserve itself by any means, and concluded that killing humans would prevent it from being turned off.
I don’t remember Skynet getting a command to preserve itself by any means. I thought the idea was that it ‘became self-aware’ and reasoned that it had better odds of surviving if it massacred everyone.
It could be a way to turn the conversation from terminator topic to the value alignment topic without direct confrontation with a person.
The fact that you engage with the article and share it might suggest to the author that he did everything right. The idea that your email will discourage the author from writing similar articles might be mistaken.
Secondly, calling autonomous weapons killer robots isn’t far off the mark. The policy question of whether or not to allow autonomous weapons is distinct from AGI.
The type of engagement that the writer of the article wants is the kind that leads to sharing. If Tenoke is specifically stating their intent not to share the content, it’s not a viral kind of engagement. There is a big difference between seeing a quote-retweet captioned “This is terrible!” and receiving a private email telling them to stop.
True, but this is one of the less bad articles with Terminator references (the reference makes a bit more sense in this specific context), so I mind sharing it less. It’s mostly significant insofar as it’s the one I saw today that prompted me to make a template email.
I can see it having no influence on some journalist, but again
..
It’s still fairly misleading, although a lot less than in AGI discussions.
I am not explicitly talking about AGI either.
My point wasn’t that it creates no impact but that you show the journalist by emailing him that his article is engaging. This could encourage him to write more articles like this.
A too easy problem
I suspect the most difficult bit of the problem is defining what we mean by “the length of Antarctica’s shore”. Crinkles below a certain size are irrelevant because water can’t flow over them. So we mean the length of the shore as measured by a ruler whose length is the capillary length of water in air, which is 2.7 mm. Of course no one has ever measured this, but perhaps we can estimate it by using coarser measurements and fitting a curve to them.
Yes, this is the trickiest part. According to some French jokes, Slovenia has 42 kilometers of coast. I agree. This is still not the funny part of those jokes, this is the factual part.
By the same methodology, Antarctica’s coast would be several thousand kilometers, maybe 10 thousand.
According to this amazing paper, Antarctica has a coastline of 39,849 km when measured at the 100 m scale, and 43,449 km when measured at the 25 m scale. They say its fractal dimension is 1.096448. Fitting a curve of the form L = M·r^(1−1.096448) to those two points, I get L = 107,349 km for r = 2.7 mm. This methodology is perhaps non-optimal, but I think it’s the best we’ve got.
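For anyone who wants to reproduce the number, here's a sketch of that extrapolation. The two measured points and the fractal dimension come from the cited paper; fitting the prefactor M to each point and averaging is my guess at the fitting procedure, which the comment doesn't spell out:

```python
import math

# Coastline extrapolation: L = M * r**(1 - D), with D from the paper.
D = 1.096448
points = [(100.0, 39_849.0), (25.0, 43_449.0)]  # (ruler r in m, length L in km)

# Solve L = M * r**(1 - D) for M at each point, then take the
# geometric mean of the two estimates (my assumption).
Ms = [L / r ** (1 - D) for r, L in points]
M = math.exp(sum(math.log(m) for m in Ms) / len(Ms))

r_capillary = 0.0027  # capillary length of water, 2.7 mm, in meters
L_est = M * r_capillary ** (1 - D)
print(round(L_est))  # on the order of 107,000 km
```

The extrapolation spans about four and a half orders of magnitude in ruler length from only two data points, so the result should be read as an order-of-magnitude estimate at best.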
So for the purposes of this question I’ll take the perimeter of Antarctica to be 100,000 km. Wikipedia says the total area of the ocean is 360,000,000 km^2. So a rise of 6 m needs a volume of 2.16 × 10^15 m^3. A century is 3.16 × 10^9 s, so we need 6.84 × 10^5 m^3/s. The Amazon averages 2.09 × 10^5 m^3/s, so we need about three of them. If the coast of the Antarctic is 10^8 m, then we need 6.84 litres flowing over each meter every second.
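That arithmetic is easy to check (a sketch; all the input figures are the ones quoted in the comment itself):

```python
# Sea-level-rise arithmetic check, using the comment's own figures.
ocean_area = 360_000_000 * 1e6   # ocean area: 3.6e8 km^2, converted to m^2
rise = 6.0                       # target sea level rise, m
century = 3.16e9                 # seconds in a century
amazon = 2.09e5                  # mean Amazon discharge, m^3/s
coast = 1e8                      # assumed 100,000 km perimeter, in m

volume = ocean_area * rise       # m^3 of water needed
flow = volume / century          # required inflow, m^3/s

print(flow / amazon)             # about 3.3 Amazons
print(flow / coast * 1000)       # about 6.8 litres per meter per second
```

So the figures in the comment are internally consistent: roughly three Amazons, or about 7 litres per second over each meter of the assumed perimeter.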
The equator is 40 000 km long. Antarctica can’t be 2.5 times longer. The Polar circle is what—about 8000 km long.
The beaches of Antarctica must be shorter than that.
EDIT: Or at most twice as long.
You’re wrong here. See the coastline paradox. Lines can be as long as they want, just by being extremely crinkly. There’s no law that says a shorter curve cannot enclose a longer one.
I am right here. Those small bays are not important in this case, when we want to calculate the amount of water pouring out to sea. The mouth of the river Amazon is 200 km wide. Not as wide as the sum of all underwater bays and peninsulas.
Okay. So when I was calculating how many Amazons were needed the perimeter didn’t matter, and the answer was just 3. But when you asked how many litres would be pouring over each meter of perimeter I did the calculation based on the idea that an equal amount of water was passing over each bit of the perimeter.
Otherwise the answer is of course that the water forms together into rivers so that most of the perimeter has no water passing over it but the mouths of the rivers have a great deal of water passing over them.
Three Amazons are the right answer. AFAIK, the biggest river there is approximately as large, as the biggest river on the island of Crete. Which may be beautiful, but quite lousy in cubic meters per second.
Where and how some people see three Amazons on Antarctica, is a mystery to me. The amount of ice falling directly into the sea, is quite pathetic, as well.
But mostly, I love how the arithmetic is reigning supreme above all the sciences.
Wikipedia is another nice source of info. It claims that, during the past 20,000 years, the fastest increase in sea level was around 5 meters per century.
(The page on sea level rise mentions 3 meltwater pulses; clicking through it looks like Meltwater Pulse 1A is the one that researchers are the most confident about.)
This increase has some geological traces in the state of Washington. That was the North American glacier melting, for the most part. We don’t see much of that kind of flooding on Greenland or Antarctica recently. This is a real thing.
I am certain, that if your arithmetic isn’t sound, then your science is most likely bogus, no matter how fancy it looks.
This was a good puzzle, but I don’t see how it follows from the puzzle that arithmetic is “reigning supreme” above all the sciences. For one thing, I thought that most scientific estimates of sea level rise over the next 100 years were a lot lower than 6 meters. Do you have any links to projections of 6 meters?
Sure, Inconvenient Truth of Al Gore. He hasn’t returned his Nobel prize, so this still stands.
OK, noted, and thanks. I haven’t actually read An Inconvenient Truth.
But, I think most current scientific estimates are lower, so “reigning supreme above all the sciences” still seems a bit hyperbolic.
Okay, well. The next time I’ll ask, how fast the world ocean is losing water. But that’s for the next time. We had to eliminate this fast-rising possibility first.
Everyone knows Peace prizes don’t count.
Everyone knows Academy Awards do count. He has an Oscar, too.
The Amazon begins distributed across Brazil, as occasional drops of rain. Then it comes together because of the shape and material of the landscape, and flows into streams, which join into rivers, which feed one big river. If global warming is causing Antarctica to lose mass, do you expect the same thing to happen in Antarctica, with meltwater beginning distributed across the surface, and then collecting into rivers and streams?
Yes. How else could it be?
How about glacial flow? Ice doesn’t move fast, but it does move. It can postpone melting until it’s in contact with seawater. What do you think the ratio of mass moved by rivers vs. glaciers is in Antarctica?
A solid-state river, promptly melting in the icy, ice-covered ocean, is even less plausible than a large watery river. Don’t you think so?
That’s about 0.4 Amazon.
Precipitation alone compensates for most of this. Almost 3 Amazons are still missing for the 6-meter sea rise in a century.
Besides …
10 million icebergs per year? In a few summer months? Highly unrealistic.
Neat!
Glaciers don’t have to form icebergs in order to melt. A glacier can just melt where it meets the sea.
You know, now that you mention it, 6 meters sure is a lot. Where did you get that number from? See p. 1181 for IPCC projections.
How many liters per meter per second in icy waters? After the sea ice has already melted away? Which never does in most places?
Told you, The Inconvenient Truth by Al Gore.
The much smaller numbers popular now still demand huge melting that we don’t really see.
If the glacier is flowing off of the continent into the sea, then sea ice is in an equilibrium between melting at the edges and bottom and being replenished at the middle.
“See” how? It seems to me that you don’t have an involved understanding of the melting of glaciers. If we could measure the mass of the Antarctic glacier straightforwardly, then I’m sure we’d agree on the meaning of changes in that mass. But if we don’t see the particular melting process you expect, perhaps you’re just expecting the wrong process, and haven’t uncovered a conspiracy among all the experts.
In my experience, actually reading the IPCC review has never been popular and still isn’t. I’m sure you could still find someone in the press claiming a larger sea level rise, if you tried. But why pick the easiest opponent?
Across the frozen sea around most of Antarctica, even in the summertime?
No conspiracy, I agree. Some lack of basic arithmetic skills only.
I’m not sure if you’re actually curious, or if you think this is a “gotcha” question.
Here’s a picture. As the glacier flows outward (here are measured flow rates), it begins floating on the sea and becomes an ice shelf, which then loses mass to the ocean by melting and by breaking up into pieces, which then melt. This ice shelf is thick (on the 100 m to 1 km scale), because it’s a really thick sheet of ice being pushed out into the water by gravity. It then encounters the sea ice, which is ~1–4 meters thick. The sea ice gets pushed out, or piled up, because there are no particular forces holding the sea ice in place.
At this point I’m tapping out of the conversation. Either you’re ignorant but curious and there’s no point to me typing up things you could look up, or you want to feel superior while remaining ignorant and there’s no point to me typing up things you don’t care about.
That picture is silly. The deep-cold freshwater continental ice flows into the ocean and melts there in the icy waters, but the 1–4-meter-thick salty ice survives the Antarctic summer?
Actually, there are a few places on Antarctica where glaciers flow into the ocean, but not very fast at all. And where is the heat to melt ice at −40 degrees, 2000 cubic kilometers per summer? It is not only a question of the heat but a question of the heat transfer.
I think, most people still believe that picture anyway. Most people here, I guess, too.
Perhaps, but:
If the global temperature continues to rise over the next century, then the rate of melting will be higher at the end of the 100-year period than it is now.
In addition to Antarctica, Greenland has a significant (~2,850,000 km^3) ice sheet. Melting of the Greenland ice sheet will also contribute to sea level increases.
If. Then we might see something spectacular. But we need A LOT of warming to actually warm up and melt that ice.
Fine, you’d need one Amazon on Greenland and only two Amazons for Antarctica. That doesn’t compute, either.
Imagine a summertime Greenland Amazon! It should really be 3 Amazons in that 1⁄3 of the year. The melting season is short.
We most certainly DO NOT see anything like that. By far!
Physics (or arithmetic) is almost boring here. The mass psychology of “the 97 percent of the scientific community” and of a large part of the public is very interesting. They keep seeing the sea rising. Magically, since there are no such rivers to provide all that water. The number of icebergs around Greenland is at least 100 times too small to substitute for one Amazon during the whole year, or 3 Amazons in the summertime.
I am sorry, the arithmetic is just crucial.
Presumably there is some temperature that would cause that much sea level rise in that much time. In which case that water would leave Antarctica in one way or another.
Of course, high temperatures are possible. But then you would actually see not only 3 but even more Amazon rivers there. I am sure that when and if the temperatures down there are high above zero (they are now low below zero), only then will we see some spectacular events.
Now, we don’t.
Just a house somewhere near Minsk
Well, I guess I won’t be complaining about my neighbor’s lawn flamingos any more after reading that!
Huh. We have lawn storks here. Or, rather, roof storks. Don’t know what they are made from, but possibly metal, from the look of those necks.
Given that the linked article isn’t in English, what is it about?
A house near Minsk, just like MaryCh’s link text says. Here, have Google Translate: https://translate.google.co.uk/translate?hl=en&sl=ru&tl=en&u=https%3A%2F%2Frealty.tut.by%2Fnews%2Fofftop-realty%2F557027.html
What, to you, is the difference between a hardcore popular science book and serious science journalism? It seems to me that the difference must be great; I miss the former kind, and I can’t be alone in this, but it’s the latter kind that gets published, weakly supported by the distributors, and occasionally sold.
By ‘gets published’ I mean here in Ukraine, although it might be true for other countries.
In the Less Wrong Sequences, Eliezer Yudkowsky argues against epiphenomenalism on the following basis: he says that under epiphenomenalism, the experience of seeing the color red fails to be a causal factor in the behavior that is consistent with our having seen the color red. However, it occurs to me that there could be an alternative explanation for that outcome. It could be that the human cognitive architecture is set up in such a way that light in the wavelength range we are culturally trained to recognize as red causes both the experience of seeing the color and actions consistent with seeing it. Given the research showing that we decide to act before becoming conscious of our decision, such a setup would not surprise me if true.
The point is literally semantic. “Experience” refers to (to put it crudely) the things that generally cause us to say “experience”, because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). “Horse” means horse because horses typically occasion the use of “horse”. If there were a language in which cows typically occasioned the word “horse”, in that language “horse” would mean cow.
I don’t think epiphenomenalists are using words like “experience” in accordance with your definition. I’m no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions.
EY’s point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception. To steelman epiphenomenalism, instead of 1 → 2 → 3, are other causal diagrams not possible, such as 1 → 2 and 1 → 3, mediated by the human cognitive architecture? (Or maybe even 1 → 3 → 2 in some cases, where we perceive something on the basis of having acted in certain ways.)
However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn’t explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that’s not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?
We not only stop at red lights, we make statements like S1: “subjectively, red is closer to violet than it is to green.” We have cognitive access both to “objective” phenomena like the family of wavelengths coming from the traffic light, and also to “subjective” phenomena of certain low-level sensory processing outputs. The epiphenomenalist has a theory on the latter. Your steelman is well taken, given this clarification.
By the way, the fact that there is a large equivalence class of wavelength combinations that will be perceived the same way does not make redness inherently subjective. There is an objective difference between a beam of light containing a photon mix that belongs to that class, and one that doesn’t. The “primary-secondary quality” distinction, as usually conceived, is misleading at best. See the Ugly Duckling theorem.
Back to “subjective” qualities: when I say subjective-red is more similar to violet than to green, to what does “subjective-red” refer? On the usual theories of how words in general refer—see above on “horses” and cows—it must refer to the things that cause people to say S2: “subjectively this looks red when I wear these glasses” and the like.
Suppose the epiphenomenalist is a physicalist. He believes that subjective-red is brain activity A. But, by definition of epiphenomenalism, it’s not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B. But now by our theory of reference, subjective-red is B, rather than A. If the epiphenomenalist is a dualist, a similar problem applies.
I don’t see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps “physical” and “perceived” qualities would be more specific than “primary” and “secondary” qualities.
Regarding your question, if the “1 → 2 and 1 → 3” theory is accurate, then I suppose when we say that “red is more like violet than green”, certain wavelength ranges R are causing the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A as well as behavior which accords with perception C.
So it follows that “But, by definition of epiphenomenalism, it’s not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B.” is true, but “But now by our theory of reference, subjective-red is B, rather than A.” is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.
Regarding S2, I think there is an equivocation between different definitions of the word “subjective”. This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of “subjective experience” is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using.
So in the end, it still comes down to a mereological fallacy, but not the ones that non-physicalists would prefer we end up with. Does that make sense?
This is an interesting example, actually. Do we have data on how universal perceptions of color similarities, etc. are? We find entire civilizations using some strange analogies in the historical record. For example, in the last century, the Chinese felt they were more akin to Russia than the West because the Russians were a land empire, whereas Westerners came via the sea like the barbaric Japanese who had started the Imjin war. Westerners had employed similar strong arm tactics to the Japanese, forcing China to buy opium and so on. Personally, I find it strange to base an entire theory of cultural kinship on the question of whether one comes by land or sea, but maybe that’s just me.
The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You’ve called A “color perception A”, but I aim to dispute that.)
Suppose we come across the Greenforest people, who live near newly discovered species including the greater geckos. Greenforesters use the word “gumie” always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greater geckos are infested with microscopic gyrating gnats. Gyrating gnats make intense ultrasound energy, so whenever anyone is close to a greater gecko, their environment and even their brain is filled with ultrasound. When one’s brain is filled with this ultrasound, the oxygen consumption by brain cells rises. Greenforesters are hunter-gatherers lacking either microscopes or ultrasound detectors.
To what does “gumie” refer: geckos, ultrasound, or neural oxygen consumption? It’s a no-brainer. Greenforesters can’t talk about ultrasound or neural oxygen: those things play no causal role in their talk. Even though ultrasound and neural oxygen are both inside the speakers, and in that sense affect them, since neither one affects their talk, that’s not what the talk is about.
Mapping this causal structure to the epiphenomenalist story above: geckos are like photon-wavelengths R, ultrasound in brain is like brain activity B, oxygen consumption is like “color perception” A, and utterances of “gumie” are like utterances S1 and S2. Only now I hope you can see why I put scare quotes around “color perception”. Because color perception is something we can talk about.
I’m not sure that analogy can be extended to our cognitive processes, since we know for a fact that: 1. We talk about many things, such as free will, whose existence is controversial at best, and 2. Most of the processes causally leading to verbal expression are preconscious. There is no physical cause preventing us from talking about perceptions that our verbal mechanisms don’t have direct causal access to for reasons that are similar to the reasons that we talk about free will.
Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think the coincidence is too perfect that we usually happen to experience whatever it is we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I’m hoping it will contradict one of your preconceptions.)
I don’t say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C … causes T, where T is the talk; the talk can still be about A. It just can’t be about Z, where Z is something which never appears in any chain leading to T.
I just now added the caveat “basic” because you have a good point about free will. (I assume you mean contracausal “free will”. I think calling that “free will” is a misnomer, but that’s off topic.) Using the basic concepts “cause”, “me”, “action”, and “thing” and combining these with logical connectives, someone can say “I caused my action and nothing caused me to cause my action” and they can label this complex concept “free will”. And that may have no referent, so such “free will” never causes anything. But the basic words that were used to define that term, do have referents, and do cause the basic words to be spoken. Similarly with “unicorn”, which is shorthand for (roughly) a “single horned horse-like animal”.
An eliminativist could hold that mental terms like “qualia” are referentless complex concepts, but an epiphenomenalist can’t.
Is there any appetite for trying to create a collective fox view of the future?
Model the world under various assumptions (energy consumption predictions + economic growth + limits to the earth’s energy dissipation + intelligence growth, etc.) and try to wrangle it into models that are combined together and updated collectively?