Open thread, July 31 - August 6, 2017
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
This is a very interesting part of an interview with Freeman Dyson where he talks about how computation could go on forever even if the universe faces a heat death scenario. https://www.youtube.com/watch?v=3qo4n2ZYP7Y
Even if a computation goes on forever, it doesn’t necessarily perform more than a certain finite amount of computation. And once we are below the Planck temperature scale, further cooling is useless for your heat-driven machines. Life stops.
I believe that there is a lot of computing down there in the coldness around absolute zero. But not an infinite amount.
I believe Dyson is saying there could indeed be an infinite amount. Here is a Wikipedia article about it https://en.wikipedia.org/wiki/Dyson%27s_eternal_intelligence and the article itself http://www.aleph.se/Trans/Global/Omega/dyson.txt
Yes. But there is another problem. When a super-civilization goes to sleep, so that the Universe can cool some more, it has to set up some alarm-clock mechanism to wake it up after some time. Which needs some energy. If they wire their alarm clock to a thermometer that wakes them up when it is cool enough, is that energy free? I don’t think so.
Well, I don’t believe it’s possible to postpone the end of all calculations indefinitely, but I still find this Dyson text fascinating and very relevant.
blather
Thomas’s comment seems quite sensible to me.
It seems to me that Dyson’s argument was that as temperature falls, so does the energy required for computing. So, the point in time when we run out of available energy to compute diverges. But, Thomas reasonably points out (I think—correct me if I am misrepresenting you Thomas) that as temperature falls and the energy used for computing falls, so does the speed of computation, and so the amount of computation that can be performed converges, even if we were to compute forever.
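To make the two sides concrete, here is a back-of-envelope sketch (my own gloss, assuming Landauer’s bound of about $kT\ln 2$ as the energy cost per irreversible operation; this is not necessarily Dyson’s exact model). With a fixed energy budget $E_{\text{total}}$, the number of affordable operations at temperature $T$ is

$$N_{\text{affordable}} \sim \frac{E_{\text{total}}}{kT\ln 2} \longrightarrow \infty \quad \text{as } T \to 0,$$

which is Dyson’s diverging-budget side. But if the rate of computation also falls with temperature, say $f(T) \propto T$, then the computation actually performed by time $t$ is

$$N_{\text{performed}}(t) = \int_0^t f\bigl(T(s)\bigr)\,ds,$$

which converges for fast cooling schedules such as $T(s) \propto e^{-s}$ and diverges only if $T$ falls slowly enough; Dyson’s hibernation strategy is essentially an attempt to stay on the diverging side.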
Also, isn’t Thomas correct that Planck’s constant puts an absolute minimum on the amount of energy required for computation?
These seem like perfectly reasonable responses to Dyson’s comments. What am I missing?
You understand me correctly in every way. If I am right, that’s another matter. I think I am.
Dyson opens up another interesting question with this. Is it better to survive forever with a finite subjective time T, or is it better to consume 2*T of experience in a finite amount of calendar time?
Isn’t 2*T obviously better? Maybe I’m missing something here...
We are on the same page here. But a lot of people want to survive as long as possible. Not as much as possible, but as long as possible.
I would guess that most people who want that simply haven’t considered the difference between “how much” and “how long”, and if convinced of the possibility of decoupling subjective and objective time would prefer longer-subjective to longer-objective when given the choice.
(Of course the experiences one may want to have will typically include interacting with other people, so “compressed” experience may be useful only if lots of other people are similarly compressing theirs.)
Well. If one (Omega, or someone like him) asked me to choose between 1000 years compressed into the next hour, or just 100 years uncompressed in real time from now on … I am not sure what I would say to him.
Again, the existence of other people complicates this (as it does so many other things). If I’m offered this deal right now and choose to have 1000 years of subjective experience compressed into the next hour and then die, then e.g. I never get to see my daughter grow up, I leave my wife a widow and my child an orphan, I never see any of my friends again, etc. It would be nice to have a thousand years of experiences, but it’s far from clear that the benefits outweigh the costs.
This doesn’t seem to apply in the case of, e.g., a whole civilization choosing whether or not to go digital, and it would apply differently if this sort of decision were commonplace.
you are missing the concept of blather
The definition of “blather” that I find is “talk long-windedly without making very much sense”, which does not sound like Thomas’s comment.
What definition are you using?
Against Phrasal Taxonomy Grammar, an essay about how any approach to grammar theory based on categorizing every phrase in terms of a discrete set of categories is doomed to fail.
I’m curious about your “system that doesn’t require a strict taxonomy”. Is that written up anywhere? Also, does your work have any relevance to how children should be taught grammar in school?
I haven’t written it up, though you can see my parser in action here.
One key concept in my system is the Theta Role and the associated rule: a phrase can only have one structure for each role (subject, object, determiner, etc.).
I don’t have much to say about teaching methods, but I will say that if you’re going to teach English grammar, you should know the correct grammatical concepts that actually determine English grammar. My research is an attempt to find the correct concepts. There are some things that I’m confident about and some areas where the system needs work.
One very important aspect of English grammar is argument structure. Different verbs characteristically can and cannot take various types and combinations of arguments, such as direct objects, indirect objects, infinitive complements, and sentential complements. For example, the word “persuade” takes a sentential (that-) complement, but only when also combined with a direct object (“I will persuade [him] that the world is flat” is incorrect without the direct object). In contrast, the verb “know” can take either a direct object or a that-complement, but not both. To speak English fluently, you need to memorize all these combinations, but before you memorize them, you need to know that the concept exists.
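As a toy illustration only (my own sketch, not Daniel’s system; frame names like “dobj” and “that_comp” are invented for the example), such facts are naturally encoded as a subcategorization lexicon that licenses argument combinations per verb:

```python
# Toy subcategorization lexicon: each verb maps to the set of argument
# combinations it licenses. Frame names here are invented for illustration.
ARG_FRAMES = {
    # "persuade" takes a that-complement only together with a direct object.
    "persuade": {("dobj",), ("dobj", "that_comp"), ("dobj", "inf_comp")},
    # "know" takes a direct object OR a that-complement, but not both.
    "know": {("dobj",), ("that_comp",)},
}

def is_licensed(verb: str, args: tuple) -> bool:
    """Check whether a combination of arguments is licensed for a verb."""
    frames = {tuple(sorted(f)) for f in ARG_FRAMES.get(verb, set())}
    return tuple(sorted(args)) in frames

# "I will persuade that the world is flat" -> rejected (no direct object).
assert not is_licensed("persuade", ("that_comp",))
# "I will persuade him that the world is flat" -> accepted.
assert is_licensed("persuade", ("dobj", "that_comp"))
# "I know him that the world is flat" -> rejected (both at once).
assert not is_licensed("know", ("dobj", "that_comp"))
```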
Daniel, I’m curious too. What do you think about Fluid Construction Grammar? Can it be a good theory of language?
What would be the physical/neurological mechanism powering ego depletion, assuming it existed? What stops us from doing hard mental work all the time? Is it even imaginable to, say, study every waking hour for a long period of time, without ever having an evening of youtube videos to relax? I’m not asking what the psychology of willpower is, but rather whether there’s a neurology of willpower.
And beyond ego depletion, there’s a very popular model of willpower where the brain is seen as a battery, drained when hard work is being done and recharged when relaxing. I see this as a deceptive intuition pump, since it’s easy to imagine and yet it doesn’t explain much. What, physically, is the energy being used up?
Surely it isn’t actual physical energy (in terms of calories), since I recall that the energy consumption of the brain isn’t significantly increased while studying. In addition, physical energy is abundant nowadays because food is plentiful. If a lack of physical energy were the issue, we could just keep going by eating more sugar.
The reason we can’t work out for 12 hours straight is understood physiologically. Admittedly, I don’t understand it very well myself, but I’m sure an expert could provide reasons related to muscles being strained, energy being depleted, and so on. (Perhaps I would understand the mental analogue better if I understood this.) I’m looking for a similar mechanism in the brain.
To better explain what I’m talking about, and what kind of answer would be satisfying, I’ll give you a couple of fake explanations.
Hard mental work sees higher electrical activity in the brain. If this is kept up for too long, neurons get physically damaged due to their sensitivity. To prevent damage, brains evolved a feeling of tiredness when the brain is overused.
There is a resource (e.g. dopamine) that is literally depleted during taxing brain operation and regenerated when resting.
There could also be a higher-level explanation. The inspiration for this came from an old text by Yudkowsky. (I didn’t seriously look at those explanations as an answer to my problem, for reasons.) I won’t quote the source, since I think that post was supposed to be deleted, but the excerpt gave a good intuitive picture.
Let me speculate on the answer.
1) There is no neurological limitation. The hardware could, theoretically, run demanding operations indefinitely. But theories like ego depletion are deceptive memes that spread throughout culture, and so we came to accept a nonexistent limitation. Our belief in the myth is so strong, it might as well be true. The same mechanism as learned helplessness. Needless to say, this could potentially be overcome.
2) There is no neurological limitation, but otherwise useful heuristics stop us from kicking it into higher gear. All of the psychological explanations for akrasia, the kind that are discussed all the time here, come into play. For example, youtube videos provide a tiny, but steady and plentiful stimulus to the reward system, unlike programming, which can have a much higher payout, but one that’s inconsistent, unreliable and coupled with frustration. And so, due to a faulty decision making procedure, the brain never gets to the point where it works to its fullest potential. The decision making procedure is otherwise fast and correct enough, thus mostly useful, so simply removing it isn’t possible. The same mechanism as cognitive biases. It might be similar to how we cannot do arithmetic effortlessly even though the hardware is probably there.
3) There is an in-built neurological limitation because of an evolutionary advantage. Now, defining this evolutionary advantage can lead to the original problem. For example, it cannot be due to minimizing energy consumption, as discussed above. But other explanations don’t run into this problem. Laziness can often lead to more efficient solutions, which is beneficial, so we evolved ego depletion to promote it, and now we’re stuck with it. Of course, all the pitfalls customary to evolutionary psychology apply, so I won’t go in depth about this.
4) There is a neurological limitation deeply related to the way the brain works. Kind of like how cars can only go so fast, and it’s not good for them if you push them to maximum speed all the time. At first glance, the brain is propagating charge through neurons all the same, regardless of how tiring an action it’s accomplishing. But one could imagine non-trivial complexities in how the brain functions that account for this particular limitation. I dare not speculate further, since I know so little about neurology.
I don’t think any of those explanations is true, but writing out my alternative theory and doing it full justice is a longer project.
I think part of the problem is that “hard mental work” is a category that’s very far from a meaningful category on the physical/neurological level. Bad ontology leads to bad problem modeling and understanding.
Imo: legislative gridlock of the congress inside your head (i.e. a software issue). Unclear if it’s a problem or not.
The question about willpower depletion is different from the question about mental fatigue and you tend to conflate the two. Which one do you mean?
I have a hazy memory that there’s some discussion of exactly this in Keith Stanovich’s book “What intelligence tests miss”.
Unfortunately, my memory is hazy enough that I don’t trust it to say accurately (or even semi-accurately) what he said about it :-). So this is useful only to the following extent: if Sandi, or someone else interested in Sandi’s question, has a copy of Stanovich’s book or was considering reading it anyway, then it might be worth a look.
I’ve decided to create a website/community that will focus on improving the autonomy of humans.
https://improvingautonomy.wordpress.com
The first goal is to explore how to do intelligence augmentation of humans in a safe way (using population dynamics, etc.).
I think that this is both a more likely development path than singleton AIs and also a more desirable one if done well.
Still a work in progress. I’m putting it here so that if people have good arguments that this path should not be developed at all, I would like to hear them before I get too embroiled in it.
By “humans” you mean “individuals”, right?
I expect that one of the arguments contra that you will see (I do not subscribe to it) is that highly-capable individuals are just too dangerous. Basically, power can be used not only to separate oneself from the society you don’t like, but also to hurt the society you don’t like. The contemporary technological society is fragile and a competent malcontent can do terrible damage to it.
Individuals, yup. That is the failure mode to guard against.
I want to ask if it is possible to get a safe advanced society with things like mutual inspection for defection and creating technology-sharing groups with other pro-social people. Such that anti-social people do not get a decisive strategic advantage (or much advantage at all).
A few issues immediately come to mind.
What’s “pro-social” and “anti-social”? In particular, what if you’re pro-social, but pro-a-different-social? Consider your standard revolutionaries of different kinds.
Pro- and anti-social are not immutable characteristics. People change.
If access to technology/power is going to be gated by conformity, the whole autonomy premise goes out of the window right away.
Pro-social is not trying to take over the entire world, or threatening to do so. It is agreeing to mainly non-violent competition. Anti-social is genocide/pogroms, biocide, mind crimes, bio/nano warfare.
I’d rather no gating, but some gating might be required at different times.
Heh. If you think there’s any such thing as “non-violent competition”, you’re not seeing through some levels of abstraction. All resource allocation is violent or has the threat of violence behind it.
Poor competitors fail to reproduce, and that is the ultimate violence.
If the competition stops a person reproducing, then sure, it is a little violent. If it stops an idea reproducing, then I am not so sure I care about stopping all violence.
Failure to reproduce is not the ultimate violence. Killing someone and killing everyone vaguely related to them (including the bacteria that share a genetic code), destroying their culture and all its traces is far more violent.
Ideas have no agency. Competition between agents for control or use of resources involves violence. I probably should back up a step and say “denial of goals is the ultimate violence”. If you have a different definition (preferably something more complete than “no hitting”), please share it.
There was a reason I mentioned revolutionaries.
Let’s take something old, say the French Revolution. Which side is pro-social? Both, neither?
Let’s take a hypothetical, say there is a group in Iran which calls the current regime “medieval theocracy” and wants to change the society to be considerably more Western-style. Are they pro-social?
A summer problem.
I guess the important thing to realise is that the size of atoms is irrelevant to the problem. If we considered two atoms joined together to be a new “atom” then they would be twice as heavy, so the forces would be four times as strong, but there would be only half as many atoms, so there would be four times fewer pairs.
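Making that counting explicit (my notation, holding typical separations fixed): group atoms of mass $m$ into clusters of $k$, and

$$F_{\text{pair}} \propto (km)^2, \qquad \#\text{pairs} \approx \binom{n/k}{2} \approx \frac{n^2}{2k^2}, \qquad F_{\text{total}} \propto (km)^2 \cdot \frac{n^2}{2k^2} = \frac{(nm)^2}{2},$$

which depends only on the total mass $nm$ and not on the grouping factor $k$.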
So the answer is just the integral, as r and r’ range over the interior of the earth, of G ρ(r) ρ(r’)/|r-r’|^2, where ρ(r) is the density. We can assume constant density, but I still can’t be bothered to do the integral.
The earth has mass M = 5.97*10^24 kg and radius R = 6.37*10^6 m, G = 6.674*10^-11 m^3 kg^-1 s^-2, and we want an answer in Newtons = m kg s^-2. So by dimensional analysis, the answer is about G M^2/R^2 = 5.86*10^25 N.
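A quick numeric check of that arithmetic (a sketch, using the constants quoted above):

```python
# Back-of-the-envelope check of the dimensional-analysis estimate.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24     # Earth's mass, kg
R = 6.37e6      # Earth's radius, m

estimate = G * M**2 / R**2
print(f"G*M^2/R^2 = {estimate:.2e} N")   # ~5.86e25 N

# Note that G*M^2/R^2 = M * (G*M/R^2) = M*g: the estimate equals the
# weight the Earth would have in its own surface gravity.
print(f"M*g       = {M * 9.81:.2e} N")
```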
You estimate around 1 Earth weight on Earth’s surface.
That doesn’t seem right, though? Imagine a one-dimensional version of the problem. If a stick of length 1 is divided into n atoms weighing 1/n each, then each pair of adjacent atoms is a distance 1/n apart, so the force between them is G(1/n)(1/n)/(1/n)^2 = G, a constant. Since there are n-1 such pairs, the total force grows at least linearly with n. And it gets even worse if some atoms are disproportionately closer to others (in molecules).
Cool insight. We’ll just pretend constant density of 3M/(4πR^3).
This kind of integral shows up all the time in E and M, so I’ll give it a shot to keep in practice.
You simplify it by using the law of cosines, to turn the vector subtraction 1/|r-r’|^2 into 1/(|r|^2+|r’|^2+2|r||r’|cos(θ)). And this looks like you still have to worry about integrating two things, but actually you can just call r’ due north during the integral over r without loss of generality.
So now we need to integrate 1/(r^2+|r’|^2+2r|r’|cos(θ)) r^2 sin(θ) dr dφ dθ. First take your free 2π from φ. Cosine is the derivative of sine, so substitution makes it obvious that the θ integral gives you a log of cosine. So now we integrate 2πr (ln(r^2+|r’|^2+2r|r’|) - ln(r^2+|r’|^2-2r|r’|)) / 2|r’| dr from 0 to R. Which mathematica says is some nasty inverse-tangent-containing thing.
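For the record, the substitution $u = \cos\theta$ makes that step explicit:

$$\int_0^{\pi} \frac{\sin\theta\,d\theta}{r^2 + |r'|^2 + 2r|r'|\cos\theta} = \frac{1}{2r|r'|}\Bigl[\ln\bigl(r^2 + |r'|^2 + 2r|r'|\bigr) - \ln\bigl(r^2 + |r'|^2 - 2r|r'|\bigr)\Bigr],$$

and combining this with the $r^2$ from the measure and the $2\pi$ from $\varphi$ gives exactly the $dr$ integrand above.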
Okay, maybe I don’t actually want to do this integral that much :P
EDIT: On second thoughts most of the following is bullshit. In particular, the answer clearly can’t depend logarithmically on R.
I had a long train journey today so I did the integral! And it’s more interesting than I expected because it diverges! I got the answer (GM^2/R^2)(9/4)(log(2)-43/12-log(0)). Of course I might have made a numerical mistake somewhere, in particular the number 43/12 looks a bit strange. But the interesting bit is the log(0). The divergence arises because we’ve modelled matter as a continuum, with parts of it getting arbitrarily close to other parts.
To get an exact answer we would have to look at how atoms are actually arranged in matter, but we can get a rough answer by replacing the 0 in log(0) by r_min/R, where r_min is the average distance between atoms. In most molecules the bond spacing is somewhere around 100 pm. So r_min ~ 10^-10 m, and R = 6.37*10^6 m, so log(r_min/R) ~ −38.7, which is more significant than the log(2)-43/12 = −2.89. So we can say that the total is about 38.7*9/4*GM^2/R^2, which is 87GM^2/R^2 or 5.1*10^27 N.
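Plugging in numbers (a sketch that just takes the formula above at face value, EDIT caveat and all, with the same constants as before but keeping the log(2)-43/12 term):

```python
import math

G, M, R = 6.674e-11, 5.97e24, 6.37e6   # SI units, as above
r_min = 1e-10                          # typical interatomic spacing, m

base = G * M**2 / R**2                 # ~5.86e25 N
log_term = math.log(R / r_min)         # = -log(r_min/R) ~ 38.7
total = (9 / 4) * (math.log(2) - 43 / 12 + log_term) * base
print(f"ln(R/r_min) = {log_term:.1f}")
print(f"total      ~ {total:.2e} N")   # ~4.7e27 N, same ballpark as 5.1e27
```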
[But after working this out I suddenly got worried that some atoms get even closer than that. Maybe when a cosmic ray hits the earth it does so with such energy that it gets really, really close to another nucleus, and then the gravitational force between them dominates the rest of the planet put together. Well, the strongest cosmic ray on record is the Oh-My-God particle with an energy of 48 J. So it would have produced a spacing of about ħc/(48 J), which is about 6.6*10^-28 m. But the mass of a proton is about 10^-27 kg, so Gm^2/r^2 is about G, and this isn’t as significant as I feared.]
Very nice, and the result is about a hundred Earth weights now. I wonder if every atom inside Earth feels the gravity of every other atom at every moment. (I think not. Which is a heresy, so please don’t pay any attention to that.)
If we want a measure of rationality that’s orthogonal to intelligence, maybe we could try testing the ability to overcome motivated reasoning? Set up a conflict between emotion and reason, and see how the person reacts. The marshmallow test is an example of that. Are there other such tests, preferably ones that would work on adults? Which emotions would be easiest?
It seems like it would be tricky to distinguish “good at reasoning even in the face of emotional distractions” from “not experiencing strong emotions”. The former is clearly good; the latter arguably bad.
I’m not sure how confident I am that the paragraph above makes sense. How does one measure the strength of an emotion, if not via its effects on how the person feeling it acts? But it seems like there’s a useful distinction to be made here. Perhaps something like this: say that an emotion is strong if, in the absence of deliberate effort, it has large effects on behaviour; then you want to (1) feel emotions that have a large effect on you if you let them but (2) be able to reduce those effects to almost nothing when you choose to. That is, you want a large dynamic range.
Among other things I’d like to test the ability to abandon motivated beliefs, like religion. Yes, it might be due to high intelligence or weak emotions. But if we want a numerical measure that’s orthogonal to intelligence, we should probably treat these the same.
So you want something like intellectual lability? I have strong doubts it will be uncorrelated to intelligence.
I’m guessing you’re aiming at “strongly-supported views held strongly, weakly-supported views held weakly”, but stupid people don’t do that.
The marshmallow and Asch experiments aren’t testing anything like intellectual lability. They are testing if you can do the reasonable thing despite emotions and biases. That’s a big part of rationality and that’s what I’d like to test. Reasoning yourself out of religion is an advanced use of the same skill.
The marshmallow experiment tests several things, among them time preference. Asch tests, also among other things, how much you value fitting well into society. It’s not all that simple.
May I then suggest calling this ability “vulcanness” and measure it in millispocks?
And how that “ability to do the reasonable thing” is going to be orthogonal to intelligence?
When asked about their preferences verbally, most people wouldn’t endorse the extreme time discounting that would justify eating the marshmallow right away, and wouldn’t endorse killing a test subject to please the experimenter (the Milgram setup). So I don’t think these behaviors can be viewed as rational.
You are aware of the difference between expressed preferences and revealed preferences, yes? It doesn’t seem to me that sticking with expressed preferences has much to do with rationality.
I prefer to work under the assumption that some human actions are irrational, not just revealed preferences. Mostly because “revealed preferences” feels like a curiosity stopper, and researching specific kinds of irrationality (biases) is so fruitful in comparison.
Huh? Both expressed and revealed preferences might or might not be rational. There’s nothing about revealed preferences which makes them irrational by default.
Nobody’s telling you to stop there. Asking, for example, “why does this person have these preferences and is there a reason they are not explicit?” allows you to continue.
Sexual attraction...
I am imagining how to set up the experiment...
“Sir, I will leave you alone in this room now, with this naked supermodel. She is willing to do anything you want. However, if you can wait for 20 minutes without touching her—or yourself! -- I will bring you one more.”
I don’t know whether sexual satisfaction scales linearly, but going from 1 to 2 seems about right.
Yeah. Fear might be even easier. But I’m not sure how to connect it with motivated reasoning.
Now that I think of it, Asch’s conformity experiment might be another example of what I want (if conformity is irrational). It seems like a fruitful direction.
The gom jabbar test.
Gom jabbar might be more about stubbornness than rationality :-)
The point of the test was to distinguish between a human and an animal :-/
This article argues that marshmallow tests are mostly just measuring how much a kid wants to please adults or do what adults expect of them. It seems likely that any single test of rationality someone could come up with will be very noisy, so to get an idea of how rational someone is we’ll need to do lots of tests, or (since that’s very costly) settle for something like a questionnaire.
Yeah, I guess a rationality test needs to have many questions, like an IQ test. It will be tricky to make each question emotionally involving, but hey, I just started thinking about it.
Why do you want a measure of rationality that’s orthogonal to (measures of) intelligence? Whatever this reason is will likely lead you to a better phrasing of what aspects of behavior/capability you want to test for.
Keith Stanovich worked on creating a test for rationality: https://mitpress.mit.edu/books/rationality-quotient
His test doesn’t involve emotions.
I know!
Please recommend me more books in the vein of ‘Metaphors We Live By’ and ‘Surfaces and Essences’.
Wilson’s Six views of embodied cognition gives a broad overview of embodied cognition in 12 pages and has a few good references. https://people.ucsc.edu/~mlwilson/publications/Embodied_Cog_PBR.pdf
I decided to read Holyoak et al.’s Mental Leaps: Analogy in Creative Thought when Surfaces and Essences started feeling drawn-out.
80,000 Hours recently ranked “Judgement and decision making” as the most employable skill.
I think they’ve simplified too much and ended up with possibly harmful conclusions. To illustrate one problem with their methodology, imagine that they had looked at medieval England instead. Their methods would have found kings and nobles having highest pay and satisfaction, and judgment heavily associated with those jobs. The conclusion? “Peasants, practiceth thy judgment!”
What do you think? If there were a twin study where one twin pursued programming and the other judgment, which one would end up with higher satisfaction and pay? If you think it’s not the programmer, why?
Also germane is that if a high-schooler asked me how to practice judgement and decision making, I’m not entirely sure how I’d suggest learning that. (Maybe play lots of games like poker or Magic? Read the sequences? Be a treasurer in high school clubs?) If someone asked how to practice programming, I can think of lots of ways to practice that and get better.
Confounder: I make my living by programming and suspending my judgement and decision making.
Good judgement comes from experience.
Experience comes from bad judgement.
Experience alone might not be enough, it’s good when the experience has feedback loops.
I’m not sure what pursuing “judgement and decision making” would look like in practice.
We can’t really well practice or even measure most of the recommended skills, such as judgment, critical thinking, time management, monitoring performance, complex problem solving, active learning. This is one of the reasons why I disagree with the article, and think its conclusions are not useful.
They’re a bit like saying that high intelligence is associated with better pay and job satisfaction.
I think “can’t practice” is a bit strong. CFAR would be a practice that trains a bunch of those skills. The problem is that there’s no 3-year CFAR bachelor’s program where the student does that kind of training all the time; CFAR only runs 4-day workshops.
I do not mean that it is impossible to practice, just that it’s not a well-defined skill you can measurably improve, like programming. I believe it’s not a skill you can realistically practice in order to improve your employability.
I have been following CFAR from its beginning. If anything, the existence and current state of CFAR demonstrate how judgment is a difficult skill to practice, and difficult to measure. There’s no evidence of CFAR’s effectiveness available on their website (or it is well hidden).