Open Thread June 2010, Part 3
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.
How to Keep Someone with You Forever.
This is a description of “sick systems”—jobs and relationships which destructively take over people’s lives.
I’m posting it here partly because it may be of use (systems like that are fairly common and can take a while to recognize) and partly because it leads to some general questions.
One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent—and it can take a very long time to recognize the contradiction. It’s plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what’s being done to them, but is there any more to it than that?
One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children—the parents are unlikely to have a substantial network of helpers, they aren’t sharing a bed with the baby (leading to more serious sleep deprivation), and there’s a belief that raising children is almost impossible to do well enough.
Also, it’s interesting that people keep spontaneously inventing sick systems. It isn’t as though there’s a manual. I’m guessing that one of the drivers is feeling uncomfortable at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers for piling the stress on.
On the other hand, there’s a commenter who reports being treated better by her family after she disconnected from the craziness.
Interesting. I suspect that sick systems are actually highly competitively fit, and while people who opt out of them may be happier, those people will propagate themselves less, and will therefore be overwhelmed by Azathothian forces.
Is there any way to combat Azathoth aside from forming a singleton?
Why do you think sick systems are highly competitively fit? They seem to get a lot of work out of people, but also waste a great deal of it.
If your hypothesis is that sick systems must be competitively fit because there are a great many of them, I think stronger evidence is needed.
As long as the system extracts and uses more work than its equivalent healthy system—after wastage—then it will outperform it. It doesn’t matter if the system burns through employees every few years; there are plenty of other employees to burn up.
I would think sick systems have less good judgment than healthy systems—they don’t just burn up employees; management is also less likely to get information about any mistakes it’s making.
On the other hand, sick systems do at least persist for quite a while. I’m guessing that they coast on the conscientiousness and other virtues of the employees. It’s conceivable that some fraction of the excess work isn’t wasted.
Message from Warren Buffett to other rich Americans
http://money.cnn.com/2010/06/15/news/newsmakers/Warren_Buffett_Pledge_Letter.fortune/index.htm?postversion=2010061608
I find the rationality of the super-rich specifically interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to ‘get there’. Nevertheless, it seems many of them do not make the same deductions as Buffett, which seem pretty clear:
In this sense they are sort of ‘natural experiments’ of cognitive biases at work.
Wow. That is some seriously clear thinking. Too bad Mr. Buffett isn’t here to get the upvote himself, so I upvoted you instead. ;-)
I think in Buffett’s case this is not an accident; I venture to claim that his wealth is a result of fortune combining with an unusual dose of rationality (even if he calls it ‘genes’). My strongest piece of evidence is that his business partner for the past 40 years, Charlie Munger, is one of the very early outspoken adopters of the good parts of modern psychology, such as the ideas of Cialdini and Tversky/Kahneman and decision-making under uncertainty.
http://vinvesting.com/docs/munger/human_misjudgement.html
Oh wow, I think I have a new role model. Any chance we can get these two (Buffett and Munger) to open a rationality dojo? (Who knows, they might be impressed, given that most people ask them for wealth advice instead...)
I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain’s post titled “That Other Kind of Status.” I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I’m leaving it up to keep the responses in context).
I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.
I’ve been a lurker in this community for three months and I’ve found that it’s the smartest community that I’ve ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having “arrived home.”
At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they’re sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.
I don’t want to get involved in a debate about this point now (although I’d be happy to elaborate and give my thoughts in detail if there’s interest).
What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I’ve read (several hundred assorted threads), I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you’re a part of these groups (*).
My drawing attention to this question is not out of malice toward any of you—as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I’ve ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic—we’re all only human.
But I am concerned that I haven’t seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I’ve seen is Yvain’s post titled “Extreme Rationality: It’s Not That Great”. Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).
Any thoughts? I’d also be interested in any relevant references.
[Edited in response to cupholder’s comment, deleted extraneous words.]
You know what… I’m going to come right out and say it.
A lot of people need their clergy. And after a decade of denial, I’m finally willing to admit it—I am one of those people.
The vast majority of people do not give their 10% tithe to their church because some rule in some “holy” book demands it. They don’t do it because they want a reward in heaven, or to avoid hell, or because their utility function assigns all such donated dollars 1.34 points of utility up to 10% of gross income.
They do it because they want their priests to kick more ass than the OTHER group’s priests. OUR priests have more money, more power, and more intellect than YOUR sorry-ass excuse for a holy-man. “My priest bad, cures cancer and mends bones; your priest weak, tell your priest to go home!”
So when I give money to the SIAI (or FHI or similar causes) I don’t do it because I necessarily think it’s the best/most important possible use of my fungible resources. I do it because I believe Eliezer & Co are the most like-me actors out there who can influence the future. I do it because of all the people out there with the ability to alter the flow of future events, their utility function is the closest to my own, and I don’t have the time/energy/talent to pursue my own interests directly. I want the future to look more like me, but I also want enough excess time/money to get hammered on the weekends while holding down an easy accounting job.
In short—I want to be able to just give a portion of my income to people I trust to be enough like me that they will further my goals simply by pursuing their own interests. Which is to say: I want to support my priests.
And my priests are Eliezer Yudkowsky and the SIAI fellows. I don’t believe they leech off of me; I feel they earn every bit of respect and funding they get. But that’s beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.
The Vatican isn’t made out of gold because the Pope is greedy; it’s made out of gold because the peasants demand that it be so. And frankly, I demand that the Vatican be put to fucking shame when it compares itself to us.
Standard Disclaimer, but really… some enthusiasm is needed to fight Azathoth.
Voted up for honesty.
Umm, while on some visceral level I can relate to this sentiment I still find it hugely inappropriate. Reality --> enthusiasm, not in reverse order, and I think not even a slight deviation from that pattern is permissible.
Comment on markup: I saw the first version of your comment, where you were using “(*)” as a textual marker, and I see you’re now using “#” because the asterisks were messing with the markup. You should be able to get the “(*)” marker to work by putting a backslash before the asterisk (and I preferred the “(*)” indicator because that’s more easily recognized as a footnote-style marker).
Feels weird to post an entire paragraph just to nitpick someone’s markup, so here’s an actual comment!
Let me try and rephrase this in a way that might be more testable/easier to think about. It sounds like the question here is what is causing the correlation between being a member of LW/SIAI and agreeing with LW/SIAI that future AI is one of the most important things to worry about. There are several possible causes:
1. group membership causes group agreement (agreement with the group)
2. group agreement causes group membership
3. group membership and group agreement have a common cause (or, more generally, there’s a network of causal factors that connect group membership with group agreement)
4. a mix of the above
And we want to know whether #1 is strong enough that we’re drifting towards a cult attractor or some other groupthink attractor.
I’m not instantly sure how to answer this, but I thought it might help to rephrase this more explicitly in terms of causal inference.
I’m not sure that your rephrasing accurately captures what I was trying to get at. In particular, strictly speaking (*) doesn’t require that one be a part of a group, although being part of a group often plays a role in enabling (*).
Also, I’m not only interested in possible irrational causes for LW/SIAI members’ belief that future AI is one of the most important things to worry about, but also possible irrational causes for each of:
(1) SIAI members’ belief that donating to SIAI in particular is the most leveraged way to reduce existential risks. Note that it’s possible to devote one’s life to a project without believing that it’s the best project for additional funding—see GiveWell’s blog posts on Room For More Funding:
For reference, PeerInfinity says
(2) The belief that refining the art of human rationality is very important.
On (2), I basically agree with Yvain’s post Extreme Rationality: It’s Not That Great.
My own take is that the Less Wrong community has greatly enriched the lives of some of its members by allowing them the opportunity to connect with people similar to themselves, and that their very positive feelings connected with their Less Wrong experience have led some of them to overrate the overall importance of Less Wrong’s stated mission. I can write more about this if there’s interest.
Thank you for clarifying. I don’t think I really have an opinion on this, but I figure it’s good to have someone bring it up as a potential issue.
I’m interested. I’ve been thinking about this issue myself for a bit, and something like an ‘internal review’ would greatly help in bringing any potential biases the community holds to light.
I’m not aware of anyone here who would claim that LW is one of the most important things in the world right now, but I think a lot of people here would agree that improving human reasoning is important if we can have those improvements apply to lots of different people across many different fields.
There is a definite group of people here who think that SIAI is really important. If one thinks that a near Singularity is a likely event then this attitude makes some sense. It makes a lot of sense if you assign a high probability to a Singularity in the near future and also assign a high probability to the possibility that many Singularitarians either have no idea what they are doing or are dangerously wrong. I agree with you that the SIAI is not that important. In particular, I think that a Singularity is not a likely event for the foreseeable future, although I agree with the general consensus here that a large fraction of Singularity proponents are extremely wrong at multiple levels.
Keep in mind that for any organization or goal, the people you hear the most about it are the people who think that it is important. That’s the same reason that a lot of the general public thinks that tokamak fusion reactors will be practical in the next fifty years: The physicists and engineers who think that are going to loudly push for funding. The ones who don’t are going to generally just go and do something else. Thus, in any given setting it can be difficult to estimate the general communal attitude towards something since the strongest views will be the views that are most apparent.
I don’t think intelligence explosion is imminent either. But I believe it’s certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with a good understanding of preference theory at hand, when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction; there is a very poor level of awareness of the problem, and poor intellectual standards of discussion where surface awareness is present.
Either right now, or 50, or 100 years from now, a serious effort has to be taken on, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about, starting 10 years too late to finish in time 100 years from now.
Vladimir, I agree with you that people should be thinking about intelligence explosion, that there’s a very poor level of awareness of the problem, and that the intellectual standards for discourse about this problem in the general public are poor.
I have not been convinced of, but am open toward, the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy “paperclip maximizer vs. Friendly AI” seems like a false dichotomy—I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.
SIAI seems to have focused on the existential risk of “unfriendly intelligence explosion” and it’s not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.
Mainly Complexity of value. There is no way for human values to magically jump inside the AI, so if it’s not specifically created to reflect them, it won’t have them, and whatever the AI ends up with won’t come close to human values, because human values are too complex to be resembled by any given structure that happens to be formed in the AI.
The more AI’s preference diverges from ours, the more we lose, and this loss is on astronomic scale (even if preference diverges relatively little). The falloff with imperfect reflection of values might be so sharp that any ad-hoc solution turns the future worthless. Or maybe not, with certain classes of values that contain a component of sympathy that reflects values perfectly while giving them smaller weight in the overall game, but then we’d want to technically understand this “sympathy” to have any confidence in the outcome.
This depends on something like aggregative utilitarianism. If additional resources have diminishing marginal value in fulfilling human aims, then getting a little slice of the universe (in the course of negotiating terms of surrender with the inhuman AI, if it can make credible commitments, or because we serve as acausal bargaining chips with other civilizations elsewhere in the universe) may be enough. Is getting 100% of the lightcone a hundred times better than 1%?
I think yes, if we take into account that the more of the lightcone we (our FAI) get, the more trading opportunities we would have with UFAI in other possible worlds. Diminishing marginal value shouldn’t apply across possible worlds, because otherwise it would imply gross violations of expected utility maximization.
Also, I suspect that there are possible worlds with much greater resources than our universe (perhaps with physics that allow hypercomputation, or just many orders of magnitude more total exploitable resources), and some of them would have potential trading partners who are willing to give us a small share of their world for a large share of ours. We may eventually achieve most of our value from trading with them. But of course such trade wouldn’t be possible if we didn’t have something to trade with!
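The claim above—that diminishing marginal value applies within a world but not across possible worlds—can be sketched numerically. This is a minimal illustration only: the logarithmic within-world utility and all numbers are hypothetical assumptions chosen for concreteness, not anything from the discussion.

```python
import math

def utility_of_share(share):
    """Concave within-world utility: diminishing marginal value of resources
    (hypothetical log form, chosen only for illustration)."""
    return math.log1p(99 * share)  # u(0) = 0, u(1) = log(100)

# Within a world: 100% of the lightcone vs. a 1% share obtained via trade.
u_full = utility_of_share(1.0)
u_small = utility_of_share(0.01)
print(round(u_full / u_small, 1))  # 6.7 -- 100% is far less than 100x better than 1%

# Across worlds: expected utility is a probability-weighted sum, so it is
# linear in the probabilities of worlds -- applying "diminishing returns"
# between worlds would violate expected utility maximization.
eu = 0.5 * u_small + 0.5 * u_small
print(eu == u_small)  # True: two half-probability small-share worlds are worth
                      # exactly one certain small-share world
```

This is why a concave utility function makes a guaranteed 1% slice look surprisingly good within a world, while trade across possible worlds is still evaluated linearly in probabilities.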
Interesting. This suggests thinking about FAI not as using its control to produce terminal value in its own world, but as using its control to buy as much terminal value as it can, in various world-programs. Since it doesn’t matter where the value is produced, most of the value doesn’t have to be produced in the possible worlds with FAIs in them. Indeed, it sounds unlikely that specifically the FAI worlds will be optimal for FAI-value optimization. FAIs (and the worlds they control) act as instrumental leverage, a way of controlling the global mathematical universe into having more value for our preference.
Thus, more FAIs means stronger control over the mathematical universe, while more UFAIs mean that the mathematical universe is richer, and so the FAIs can get more value out of it with the same control. The metaphors of trade and comparative advantage start applying again, not on the naive level of cohabitation on the same world, but on the level of the global ontology. Mathematics grants you total control over your domain, so that your “atoms” can’t be reused for something else by another stronger agent, and so you do benefit from most superintelligent “aliens”.
Yes, assuming that trading across possible worlds can be done in the first place. One thing that concerns me is the combinatorial explosion of potential trading partners. How do they manage to “find” each other?
It’s the same combinatorial explosion as with the future possible worlds. Even though you can’t locate individual valuable future outcomes (through certain instrumental sequences of exact events), you can still make decisions about your actions leading to certain consequences “in bulk”, and I expect the trade between possible worlds can be described similarly (after all, it does work on exactly the same decision-making algorithm). Thus, you usually won’t know who you are trading with, exactly, but can estimate on net that your actions are in the right direction.
Isn’t the set of future worlds with high measure a lot smaller?
I currently agree it’s a bad analogy and I no longer endorse the position that global acausal trade is probably feasible, although its theoretical possibility seems to be a stable conclusion.
Robin Hanson would be so pleased that it turns out economics is the fundamental law of the entire ensemble universe.
There are two distinct issues here: (1) how high would a human with original preference value a universe which only gives a small weight to their preference, and (2) how likely is the changed preference to give any weight whatsoever to the original preference, in other words to produce a universe to any extent valuable to the original preference, even if original preference values universes only weakly optimized in its direction.
Moving to a different preference is different from lowering weight of the original preference. A slightly changed (formal definition of) preference may put no weight at all on the preceding preference. The optimal outcome according to the modified preference can thus be essentially moral noise, paperclips, to the original preference. Giving a small slice of the universe, on the other hand, is what you get out of aggregation of preference, and a changed preference doesn’t necessarily have a form of aggregation that includes original preference. (On the other hand, there is a hope that human-like preferences include sympathy, which does make them aggregate preferences of other persons with some weight.)
We should assign some substantial probability to getting some weighting of our preference (from bargaining with transparency, acausal trade, altruistic brain emulations, etc). If a moderate weighting of our preferences gets most of the potential utility, then the expected utility of inhuman AIs getting powerful won’t be astronomically less than the expected utility of, e.g. a ‘Friendly AI’ acting on idealized human preferences.
Game-theoretic considerations are only relevant when you have non-trivial control, not when your atoms are used for something else. If singleton’s preference gives some weight to your preference, this is a case of having control directly through the singleton’s preference, but the origin of this control is not game-theoretic. If the singleton’s preference has sympathy for your preference, your explicit instantiation in the world doesn’t need to have any control, in order to win through the implicit control via singleton’s preference.
Game-theoretic aggregation, on the other hand, doesn’t work by influence on other agent’s preference. You only get your slice of the world because you already control it. Another agent may perform trade, but this is trade of control, rearranging what specifically each of you controls, without changing your preferences.
I assume that control will be winner-takes-all, so preferences of other agents existing at the time only matter if the winner’s preference directly pays to their preferences any attention, but not if they had some limited control from the start.
My point is that inhuman AI may give no weight to our preference, while FAI may give at least some weight to everyone’s preference. Game-theoretic trade won’t matter here because agents other than the singleton have no control to bargain with. FAI gives weight to other preferences not because of trade, but by construction from the start, even if people it gives weight to don’t exist at all (FAI giving them weight in optimization might cause them to appear, or a better event at least as good from their perspective).
This isn’t obviously the most natural way to describe a scenario in which an AI thinks it has a 90% chance of winning a conflict with humanity, but also has the ability to jointly create (with humanity) agents to enforce an agreement (and can do this quickly enough to be relevant), so cuts a deal splitting up the resources of the light cone at a 9:1 ratio.
Given that there are plausible sets of parameter values where this assumption is false, we can’t use it to assess overall expected value to astronomical precision.
I specifically mentioned acausal trade, a la Rolf Nelson’s AI-deterrence scheme, which needs non-trivial control only in some region of the ensemble of possibilities the AI considers. Indeed, the AI might treat us well simply because of the chance that benevolent non-human aliens will respond positively if its algorithm has this output (as the benevolent aliens might be modeling the AI’s algorithm).
Yes, I forgot about that (though I remain uncertain about how well this argument works, not having worked out a formal model). To summarize the arguments for why the future is still significantly more valuable than what we have now, even if we run into Unfriendly AI:
(1) If there is a non-negligible chance that we’ll have FAI in the future, or that we could’ve created FAI if some morally random facts in the past (such as the coin in counterfactual mugging) were different, then we can estimate the present expected value of the world as pretty high, since a factor for getting whole universes (counterfactually or probably) optimized towards your specific preference is present in the expected utility computation. The counterfactual value is present even if it’s certain that the future contains Unfriendly AI.
(2) It’s even better, because the unfriendly singletons will also optimize their worlds towards your preference a little for game-theoretic reasons, even if they don’t care at all about your preference. This game is not with you personally, a human that controls very little and whose control can’t compel a singleton to any significant extent, but with the counterfactual FAIs. The FAIs that could be created, but weren’t, can act as Omega in counterfactual mugging, making it profitable for the indifferent singletons to pay the FAI a little in FAI-favored kind of world-optimization.
(3) Some singletons that don’t follow your preference in particular, but have remotely human-like preference, will have a component of sympathy in their preference, and will dole out to your preference some fair portion of control in their world, one much greater than the portion of control you held originally. This sympathy seems to be godshatter of game-theoretic considerations that compel even singletons with non-evolved (artificial, random) preferences, according to arguments (1) and (2).
The conclusion to this seems to be that creating an Unfriendly AI is significantly better than ending up with no rational singleton at all (existential disaster that terminates civilization), but significantly worse than a small chance of FAI.
Your comments are mostly good, but I dispute the final assumption that no singleton ⇒ disaster. There has as yet been no investigation into the merits of singleton vs. an economy (or ecosystem) of independent agents.
If we were living in the 18th century, it would be reasonable to suppose that the only stable situation is one where one agent is king. But we are not.
Yep, these are key considerations.
So there’s the utility difference between business-as-usual (no AI), and getting a small share of resources optimized for your preference, and the utility difference between getting small and large shares of resources. If the second difference is much larger than the first, then (1) is crucial, and (2) and (3) are not so good. But if the first difference is much bigger than the second, the pattern is the reverse.
And if we’re comparing expected utility conditioning on no local FAI here and EU conditioning on FAI here, moderate credences can suffice (depending on the shape of your utility function).
Whether FAI is local or not can’t matter; whether something is real or counterfactual is morally irrelevant. If we like small control, it means that the possible worlds with UFAI are significantly valuable, just as the worlds with FAI, provided there are enough worlds with FAI to weakly control the UFAIs; and if we like only large control, it means that the possible worlds with UFAI are not as valuable, and it’s mostly the worlds with FAI that matter.
What do “small control” and “large control” mean?
It’s not literally the reverse, because if you don’t create those FAIs, nobody will, and so the UFAIs won’t have the incentive to give you your small share. It’s never good to increase probability of UFAI at the expense of probability of FAI. I’m not sure whether there is any policy guideline suggested by these considerations, conditional on the pattern in utility you discuss. What should we do differently depending on how much we value small vs. large control? It’s still clearly preferable to have UFAI to having no future AI, and to have FAI to having UFAI, in both cases.
Worrying less about our individual (or national) shares, and being more cooperative with other humans or uploads seems like an important upshot.
I’m not convinced by the claim that human values have high Kolmogorov complexity.
In particular, Eliezer’s article Not for the Sake of Happiness Alone is totally at odds with my own beliefs. In my mind, it’s incoherent to give anything other than subjective experiences ethical consideration. My own preference for real science over imagined science is entirely instrumental and not at all terminal.
Now, maybe Eliezer is confused about what his terminal values are, or maybe I’m confused about what my terminal values are, or maybe our terminal values are incompatible. In any case, it’s not obvious that an AI should care about anything other than the subjective experiences of sentient beings.
Suppose that it’s okay for an AI to exclude everything but subjective experience from ethical consideration. Is there then still reason to expect that human values have high Kolmogorov complexity?
I don’t have a low complexity description to offer, but it seems to me that one can get a lot of mileage out of the principles “if an individual prefers state A to state B whenever he/she/it is in either of state A or state B, then state A is superior for that individual to state B” and “when faced with two alternatives, the moral alternative is the one that you would prefer if you were going to live through the lives of all sentient beings involved.”
Of course “sentient being” is ill-defined and one would have to do a fair amount of work to frame the things that I just said in more formal terms, but anyway, it’s not clear to me that there’s a really serious problem here.
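The second principle above can be given a rough computational form. This is only a sketch under strong assumptions I am adding: that each being's preferences are summarized by a hypothetical numeric utility function, and that "living through all lives" means summing those utilities.

```python
def moral_value(alternative, beings):
    """Total utility if one 'lived through' the life of every sentient being
    involved, under the given alternative."""
    return sum(utility(alternative) for utility in beings)

# Two hypothetical beings with conflicting preferences over alternatives A and B.
# (All numbers are made up for illustration.)
beings = [
    lambda alt: {"A": 3, "B": 1}[alt],  # strongly prefers A
    lambda alt: {"A": 2, "B": 3}[alt],  # mildly prefers B
]

# The "moral" alternative is the one you'd prefer from behind this summed view.
best = max(["A", "B"], key=lambda alt: moral_value(alt, beings))
print(best)  # A  (total utility 5 vs. 4)
```

Under these assumptions the principle collapses to classical total utilitarianism; the first principle (pairwise preference judged from either state) would instead constrain the per-being utility functions rather than the aggregation rule.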
I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.
Have you read the Heaven post by denisbider and the two follow-ups constituting a mini-wireheading series? There have been other posts on the difference between wanting and liking; but it illustrates a fairly strong problem with wireheading: Even if all we’re worried about is “subjective states,” many people won’t want to be put in that subjective state, even knowing they’ll like it. Forcing them into it or changing their value system so they do want it are ethically suboptimal solutions.
So, it seems to me that if anything other than maximized absolute wireheading for everyone is the AI’s goal, it’s gonna start to get complicated.
Thanks for the references to the posts which I had not seen before and which I find relevant. I’m sympathetic toward denisbider’s view, but will read the comments to see if I find diverging views compelling.
Maybe you should start with what’s linked from fake fake utility functions then (the page on the wiki wasn’t organized quite as I expected).
But I would qualify the last sentence of my reply by saying that the best way to get a superhuman AI to be as friendly as possible may not be to work on friendly AI or advocate for friendly AI. For example, it may be best to work toward geopolitical stability to minimize the chances of some country rashly creating a potentially unsafe AI out of a sense of desperation during wartime.
(?) I never said that.
Yes, I was agreeing with what I inferred your attitude to be rather than agreeing with something that you said. (I apologize if I distorted your views—if you’d like I can edit my comment to remove the suggestion that you hold the position that I attributed to you.)
I don’t believe that we “should focus all of our resources” on FAI, as there are many other worthy activities to focus on. The argument is that this particular problem gets disproportionally little attention, and while with other risks we can in principle luck out even if they get no attention, it isn’t so for AI. Failing to take FAI seriously is fatal, failing to take nanotech seriously isn’t necessarily fatal.
Thus, although strictly speaking I agree with your implication, I don’t find its condition plausible, and so I don’t find the implication as a whole relevant.
Re: “Is there then still reason to expect that human values have high Kolmogorov complexity?”
Human values are mostly a product of their genes and their memes. There is an awful lot of information in those. However, it is true that you can fairly closely approximate human values—or those of any other creature—by the directive to make as many grandchildren as possible—which seems reasonably simple.
Most of the arguments for humans having complex values appear to list a whole bunch of proximate goals—as though that constitutes evidence.
I disagree. You need to know much more than just the drive for grandchildren, given the massively diverse ways of propagating that we observe among species even in our present world, all of which would correspond to different articulable values once those species reached human intelligence.
Human values should be expected to have a high K-complexity because you would need to specify both the genes/early environment, and the precise place in history/Everett branches where humans are now.
The idea was to “approximate human values”—not to express them in precise detail: nobody cares much if Jim likes strawberry jam more than he likes raspberry jam.
The environment mostly drops out of the equation—because most of it is shared between the agents involved—and because of the phenomenon of Canalisation: http://en.wikipedia.org/wiki/Canalisation_%28genetics%29
Sure, but I take “approximation” to mean something like getting you within 10 or so bits of the true distribution, but the heuristic you gave still leaves you maybe 500 or so bits away, which is huge, and far more than you implied.
That would help you on message length if you had already stored one person’s values and were looking to store a second person’s. It does not for describing the first person’s value, or some aggregate measure of humans’ values.
10 bits!!! That’s not much of a message!
The idea of a shared environment arises because the proposed machine—in which the human-like values are to be implemented—is to live in the same world as the human. So, one does not need to specify all the details of the environment—since these are shared naturally between the agents in question.
10 bits short of the needed message, not a 10-bit message. I mean that e.g. an approximation gives 100 bits when full accuracy would be 110 bits (and 10 bits is an upper bound).
That still doesn’t answer my point; it just shows how once you have one agent, adding others is easy. It doesn’t show how getting the first, or the “general” agent is easy.
Re: “That still doesn’t answer my point; it just shows how once you have one agent, adding others is easy. It doesn’t show how getting the first, or the “general” agent is easy.”
To specify the environment, choose the universe, galaxy, star, planet, latitude, longitude and time. I am not pretending that information is simple, just that it is already there, if your project is building an intelligent agent.
Re: “10 bits short of the needed message”.
Yes, I got that the first time. I don’t think you are appreciating the difficulty of coding even relatively simple utility functions. A couple of ASCII characters is practically nothing!
ASCII characters aren’t a relevant metric here. Getting within 10 bits of the correct answer means that you’ve narrowed it down to 2^10 = 1024 distinct equiprobable possibilities [1], one of which is correct. Sounds like an approximation to me! (if a bit on the lower end of the accuracy expected out of one)
[1] or probability distribution with the same KL divergence from the true governing distribution
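The bit-counting in this exchange can be made concrete with a toy calculation. The 110-bit and 100-bit figures below are the illustrative numbers from the comment above, not measured values:

```python
import math

# k bits of remaining uncertainty correspond to 2**k equiprobable candidates.
full_description_bits = 110   # illustrative "true" description length
approx_bits = 100             # an approximation that is 10 bits short
remaining_candidates = 2 ** (full_description_bits - approx_bits)
print(remaining_candidates)   # 1024

# By contrast, being 500 bits short leaves around 10**150 candidates.
print(math.log10(2 ** 500))   # ~150.5
```

This is why “within 10 bits” counts as a close approximation while “500 bits away” does not: each missing bit doubles the number of value systems still consistent with the description.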
Or you can implement a constant-K-complexity learn-by-example algorithm and get all the rest from the environment.
How about “Do as your creators do (generalize this as your creators generalize)”?
Not clear to me either that unfriendly AI is the greatest risk, in the sense of having the most probability of terminating the future (though “resource shortage” as existential risk sounds highly implausible—we are talking about extinction risks, not merely potential serious issues; and “world war” doesn’t seem like something particularly relevant for the coming risks, dangerous technology doesn’t need war to be deployed).
But Unfriendly AI seems to be the only unavoidable risk, something we’d need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).
I mean “existential risk” in a broad sense.
Suppose we run out of a source of, oh, say, electricity too fast to find a substitute. Then we would be forced to revert to a preindustrial society. This would be a permanent obstruction to technological progress—we would have no chance of creating a transhuman paradise or populating the galaxy with happy sentient machines and this would be an astronomical waste.
Similarly if we ran out of any number of things (say, one of the materials that’s currently needed to build computers) before finding an adequate substitute.
My understanding is that a large scale nuclear war could seriously damage infrastructure. I could imagine this preventing technological development as well.
On the other hand, it’s equally true that if another existential risk hits us before we develop friendly AI, all of our friendly-AI-directed efforts will be for naught.
That’s not how economics works. If one source of electricity becomes scarce, that means it’s more expensive, so people will switch to cheaper alternatives. All the energy we use ultimately comes from either decaying isotopes (fission, geothermal) or the sun; neither of those will run out in the next thousand years.
Modern computer chips are doped silicon semiconductors. We’re not going to run out of sand any time soon, either. Of course, purification is the hard part, but people have been thinking up clever ways to purify stuff since before they stopped calling it ‘alchemy.’
The energy requirements for running modern civilization aren’t just a scalar number—we need large amounts of highly concentrated energy, and an infrastructure for distributing it cheaply. The normal economics of substitution don’t work for energy.
It’s entirely possible that failing to create a superintelligence before the average EROI drops too low to sustain civilization would leave us unable to create one for long enough to make other existential risks inevitabilities.
“Substitution economics” seems unlikely to stop us eventually substituting fusion power and biodiesel for oil. Meanwhile, we have an abundance of energy in the form of coal—more than enough to drive progress for a looong while yet. The “energy apocalypse”-gets-us-first scenario is just very silly.
Energy economics is interconnected enough with politics to make me lower my expectation of rationality from both of us for the remainder of the discussion due to reference class forecasting. Also, we are several inferential steps away from each other, so any discussion is going to be long and full of details. Regardless, I’m going to go ahead, assuming agreement that market forces cannot necessarily overcome resource shortages (or the Easter Islanders would still be with us).
Historically, the world switched from coal to petroleum before developing any technologies we’d regard as modern. The reason, unlike so much else in economics, is simple: the energy density of coal is 24 MJ/kg; the energy density of gasoline is 44 MJ/kg. Nearly doubling the energy density makes many things practical that wouldn’t otherwise be, like cars, trucks, airplanes, etc. Coal cannot be converted into a higher energy density fuel except at high expense and with large losses, making the expected reserves much smaller. The fuels it can be converted to require significant modifications to engines and fuel storage.
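The density comparison is easy to check; the figures are the ones quoted in the comment:

```python
coal_mj_per_kg = 24.0      # quoted energy density of coal
gasoline_mj_per_kg = 44.0  # quoted energy density of gasoline
print(round(gasoline_mj_per_kg / coal_mj_per_kg, 2))  # 1.83, i.e. "nearly doubling"
```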
Coal is at least plausible, although a stop-gap measure with many drawbacks. It’s your hopes for fusion that really show the wishful thinking. Fusion is 20 years away from being a practical energy source, just like it was in 1960. The NIF has yet to reach break-even; economically practical power generation is far beyond that point; assuming a substantial portion of US energy generation needs is farther still. It’d be nice if Polywell/Bussard fusion proved practical, but that’s barely a speck on the horizon, getting its first big basic research grant from the US Navy. And nothing but Mr. Fusion will help unless someone makes an order of magnitude improvement in battery or ultracapacitor energy density.
No matter which of the alternatives you plan to replace the energy infrastructure with, you needed to start about 20 years ago. World petroleum production is no longer sufficient to sustain economic growth and infrastructure transition simultaneously. Remember, the question isn’t whether it’s theoretically possible to substitute more plentiful energy sources for the ones that are getting more difficult to extract, it’s whether the declining EROI of current energy sources will remain high enough for the additional economic activity of converting infrastructure to other sources while still feeding people, let alone indulging in activities with no immediate payoff like GAI research.
We seem to be living in a world where the EROI is declining faster than willingness to devote painful amounts of the GDP to energy source conversion is increasing. This doesn’t mean an immediate biker zombie outlaw apocalypse, but it does mean a slow, unevenly distributed “catabolic collapse” of decreasing standards of living, security, and stability.
Upvoted chiefly for
but I appreciate the analysis. (I am behind on reading comments, so I will be continuing downthread now.)
I don’t know why you focus so much on fusion although I agree it isn’t practical at this point. But note that batteries and ultracapacitors are just energy storage devices. Even if they become far more energy dense they don’t provide a source of energy.
Unfortunately, that appears to be part of the bias I’d expected in myself—since timtyler mentioned fusion, biofuels, and coal; I was thinking about refuting his arguments instead of laying out the best view of probable futures that I could.
The case for wind, solar, and other renewables failing to take up petroleum’s slack before it’s too late is not as overwhelmingly probable as fusion’s, but it takes the same form—they form roughly 0.3% of current world power generation, and even if the current exponential growth curve is somehow sustainable indefinitely they won’t replace current capacity until the late 21st century.
With the large-scale petroleum supply curve, that leaves a large gap between 2015 and 2060 where we’re somehow continuing to build renewable energy infrastructure with a steadily diminishing total supply of energy. I expect impoverished people to loot energy infrastructure for scrap metal to sell for food faster than other impoverished people can keep building it.
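As a rough sanity check on the timeline claim, here is a toy extrapolation. The 0.3% share comes from the comment above; the sustained 10%-per-year growth rate is an assumed figure for illustration:

```python
import math

def years_to_full_share(current_share, annual_growth):
    # Years for a share growing exponentially at annual_growth to reach 100%.
    return math.log(1.0 / current_share) / math.log(1.0 + annual_growth)

# 0.3% of generation today, growing at an assumed 10% per year:
print(round(years_to_full_share(0.003, 0.10)))  # ~61 years out, i.e. late 21st century
```

Even doubling the assumed growth rate to 20% per year only pulls the crossover in to roughly 32 years out, which is the shape of the gap the comment describes.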
That we will eventually substitute fusion power and biodiesel for oil seems pretty obvious to me. You are saying it represents “wishful thinking”—because of the possibility of civilisation not “making it” at all? If so, be aware that I think the chances of that happening seem to be grossly exaggerated around these parts.
It seems very doubtful that we’ll have practical fusion power any time soon, or necessarily ever. The technical hurdles are immense. Note that any form of fusion plant will almost certainly be using deuterium-tritium fusion. That means you need tritium sources. It also means that the internal structure will undergo constant low-level neutron bombardment, which seriously reduces the lifespan of basic parts such as the electromagnets used. If we look at the form of proposed fusion that has had the most work and has the best chance of success, tokamaks, then we get to a number of other serious problems, such as plasma leaks. Other forms of magnetic containment have also not solved the plasma leak problem. Forms of reactors that don’t use magnetic containment suffer from other, similarly serious problems. For example, the runner-up to magnetic containment is laser confinement, but no one has a good way to actually get energy out of laser confinement.
That said, I think that there are enough other potential sources of energy (nuclear fission, solar (and space based solar especially), wind, and tidal to name a few) that this won’t be an issue.
Um.. not sure what you mean. The energy out of inertial (i.e., laser) confinement is thermal. You implode and heat a ball of D-T, causing fusion, releasing heat energy, which is used to generate steam for a turbine.
Fusion has a bad rap, because the high benefits that would accrue if it were accomplished encourage wishful thinking. But that doesn’t mean it’s all wishful thinking. Lawrence Livermore has seen some encouraging results, for example.
EDIT: for fact checking vis-a-vis LLNL.
Yeah, but a lot of that energy that is released isn’t in happy forms. D-T releases not just high energy photons but also neutrons which are carrying away a lot of the energy. So what you actually need is something that can absorb the neutrons in a safe fashion and convert that to heat. Lithium blankets are a commonly suggested solution since a lot of the time lithium will form tritium after you bombard it with neutrons (so you get more tritium as a result). There’s also the technically simpler solution of just using paraffin. But the conversion of the resulting energy into heat for steam is decidedly non-trivial.
I see, thanks.
Imagine what people must have thought in 1910 about the feasibility of getting to the Moon or generating energy by artificially splitting atoms (especially within the 20th century).
Two problems with that sort of comparison: First, something like going to the Moon is a goal, not a technology. Thus, if we have other sources of power, the incentive to work out the details for fusion becomes small. Second, one shouldn’t forget how many technologies have been tried and have fallen by the wayside as not very practical or not at all practical. A good way of getting a handle on this is to read old issues of something like Scientific American from the 1950s and 1960s. Or read scifi from that time period. One example of a historical technology that never showed up on any substantial scale is nuclear-powered airplanes, despite a lot of research in the 1950s about them. Similarly, nuclear thermal rockets have not been made. This isn’t because they are impossible, but because they are extremely impractical compared to other technologies. It seems likely that fusion power will fall into the same category. See this article about Project Pluto for example.
These are perfectly valid arguments and I admit that I share your skepticism concerning the economic competitiveness of the fusion technology. I admit, if I had a decision to make about buying some security, the payout of which would depend on the amount of energy produced by fusion power within 30 years, I would not hurry to place any bet.
What I lack is your apparent confidence in ruling out the technology based on the technological difficulties we face at this point in time.
I am always surprised how the opinions of so-called experts diverge when it comes to estimating the feasibility and cost of different energy production options (even excluding fusion power). For example, there is a recent TED video where people discuss the pros and cons of nuclear power. The whole discussion boils down to the question: what are the resources we need in order to produce X amount of energy using nuclear, wind, solar, biofuel, or geothermal power? For me, the disturbing thing was that the statements about the resource usage (e.g. area consumption, but also risks) of the different technologies were sometimes off by orders of magnitude.
If we lack the information to produce numbers in the same ballpark even for technologies that we have been using for decades (if not longer), then how much confidence can we have about the viability, costs, risks and competitiveness of a technology, like fusion, that we have not even started to tap?
Ask and ye shall receive: David MacKay, Sustainable energy without the hot air. A free online book that reads like porn for LessWrong regulars.
Yes, I’ve read that (pretty good) book quite a while ago and it is also referenced in the TED talk I mentioned.
This was one of the reasons I was surprised that there is still such a huge disagreement about the figures even among experts.
Re: “Second, one shouldn’t forget how many technologies have been tried and have fallen by the wayside as not very practical or not at all practical. [...] It seems likely that fusion power will fall into the same category.”
Er, not to the governments that have already invested many billions of dollars in fusion research it doesn’t! They have looked into the whole issue of the chances of success.
Automatically self-repairing nanotech construction? (To suggest a point where a straightforward way of dealing with this becomes economically viable.)
You would need not only self-repairing nanotech but such technology that could withstand both large amounts of radiation as well as strong magnetic fields. Of the currently proposed major methods of nanotech I’m not aware of any that has anything resembling a chance to meet those criteria (with the disclaimer that I’m not a chemist.) If we had nanotech that was that robust it would bump up so many different technologies that fusion would look pretty unnecessary. For example the main barrier to space elevators is efficient reliable synthesis of long chains of carbon nanotubes that could be placed in a functional composite (see this NASA Institute for Advanced Concepts Report for a discussion of these and related issues). We’d almost certainly have that technology well before anything like self-repairing nanotech that stayed functional in high radiation environments. And if you have functional space elevators then you get cheap solar power because it becomes very easy to launch solar power satellites.
I’m not talking about plausible now, but plausible some day, as a reply to your “It seem very doubtful … any time soon or necessarily ever”. The sections being repaired could be offline. “Self-repair” doesn’t assume repair within volume of an existing/operating structure, it could be all cleared out and rebuilt anew, for example. That it’s done more or less automatically is the economic requirement. Any other methods of relatively cheap and fast production, assembly and recycling will work too.
Ah ok. That’s a lot more plausible. There’s still the issue that once you have cheap solar the resources it takes to make fusion power will simply cost so much more as to likely not be worth it. But if it could be substantially more efficient than straight fission then maybe it would get used for stuff not directly on Earth if/when we have large installations that aren’t the inner solar system.
Estimating feasibility using exploratory engineering is much simpler than estimating what will actually happen. I’m only arguing that this technology will almost certainly be feasible on human level in not absurdly distant future, not that it’ll ever be actually used.
In that case, there’s no substantial disagreement.
There don’t seem to be too many electromagnets at the NIF: https://lasers.llnl.gov/
It seems to me that the problems are relatively minor, and so that we will have fusion power—with high probability—this century.
[Wow—LW codebase doesn’t know about https!]
I would have thought that those ‘cheaper alternatives’ could still be more expensive than the initial cost of the original source of electricity...? In which case losing that original source of electricity could still bite pretty hard (albeit maybe not to the extent of being an existential risk).
Yes.
A stably benevolent stable world government/singleton could take its time solving AI, or inching up to it with biological and culture intelligence enhancement. From our perspective we should count that as almost a maximal win in terms of existential risks.
I don’t see your point. It would take an unrealistic world dictatorship (whether it’s “benevolent” seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI. And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.
You were talking about hundred-year time scales. That’s time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.
But from our perspective, if we can get the benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don’t need to. So the relative marginal impact of our working on say, FAI theory or institutional arrangements for WBE, need to be weighed against one another.
It’s also time enough for any of the huge number of other outcomes. It’s not outright impossible, but pretty improbable, that the world will go this exact road. And don’t underestimate how crazy people are.
After the change of mind about value of drifted human preference, I agree that WBE/intelligence enhancement is a viable road. Here’re my arguments about the impact of these paths at this point.
WBE is still at least decades away, probably more than a hundred years if you take the planning fallacy into account, and depends on the development of global technological efforts that are not easily influenced. The value of any “institutional arrangements”, and the viability of arguing for them given the remoteness (hence present irrelevance) and implausibility (to most people) of WBE, also seem doubtful. This in my mind makes the marginal value of any present effort related to WBE relatively small. It will go up sharply as WBE tech gets closer.
I suspect that FAI theory, once understood, will still be simple enough (if any general theory is possible), and can be developed by vanilla humans (on an unknown timescale, probably decades to hundreds of years, but at some point WBEs overtake the timescale estimates). By the time WBE becomes viable, the risk situation will already be very explosive, so if we can get a good understanding earlier, we could possibly avoid that risky period entirely. Also, having a viable technical Friendliness programme might give academic recognition to the problem (that these risks are as unavoidable as laws of physics, and not just something to talk with your friends about, like politics or football), which might spread awareness of the AI risks on an otherwise unachievable level, helping with institutional change promoting measures against wild AI and other existential risks. On the other hand, I won’t underestimate human craziness on this point as well—technical recognition of the problem may still live side by side with global indifference.
I believed similarly until I read Steve Omohundro’s The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.
That paper makes a convincing case that the ‘generic’ AI (some distribution of AI motivations weighted by our likelihood of developing them) will most prefer outcomes that rank low in our preference ordering, i.e. the free energy and atoms needed to support life as we know it or would want it will get reallocated to something else. That means that an AI given arbitrary power (e.g. because of a very hard takeoff, or easy bargaining among AIs but not humans, or other reasons) would be lethal. However, the situation seems different and more sensitive to initial conditions when we consider AIs with limited power that must trade off chances of conquest with a risk of failure and retaliation. I’m working on a write up of those issues.
Thanks Craig, I’ll check it out!
“But I believe it’s certain to eventually happen, absent the end of civilization before that.”
And I will live 1000 years, provided I don’t die first.
(As opposed to gradual progress, of course. I could make a case with your analogy facing an unexpected distinction also, as in what happens if you got overrun by a Friendly intelligence explosion, and persons don’t prove to be a valuable pattern, but death doesn’t adequately describe the transition either, as value doesn’t get lost.)
This is a good point, thanks.
# refers to a pattern of incorrect (intuitive) reasoning. This pattern is potentially dangerous specifically because it leads to incorrect beliefs. But if you are saying that there is no significant distortion in beliefs (in particular about the importance of Less Wrong or SIAI’s missions*), doesn’t this imply that the role of this potential bias is therefore unimportant? Either # isn’t important, because it doesn’t significantly distort beliefs, or it does significantly distort beliefs and is therefore important.
* Although I should note that I don’t remember there being a visible position about the importance of Less Wrong.
There’s no single point at which distortion of beliefs becomes sufficiently large to register as “significant”—it’s a gradualist thing.
Probably I’ve unfairly conflated Less Wrong and SIAI. But in this post Kevin says “We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win.” This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong, and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant. I will respond to the post asking Kevin to clarify what he was getting at.
Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids? There are lots of potential existential threats. Unfriendly or rogue AIs are certainly one of them. Nuclear war is another. And I think a lot of people would agree that most humans don’t pay nearly enough attention to existential threats. So one aspect of improving rational thinking should be a net reduction in existential threats of all types, not just those associated with AI. Kevin’s statement thus isn’t intrinsically connected to SIAI at all (although even so, I’d be inclined to argue that Kevin’s statement is possibly a tad hyperbolic).
The parallel is a good one. I would think it sort of crankish if somebody went around trying to get people to engage in amateur astronomy and search for dangerous asteroids on the grounds that any new amateur astronomer may be the one to save us from being killed by a dangerous asteroid. Just because an issue is potentially important doesn’t mean that one should attempt to interest as many people as possible in it. There’s an issue of opportunity cost.
Sure there’s an opportunity cost, but how large is that opportunity cost? What if someone has good data that suggests that the current number of asteroid seekers is orders of magnitude below the optimum?
Two points:
(1) It’s not clear that improving rational thinking matters much. The factors limiting human ability to reduce existential risk seem to me to have more to do with politics, marketing and culture rather than rationality proper. Devoting oneself to refining rationality may come at the cost of increasing one’s ability to engage in politics and marketing and influence culture. I guess what I’m saying is that rationalists should win and consciously aspiring toward rationality may interfere with one’s ability to win.
(2) It’s not clear how much it’s possible to improve rational thinking. It may be that beyond a certain point, attempts to improve rational thinking are self-defeating (e.g. combating one bias may cause another bias).
Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense. In the United States alone, people spend around three billion dollars a year on homeopathy (source). If that went away, and only 5% of that ended up getting spent on things that actually increase general utility, that means around $150 million is now going into useful things. And that’s only a tiny example. The US spends about 30 to 40 billion dollars a year on alternative medicine, much of which is also a complete waste. We’re not talking here about a Hansonian approach where much medicine is only of marginal use or only helps the very sick who are going to die soon. We’re talking about “medicine” that does zero. And many of the people taking those alternatives will take them instead of taking medicine that will improve their lives. Improving the general population’s rationality will be a net win for everyone. And if some tiny set of those freed resources goes to dealing with existential risk? Even better.
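The arithmetic here is simple enough to spell out, using the comment’s own figures:

```python
annual_homeopathy_spend = 3_000_000_000  # ~$3 billion/year US homeopathy spending (comment's figure)
useful_percent = 5                       # suppose only 5% gets redirected to useful things
print(annual_homeopathy_spend * useful_percent // 100)  # 150000000, i.e. ~$150 million
```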
Okay, but now the rationality that you’re talking about is “ordinary rationality” rather than “extreme rationality” and the general public rather than the Less Wrong community. What is Less Wrong community doing to spread ordinary rationality within the general public?
Are you sure that the placebo effects are never sufficiently useful to warrant the cost?
A lot of the aspects of “extreme rationality” are aspects of rationality in general (understanding the scientific method and the nature of evidence, trying to make experiments to test things, being aware of serious cognitive biases, etc.) Also, I suspect (and this may not be accurate) that a lot of the ideas of extreme rationality are ones which LWers will simply spread in casual conversation, not necessarily out of any deliberate attempt to spread them, but because they are really neat. For example, the representativeness heuristic is an amazing form of cognitive bias. Similarly, the 2-4-6 game is independently fun to play with people and helps them learn better.
I was careful to say much, not all. Placebos can help. And some of it involves treatments that will eventually turn out to be helpful once they get studied. But there are entire subindustries that aren’t just useless but downright harmful (chelation therapy for autism would be an example). Large parts of the alternative medicine world involve claims that are emotionally damaging to patients (such as claims that cancer is a result of negative beliefs). And when one isn’t talking about something like homeopathy, which is just water, but rather about remedies that involve chemically active substances, the chance that actual complications will occur from them grows.
Deliberately giving placebos is of questionable ethical value, but if we think it is ok we can do it with cheap sugar pills delivered at a pharmacy. Cheaper, safer and better controlled. And people won’t be getting the sugar pills as an alternative to treatment when treatment is possible.
Anything we seek to do is a function of our capabilities and how important the activity is. Less Wrong is aimed mainly at increasing the capabilities of those who are interested in improving their rationality, and Eliezer has mentioned in one of the sequences that there are many other aspects of the art that have to be developed. Epistemic rationality is one, luminosity as mentioned by Alicorn is another, and so on.
Who knows? In the future, we may get many rational offshoots of Less Wrong: lessshy, lessprocrastinating, etc.
Now, getting back to my statement: a function of capabilities and importance.
Importance—existential risk is the most important problem that is not getting sufficient attention. Capability—SingInst is a group of powerless, poor, and introverted geeks who are doing the best that they think they can do to reduce existential risk. This may include things that improve their personal ability to affect the future positively; it may include charisma and marketing, also. For all the time that they have thought on the issue, the SingInst people consider raising the sanity waterline really important to the cause. Unless and until you have specific data that that avenue is not the best use of their time, it is a worthwhile cause to pursue.
Before reading the paragraph below, please answer this simple question: what is your marginal time unit, after accounting for necessary leisure, being used for?
If your capability is great, then you can contribute much more than SIAI can. All you need to see is whether, on the margin, your contribution is making a greater difference to the activity or not. Even SingInst cannot absorb too much money without losing focus. You, as a smart person, know that. So stop contributing to SingInst when you think your marginal dollar gets better value when spent elsewhere.
The question is not whether you believe that SingInst is the best cause ever. Honestly assess and calculate where your marginal dollar can get better value. Are you better off being the millionth voice in the climate change debate or the hundredth voice in the existential risk discussion?
EDIT: Edited the capability paragraph for clarity.
One other factor which influences how much goes into reducing existential risk is the general wealth level. Long-term existential risk gets taken care of after people take care of shorter-term risks, have what they consider to be enough fun, and spend quite a bit on maintaining status.
More to spare means being likely to have a longer time horizon.
On the level of society, there seems to be tons of low-hanging fruit.
What are some examples of this low-hanging fruit that you have in mind?
Fact-checking in political discussions (e.g., Senate politics), parenting and teaching methods, keeping a clean desk or being happy at work (see here), getting effective medical treatments rather than unproven ones (sometimes this might require confronting your doctor), and maintaining budgets seem like decent examples (in no particular order; these sit at various heights, but all well within the reach of the general public).
Not sure if Vladimir would have the same types of things in mind.
FactCheck.org, which keeps track of the accuracy of statements made by politicians and about politics, strikes me as a big recent improvement.
Just added that to my RSS feed.
But to avoid turning this into a fallacy of gray, you still need to take notice of the extent of the effect. Neither working on a bias nor ignoring it is a “default”—it necessarily depends on the perceived level of significance.
I think I agree with you. My suggestion is that Less Wrong and SIAI are, at the margin, not paying enough attention to the bias (*).
I’d like to share introductory-level posts as widely as possible. There are only three with this tag. Can people nominate more of these posts, perhaps messaging the authors to encourage them to tag their posts “introduction”?
We should link to, stumble on, etc. accessible posts as much as possible. The sequences are great, but intimidating for many people.
Added: Are there more refined tags we’d like to use to indicate who the articles are appropriate for?
There are a few scattered posts in Eliezer’s sequences which do not, I believe, have strong dependencies (I steal several from the About page, others from Kaj_Sotala’s first and second lists). I separate out the ones which seem like good introductory posts specifically, with a separate list of others I considered but do not think are specifically introductory.
Introductions:
Absence of Evidence is Evidence of Absence
Feeling Rational
Taboo Your Words
Back Up and Ask Whether, Not Why
Expecting Short Inferential Distances
Knowing About Biases Can Hurt People
Burdensome Details
The Proper Use of Humility
Mysterious Answers to Mysterious Questions
The Bottom Line
Not introductions, but accessible and cool:
Outside the Laboratory
Beyond the Reach of God
Conjunction Controversy
As usual, I’ll have to recommend Truly Part of You as an excellent introductory post, given the very little background required, and the high insight per unit length.
Thanks for this list.
Ladies and gentlemen, the human brain: acetaminophen reduces the pain of social rejection.
Wikipedia says the term “synthetic intelligence” is a synonym for AGI. I’d like to propose a different use: as a name for the superclass encompassing things like prediction markets. This usage occurred to me while considering 4chan as a weakly superintelligent optimization process with a single goal, something along the lines of “producing novelty,” which it certainly pursues with a paperclippy single-mindedness we wouldn’t expect of a human.
It may be that there’s little to be gained by considering prediction markets and chans as part of the same category, or that I’m unable to find the prior art in this area because I’m using the wrong search terms—but the category does seem somewhat larger and more practical than gestalt intelligence.
That is usually called “collective intelligence”:
http://en.wikipedia.org/wiki/Collective_intelligence
Calling it “synthetic Intelligence” would be bad, IMO.
It appears the “wrong search terms” hypothesis was the correct one. Curses.
Thanks for correcting me.
Could you expand on what would be included and excluded from Synthetic Intelligence?
Would a free market count?
Good question. I didn’t mean to take ownership of the term, but I’d consider the “invisible hand” part to be the synthetic intelligence; and the rest of the market’s activities to be other synthetic appendages and organs.
I’ve noticed a surprising conclusion about the moral value of three outcomes: (1) an existential disaster that terminates civilization, leaving no rational singleton behind (“Doom”); (2) Unfriendly AI (“UFAI”); and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing the probability of FAI (no surprise here), all else equal, UFAI is much preferable to Doom. That is, if you have the option of trading Doom for UFAI while forsaking only a negligible probability of FAI, you should take it.
The main argument (known as Rolf Nelson’s AI deterrence) can be modeled by counterfactual mugging: a UFAI will give up a (small) portion of the control over its world to the FAI’s preference (pay the $100), if there is a (correspondingly small) probability that the FAI could have been created had the circumstances played out differently (which corresponds to the coin landing differently in counterfactual mugging), in exchange for the FAI (counterfactually) giving up a portion of control to the UFAI (the reward from Omega).
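For readers unfamiliar with the thought experiment being borrowed: in counterfactual mugging, Omega flips a fair coin, demands $100 from you on tails, and on heads rewards you with $10,000 only if you would have paid on tails. The payoff structure (using the standard illustrative stakes, which are not from this comment) is simple to sketch:

```python
# Hedged sketch of counterfactual mugging's payoff structure.
# An agent chooses its policy before knowing the coin flip.
P_HEADS = 0.5
REWARD = 10_000   # granted on heads, but only to an agent that would pay on tails
COST = 100        # demanded on tails

def expected_value(pays_when_asked):
    heads_payoff = REWARD if pays_when_asked else 0
    tails_payoff = -COST if pays_when_asked else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(True))   # 4950.0 -- the committed payer wins on average
print(expected_value(False))  # 0.0
```

The analogy in the comment replaces the coin with which AI happened to get built, and the $100 with a small slice of control over the world.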
As a result, having a UFAI in the world is better than having no AI (at any point in the future), because this UFAI can work as a counterfactual trading partner to an FAI that could have existed under other circumstances, which would make the FAI stronger (improve the value of the possible worlds). Of course, the negative effect of decreasing the probability of FAI is much stronger than the positive effect of increasing the probability of UFAI to the same extent, which means that if the choice is purely between UFAI and FAI, the balance is conclusively in FAI’s favor. That there are FAIs in the possible worlds also shows that the Doom outcome is not completely devoid of moral value.
More arguments and a related discussion here.
It can mostly be ignored, but uFAI affects physically-nearby aliens who might have developed a half-Friendly AI otherwise. (But if they could have, then they have counterfactual leverage in trading with your uFAI.) No reason to suspect that those aliens had a much better shot than we did at creating FAI, though. Creating uFAI might also benefit the aliens for other reasons… that I won’t go into, so instead I will just say that it is easy to miss important factors when thinking about these things. Anyway, if the nanobots are swarming the Earth, then launching uFAI does indeed seem very reasonable for many reasons.
Fascinating! Do you still agree with what you wrote there? Are you still researching these issues, and do you plan on writing a progress report or an open-problems post? Would you be willing to write a survey paper on decision-theoretic issues related to acausal trade?
My best guess about what’s preferable to what is still this way, but I’m significantly less certain of its truth (there are analogies that make the answer come out differently, and level of rigor in the above comment is not much better than these analogies). In any case, I don’t see how we can actually use these considerations. (I’m working in a direction that should ideally make questions like this more clear in the future.)
If you know how to build a uFAI (or “probably somewhat reflective on its goal system but nowhere near provably Friendly” AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for planning fallacy, then be ready to start coding if doom is predictably going to occur. If doom isn’t predictable then the safety tradeoffs are larger. This can easily go wrong, obviously.
I have an idea that I would like to float. It’s a rough metaphor that I’m applying from my mathematical background.
Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think this is a silly distinction, but there are a few reasons why it may not be.
First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther we go out. Each ancient city state might have accurate maps of the surrounding farms for tax purposes but wildly guess what lies beyond a mountain range or desert. A map might also accurately describe the territory at one level of distance but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.
Or take another example. Suppose you have a virtual reality machine, one with a portable helmet with a screen and speakers, in a large warehouse, so that you can walk around this giant floor as if you were walking around this virtual world. Now, suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world.
Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate them. Instead, it should cause us to see where our maps are reliable and where they are not, where they overlap or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are still good maps. Moreover, it might be more useful to add an entirely new map to our atlas instead of trying to improve the resolution of one we already have, or moving its lines ever so slightly as we bring it asymptotically closer to truth.
My lesson for the rationality dojo would thus be: be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps, and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.
As you may have noticed, this idea comes from differential geometry, where you use a collection (“atlas”) of overlapping charts/local homeomorphisms to R^n (“maps”) as a suitable structure for discussing manifolds.
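For the curious, the standard definition being borrowed is:

```latex
% An atlas on a manifold M: a family of charts covering M,
% with smooth transition maps wherever charts overlap.
\[
  \mathcal{A} = \{(U_\alpha, \varphi_\alpha)\}_\alpha, \qquad
  \varphi_\alpha : U_\alpha \to \mathbb{R}^n \text{ a homeomorphism}, \qquad
  \bigcup_\alpha U_\alpha = M,
\]
\[
  \varphi_\beta \circ \varphi_\alpha^{-1} :
  \varphi_\alpha(U_\alpha \cap U_\beta) \to \varphi_\beta(U_\alpha \cap U_\beta)
  \quad \text{smooth for all overlapping charts.}
\]
```

No single chart need cover the whole manifold, and two charts may disagree wildly outside their overlap; the compatibility condition only binds them where they both apply, which is exactly the metaphor above.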
I tend to agree that we frequently would do better to make do with an atlas of charts rather than seeking the One True Map. But I’m not sure I like the differential geometry metaphor. It is not the location on the globe which makes the use of one chart more fruitful than another. It is the question of scale, or as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.
For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and find non-deterministic fluctuations. Out a bit more and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).
But I would go farther than this. I would also claim that we shouldn’t imagine that these maps (as you zoom in) necessarily become better and better maps of the One True Territory. We should remain open to the idea that “It’s maps (or models, or turtles) all the way down”.
What’s an example of people doing this?
I think one place to look for this phenomenon is when in a debate, you seize upon someone’s hidden assumptions. When this happens, it usually feels like a triumph, that you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.
But hidden assumptions aren’t bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It’s a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others’ assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.
This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side’s assumptions to see how they fit.
Mostly agree. It’s really irritating and unproductive (and for me, all too frequent) when someone thinks they’ve got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.
Yes, people need to watch for the hidden assumptions they make, but they shouldn’t point out the assumptions others make unless they can say why an assumption is unreasonable and how its weakening would hurt the argument it’s being used for. “You’re assuming X!” is not, by itself, a relevant counterargument.
You might be interested in How to Lie with Maps.
The Real Science Gap
ETA: Here’s a money quote from near the end of the article:
(Ouch)
I’m not sure I see what the problem is. Capitalism works? It makes it seem like this system is unsustainable or bound to collapse, but I’m not sure I see how two and two fit together. I am particularly confused with this quote:
First of all, how is it a Ponzi scheme that is bound to collapse? Also, limiting the number of scientists is not going to make the system better, except that maybe individuals will have less competition and thus more opportunities, which is not a benefit to the whole system, just to the individual.
EDIT: Fixed spelling.
I’m not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.
The inertia of the conventional wisdom (“you’ve gotta go to college!”) is further making the new generation slow to adapt to this reality, and it is another example of Goodhart’s Law.
On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances—this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.
My take? The system of go-to-college/get-a-job needs to collapse and be replaced, for the most part, by apprenticeships (or “internships” as we fine gentry call them) at a younger age, which will give people significantly more financial security and enhance the economy’s productivity. But this will be bad news for academics.
And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.
So the slack will have to be picked up by people “outside the system”. Yes, they’ll be starved for funds and rely on rich people and donations to non-profits, but they’ll mostly make up for it by their ability to get much more insight out of much less data: knowing what data-mining techniques to use, spotting parallels across different fields, avoiding the biases that infect academia, and generally automating the kind of inference currently believed to require a human expert to perform.
In short: this, too, shall pass—the only question is how long we’ll have to suffer until the transition is complete.
Sorry, [/rant].
I agree that college as an institution of learning is a waste for most folks—they will “never use this,” most disregard the parts of a liberal arts education that they’re force-fed, and neither they nor their jobs benefit. Maybe students gain something from networking with each other. But yes, Goodhart’s Law applies. Employers appear to use a diploma as an indicator of diligence and intelligence. So long as that’s true, students will fritter away four years of their lives and put themselves deep in debt to get a magic sheet of paper.
It’s been broken forever, in basically the same way it is now. Most working scientists are trying to prove their idea, because negative results don’t carry nearly as much prestige as positive results, and the practice of science is mostly about prestige. I’m sure I could find citations for peer review being “pal review” throughout its lifetime. (Ooh—I’ll try this in a moment.)
To the extent that science has ever worked, it’s because the social process of science has worked—scientists are just open-minded enough to, as a whole, let strong evidence change their collective minds. I’m not really convinced that the social process of science has changed significantly over the last decades, and I can imagine these assertions being rooted in generalized nostalgia. Do you have reasons to assert this?
(Are you just blowing off steam about this? I can totally support that, because argh argh argh the publication treadmill in my field headdesk headdesk expletives. But if you have evidence, I’d love to hear it.)
I mainly have evidence for the absolute level, not necessarily for the trend (of science getting worse). For the trend, I could point to Goodhart phenomena like the publications-per-unit-time metric being gamed, and getting worse as time progresses.
I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased: if the quality of science in general has not increased with more people, it’s getting worse per person.
For the absolute level, I’ve noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I’m too sleepy to go into detail right now, but briefly:
There’s no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be passed to mathematicians who can tell if it’s solvable.
There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors. That method has been around since the 1960s and forms the basis of Google’s PageRank, allowing it to identify crucial sites. Ecologists didn’t apply it to the problem of identifying critical ecosystem species until a few years ago.
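The method being referred to is eigenvector centrality: the principal eigenvector of an adjacency matrix scores each node by how connected it is to other well-connected nodes, which is the core of PageRank. A minimal sketch (toy graph and plain power iteration, not from any actual ecology paper):

```python
# Score nodes of a graph by the principal eigenvector of its adjacency matrix.
# Power iteration: repeatedly multiply by the matrix and renormalize.
def principal_eigenvector(adj, iterations=100):
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(w) or 1.0
        v = [x / norm for x in w]
    return v

# A toy 4-node web where node 0 is linked to everything else.
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
scores = principal_eigenvector(adj)
print(max(range(4), key=lambda i: scores[i]))  # 0 -- the most central node
```

The same handful of lines works whether the nodes are web pages or species in a food web, which is the point: the formalism transfers even when the subfields don’t talk to each other.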
I’ve gone into grad school myself and found that existing explanations of concepts are a scattered mess: it’s almost as if they don’t want you to understand papers or break into the advanced topics that are the subject of research. Whenever I do understand such a topic, I find myself able to explain it in much less time than the experts in the field took to explain it to me. This creates a fog over research, allowing big mistakes to last for years with no one ever noticing, because too few eyeballs are on it. (This explanation barrier is the topic of my ever-upcoming article “Explain yourself!”)
As an example of what a mess it is (and at risk of provoking emotions that aren’t relevant to my point), consider climate science. This is an issue where they have to convince LOTS of people, most of whom aren’t as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.
Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, “Um, anyone who’s got the links to the public data, it’d be nice if you could post them here...”
To clarify, I’m NOT trying to raise the issue of AGW being a scam, etc. I’m saying that no matter how good the science is, here we have a case where it’s of the utmost importance to explain research to the masses, and so it should have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.
Er, I’d just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s “science”), and thereby seriously influencing it. Further, that’s not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.
All of the points you list are real issues; I watch them myself, to constant frustration. I think they have common cause in the incentive structure of science. The following account has been hinted at many times over around Less Wrong, but spelling it out may make it clear how your points follow:
Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists’ individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world’s useful knowledge, or satisfy curiosity[*].
This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis—a least publishable unit is very nearly the author’s research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.
Since these explanations are scattered and confusing, it’s brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren’t likely to catch it, because it’s hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.
All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.
[*] This demands substantiation, but I have no studies to point to. It’s common knowledge, perhaps, and it’s true in the research environments I’ve found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?
No, these are recent developments (though the stuff from your first post may be old). For the first 300 years, scientists were amateurs without grants, and no one cared about quantity. For evidence of recent changes, look at the age of NIH PIs.
-- Bruno Latour, Portrait of a Biologist as Wild Capitalist
(ETA: see also.)
I think you’ve got an example of generalizing from one example, and perhaps the habit of thinking of oneself as typical—you’re unusually good at finding clear explanations, and you think that other people could be about as good if they’d just try a little.
I suspect they’d have to try a lot.
As far as I can tell, most people find it very hard to imagine what it’s like to not understand knowledge they’ve assimilated, which is another example of the same mistake.
Well, I appreciate the compliment, but keep in mind you haven’t personally put me to the test on my claim to have that skill at explaining.
But I don’t understand why this would be hard—people make quite a big deal about how “I was a little boy/girl like you too one time.” Certainly a physics professor would generally remember what it was like to take their first physics class: what confused them, what way of thinking made it clearer, etc.
(I remember one of my professors, later my grad school advisor (bless his heart), was a master at explaining and achieving Level 2 understanding on topics. He was always able to connect it back to related topics, and if students had trouble understanding something, he was always able to identify what the knowledge deficit was and jump in with an explanation of the background info needed.)
To the extent that your assessment is accurate, this problem people have can still be corrected by relatively simple changes in practice. For example, instead of just learning the next class up and moving on, people could make a habit of checking for how it connects to the previous class’s knowledge, to related topics, to introductory class knowledge, and to layperson knowledge. It wouldn’t help current people, as you have to make it an ongoing effort, but it doesn’t sound like it’s hard.
Also, is it really that hard for people to ask themselves, “Assume I know nothing. What would I have to be told to be able to do this?”
I remember that it was all pretty straightforward and intuitive. This was not a typical experience, and it also means that I don’t really know what average students have trouble with in basic Newtonian physics. Physics professors tend to be people who were unusually good at introductory physics classes. (Meanwhile, I can’t seem to find an explanation of standard social skills that doesn’t assume a lot of intuitions that I find non-obvious. Fucking small talk, how does it work?!)
Most professors weren’t typical students, so why would their recollections be a good guide to what problems typical students have when learning a subject for the first time?
I remember intro physics being straightforward and intuitive, and I had no trouble explaining it to others. In fact, the first day we had a substitute teacher who just told us to read the first chapter, which was just the basics like scientific notation, algebraic manipulation, unit conversion, etc. I ended up just teaching the others when something didn’t make sense.
If there was any pattern to it, it was that I was always able to “drop back a level” to any grounding concept. “Wait, do you understand why dividing a variable by itself cancels it out?” “Do you understand what multiplying by a power of 10 does?”
That is, I could trace back to the beginning of what they found confusing. I don’t think I was special in having this ability—it’s just something people don’t bother to do, or don’t themselves possess the understanding to do, whether it’s teaching physics or social skills (for which I have the same complaint as you).
Someone who really understands sociality (i.e., level 2, as mentioned above) can fall back to the questions of why people engage in small talk, and what kind of mentality you should have when doing so. But most people either don’t bother to do this, or have only an automatic (level 1) understanding.
Do you ever have trouble explaining physics to others? Do you find any commonality to the barriers you encounter?
In mathy fields, how much of it is caused by insufficiently deep understanding and how much of it is caused by taboos against explicitly discussing intuitive ways of thinking that can’t be defended as hard results? The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you’re learning, and to build intuitions you have to spend a lot of time doing exercises. But I’ve always thought such effort could be partly avoided if instead of playing dignified Zen master, textbooks were full of low-status sentences like “a prisoner’s dilemma means two parties both have the opportunity to help the other at a cost that’s smaller than the benefit, so it’s basically the same thing as trade, where both parties give each other stuff that they value less than the other, so you should imagine trade as people lobbing balls of stuff at each other that grow in the lobbing, and if you zoom out it’s like little fountains of stuff coming from nowhere”. (ETA: I mean in addition to the math itself, of course.) It’s possible that I’m overrating how much such intuitions can be shared between people, maybe because of learning-style issues.
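To make the trade framing concrete, here's a minimal sketch in Python. The payoff numbers are made up; any benefit greater than the cost gives the same structure:

```python
# Prisoner's dilemma as trade: cooperating costs you COST but hands the
# other player BENEFIT. (Hypothetical numbers; any BENEFIT > COST works.)
COST, BENEFIT = 1, 3

def payoff(i_cooperate, you_cooperate):
    # My payoff: I gain if you cooperate, I pay if I cooperate.
    return (BENEFIT if you_cooperate else 0) - (COST if i_cooperate else 0)

# Mutual cooperation beats mutual defection for both players...
assert payoff(True, True) > payoff(False, False)
# ...yet whatever the other player does, defecting is individually better,
# which is exactly what makes it a dilemma rather than plain trade.
assert payoff(False, True) > payoff(True, True)
assert payoff(False, False) > payoff(True, False)
```

The "balls of stuff that grow in the lobbing" are just the fact that BENEFIT exceeds COST, so each round of mutual cooperation creates net value.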
I think you’ve got something really important here. If you want to get someone to an intuitive understanding of something, then why not go with explanations that are closer to that intuitive understanding? I usually understand such explanations a lot better than more dignified explanations, and I’ve seen that a lot of other people are the same way.
I remember when a classmate of mine was having trouble understanding mutexes, semaphores, monitors, and a few other low-level concurrency primitives. He had been to the lectures, read the textbook, looked it up online, and was still baffled. I described to him a restroom where people use a pot full of magic rocks to decide who can use the toilets, so they don’t accidentally pee on each other. The various concurrency primitives were all explained as funny rituals for getting the magic toilet permission rocks. E.g. in one scheme people waiting for a rock stand in line; in another scheme they stand in a throng with their eyes closed, periodically flinging themselves at the pot of rocks to see if any are free. Upon hearing this, my friend’s confusion was dispelled. (For my part, I didn’t understand this stuff until I had translated it into vague images not too far removed from the stupid bathroom story I told my friend. The textbook explanations are just bad sometimes.)
Or for another example, I had terrible trouble with basic probability theory until I learned to imagine sets of things that could happen, and visualize them as these hazy blob things. Once that happened, it was as if my eyes had finally opened, and everything became clear. I was kind of pissed off that all the classes I’d been in that tried to teach probability focused exclusively on the equations, so I’d had to figure out the intuitive stuff without any help.
As a side-note, this is one reason why I’m optimistic about online education like Salman Khan’s videos. It’s not that they’re inherently better, obviously, but they have the potential for much more competition. I can imagine students in The Future comparing lecturers, with the underlying assumption that you can trivially switch at any time. “Oh, you’re trying to learn about the ancient Roman sumptuary laws from Danrich Parrol’s lectures? Those are pretty mind-numbing; try Nile Etland’s explanations instead. She presents the different points of view by arguing vehemently with herself in several funny accents. It’s surprisingly clear, even if she does sound like a total nutcase.”
[Side-note to the side-note: I think more things should be explained as arguments. And the natural way to do this is for one person to hold a crazy multiple-personality argument-monologue. This also works for explaining digital hardware design as a bunch of components having a conversation. “You there! I have sent you a 32-bit integer! Tell me when you’re done with it!” Works like a charm.]
Man, the future of education will be silly. And more educational!
It wouldn’t surprise me if a big part of the problem now is the assumption that there’s virtue to enduring boredom, and a proof of status if you impose it.
If by boredom you mean dominance and inequality, then Robin Hanson has been riffing on this theme lately. The main idea is that employers need employees who will just accept what they're told to do instead of rebelling and trying to form a new tribe in a nearby section of savannah. School trains some of the rebelliousness out of students. See e.g., this, this, and this.
No, by boredom I mean lack of appropriate levels of stimulus, and possibly lack of significant work.
Dominance and inequality can play out in a number of ways, including chaos (imagine a badly run business with employees who would like things to be more coherent), physical abuse, and deprivation. Imposed boredom is only one possibility.
Causing people to have, or feel they have, no alternatives is how abusive authorities get away with it.
That sounds like such fun!
It’s every bit as fun as you imagine. And it works great.
Heh, this reminds me of this discussion of Plain Talk on a wiki I participated in years ago. I must have drawn those little characters, what, ten years ago? Not quite (more like six or seven), but it feels like ages ago.
I agree with this. It is also true that people’s intuitions differ, and people respond differently to different kinds of informal explanation. steven0461′s explanation of Prisoner’s Dilemma would be good for someone accustomed to thinking visually, for example. For this reason, your vision of individual explanations competing (or cooperating) is important.
One of the things I’ve always disliked about mathematical culture is this taboo against making allowances for human weakness on the part of students (of any age). For example, the reluctance to use “plain English” aids to intuition, or pictures, or motivations. Sometimes I almost think this is a signaling issue, where mathematicians want to display that they don’t need such crutches. But it seems to get in the way of effective communication.
You can go too far in the other direction—I’ve found that it can also be hard to learn when there’s too little rigorous formalism. (Most recently I’ve had that experience with electrical engineering and philosophy.) There ought to be a happy medium somewhere.
This isn’t really a signaling issue so much as a response to the fact that mathematicians have had centuries of experience in which apparent theorems turned out to be unproven, or not even true, and the failures were due to too much reliance on intuition. A classic example is the roughly decade-long period in the 19th century when people thought the Four Color Theorem had been proven. A lot of these issues also arose in calculus before it was put on a rigorous footing in the 1850s.
There may be a signaling aspect but it is likely a small one. I’d expect more likely that mathematicians err on the side of rigor.
ETA: Another data point suggesting this isn’t about signaling: I’ve been to a fair number of talks in which people in the audience got annoyed because they thought there was too much formalism hiding some basic idea, in which case they’d ask questions of the form “what’s the idea behind the proof?” or “what’s the moral of this result?”
Just to be clear: I’m not against rigor. Rigor is there for a good reason.
But I do think that there’s a bias in math against making it easy to learn. It’s weird.
Math departments, anecdotally in nearly all the colleges I’ve heard of, are terrible at administrative conveniences. Math will be the only department that doesn’t put lecture notes online, won’t announce the correct textbook for the course, won’t produce a syllabus, won’t announce the date of the final exam. Physics, computer science, etc., don’t do this to their students. This has nothing to do with rigor; I think it springs from the assumption that such details are trivial.
I’ve noticed a sort of aesthetic bias (at least in pure math) against pictures and “selling points.” I recall one talk where the speaker emphasized how transformative his result could be for physics—it was a very dramatic lecture. The gossip afterwards was all about how arrogant and salesman-like the speaker was. That cultural instinct—to disdain flash and drama—probably helps with rigorous habits of thought, but it ruins our chances to draw in young people and laymen. And I think it can even interfere with comprehension (people can easily miss what is understated).
Over 99% of students learning math aren’t going to be expected to contribute to cutting-edge proofs, so I don’t regard this as a good reason not to use “plain English” methods.
In any case, a plain English understanding can allow you to bootstrap to a rigorous understanding, so more hardcore mathematicians should be able to overcome any problem introduced this way.
I agree that this is likely often suboptimal when teaching math. The argument I was presenting was that this approach was not due to signaling. I’m not arguing that this is at all optimal.
I don’t think this problem is limited to math: it’s present in all cutting-edge or graduate school levels of technical subjects. Basically, if you make your work easily accessible to a lay audience[1], it’s regarded as lower status or less significant. (“Hey, if it sounds so simple, it must not have been very hard to get!”)
And ironically enough, this thread sprang from me complaining about exactly that (see esp. the third bullet point).
[1] And contrary to what turf-defenders like to claim, this isn’t that hard. Worst case, you can just add a brief pointer to an introduction to the topic and terminology. To borrow from some open source guy, “Given enough artificial barriers to understanding, all bugs are deep.”
I thought that writing was for that and lectures were supposed to be informal, the kind of thing you were asking for. And, I thought everyone agreed that lectures work much better.
I think you’re right, but only to a limited (varying) degree. I also think it’s not just a matter of being informal, but a matter of just stating explicitly a lot of insights that you’re “supposed” to get only through hard mental labor.
I don’t have an answer, but I can attest to not mimicking a textbook when I try to explain high school math to someone. Rather, I first find out where the gap is between their understanding and where I want them to be.
Of course, textbooks don’t have the luxury of probing each student’s mind.
This demonstrates a highly developed theory of mind. In order to do this one needs to both have a good command of material and a good understanding of what people are likely to understand or not understand. This is often very difficult.
I thought I should add a pointer to one of the replies, because it’s another anecdote from when the poster noticed the difference (in what “understand” means) on an encounter with another person who had a lower threshold.
Maybe there is a wide variance in “understanding criteria” or “curiosity shut-off point” which has real importance for how people learn.
Maybe so, but then this would be the only area where I have a highly-developed theory of mind. If you ask the people who have seen me post for a while, the consensus is that this is where I’m most lacking. They don’t typically put it in terms of a theory of mind, but one complaint about me can be expressed as, “he doesn’t adequately anticipate how others will react to what he does”—which amounts to saying I lack a good theory of mind (a common characteristic of autistics).
But that gives me an idea: maybe what’s unique about me is what I count as a genuine understanding. I don’t regard myself as understanding the material until I have “plugged it in” to the rest of my knowledge, so I’ve made a habit of ensuring that what I know in one area is well-connected to other areas, especially its grounding concepts. I can’t, in other words, compartmentalize subjects as easily.
(That would also explain what I hated about literature and, to a lesser extent, history—I didn’t see what they were building off of.)
Yes, I had that thought also but wasn’t sure how to put it. Frankly, I’m a bit surprised that you had that good a theory of mind for physics issues. Your hypothesis about plugging in seems plausible.
Also, it looks like EY already wrote an article about the phenomenon I described: when people learn something in school, they normally don’t bother to ground it like I’ve described, and so don’t know what a true (i.e., level 2) understanding looks like.
(Sorry to keep replying to this comment!)
Don’t let that stop you from writing about related topics.
For me, a small but significant hack suggested by Anna Salamon was to try to act (and later, to actually be) cheerful and engaged instead of wittily laconic and ‘intelligent’. That said, it’s rare that I remember to even try. Picking up habits is difficult.
I wish I could vote this comment up a hundred times. This insane push toward college without much thought about the quality of the education is extremely harmful. People are more focused on slips of paper that signal status versus the actual ability to do things. Not only that, but people are spending tens of thousands of dollars for degrees that are, let’s be honest, mostly worthless. Liberal arts and humanities majors are told that their skill set lies in the ability to “think critically”; this is a necessary but not sufficient skill for success in the modern world. (Aside from the fact that their ability to actually “think critically” is dubious in the first place.) In reality, the entire point is networking, but there has to be a more efficient way of doing this that isn’t crippling an entire generation with personal debt.
I would settle for just 10 times if it were in the form of a post. ;)
Evidently the ability to think critically is instilled after the propaganda is spread.
Wow, now that is what I would call fraud. It’s something the students should be able to detect right off the bat, given the lack of liberal arts success stories they can point to. It’s like they just think, “I like history, so I’ll study that”, with no consideration of how they’ll earn a living in four years (or seven). That can’t last.
And I wish I could vote that up a hundred times. I wouldn’t mind as much if colleges were more open about “hey, the whole point of being here is networking”, but I guess that’s something no one can talk about in polite company.
Tell my parents this one.
On the other hand, is ‘success’ an existentialist concept (in that you have to define it yourself)? I would think it’d be near impossible to come to a consensus as to what is necessary and sufficient for success.
Sure, it’s vague. The point is that, for any plausible, conventional definition of success you might be able to come up with, a typical liberal arts degree is definitely insufficient and probably unnecessary to meet that definition’s criteria.
Or, it may not pass, and the American educational system may continue to gather detritus until it collapses. Anybody familiar enough with the Chinese Ming dynasty to rationally assess the similarities? I’m not.
Not to be pedantic, but that would be passing. I never claimed it would pass quickly.
Sounds like we need some heroic ninjas to fill the university water supply with concentrated hallucinogens and blast it with a giant portable microwave.
So what is the realistic alternative for those who have no other marketable skills, such as myself? (I specifically don’t have a high school diploma, though I suppose it would be trivially easy to nab a GED.)
Until the adjustment happens, there won’t be a common way because most people are still in the current inefficient mentality so you don’t get scaling effects. Whatever internships friends and family can offer would probably be the best alternative.
In the future, there will probably be some standardized test you’ll have to take at age 16-18 to show that you’re reasonably competent and your education wasn’t a sham. (The SAT tests could probably be used as they stand for this purpose.) Then, most people will go straight to unpaid or low-paid internships in the appropriate field, during which they may have to take classes to get a better theoretical background in their field (like college, but more relevant).
After a relatively short time, they will either prove their mettle and have contacts, experience, and opportunities, or realize it was a bad idea, cut their losses, and try something else. It sounds like a big downside, until you compare it to college today.
If you’re not happy with what they did in finance, why do you think they would have been useful in science?
They’re smart. They’re capable of figuring out a creative solution. And the financial instruments they designed were creative, for what they were intended, which was to hide risk and allow banks to offload mortgages to someone else. Someone benefited from the creativity, just not the average worker or consumer.
I recommend The Quants—those guys weren’t just hiding risk in mortgages, and it’s plausible that they were so hooked on competition and smugness that there was a lot of risk they were failing to see. They weren’t just hiding it on purpose.
Yes, capable of figuring out a creative solution to maximizing their goals when faced with the incentive structure of science. You think that the people who remain fail to do science when faced with these incentives, so why do you expect these others to be more altruistic?
I suppose that’s true, but there is such a thing as an equilibrium where the factors balance each other out. I do fear that the price might be too high, but again, when it becomes unreasonable, people look for cheaper options.
That’s kind of sad, actually, but no amount of government regulation can fix it. Unfortunately there is little real incentive for actual science in a purely capitalist society, though we’ve been doing well so far.
From the article:
I’m not sure how typical this experience is, but assuming it is as common as the article suggests: you don’t see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor’s, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that “capitalism works?” I’m not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it’s reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.
Maybe that was a little harsh. But the question is, why are “huge numbers of highly trained (~4 years for a bachelors, 5-7 for a Ph.D) [...] getting paid very little to work in conditions with almost no long-term job security?” The article suggests it’s because we have a surplus. But if those people weren’t so highly trained, would they then get those better jobs? Probably not; people don’t discriminate against you because you’re “highly trained”.
They likely wouldn’t, but I doubt that’s the point. I think the point is that if they weren’t so highly trained, their employment status would be more in line with their qualifications, as opposed to the current situation where Ph.Ds are doing jobs that could be done by less-credentialed people.
That sounds unlikely to me; I’ve heard the word ‘overqualified’ used to refer to that kind of discrimination.
And also, the money which is spent on useless “education” could be spent on something more useful, or at least more fun. People with mediocre incomes at least wouldn’t lose a lot of flexibility from indebtedness.
A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:
Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.
Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.
You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand in a deep, level-spanning way how it itself works. Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.
So: do you know any counterarguments or articles that address either of these points?
Two counters to the majoritarian argument:
First, it is being mentioned in the mainstream—there was a New York Times article about it recently.
Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought—nuclear war. I’ve been reading Bertrand Russell’s autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (UK’s upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.
Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed among physicists.
I think your second point is stronger. However, I don’t think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you’ve got something that’s like a human brain, but faster. Let it replicate itself, and you’ve got the equivalent of a team of humans, but which have the advantages of shared memory and instantaneous communication.
Now, if humans can design an AI, surely a team 1,000,000 human equivalents running 1000x faster can design an improved AI?
If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually I think today’s humanity is pretty close to understanding the human mind well enough to improve it.
I don’t think the number of AIs actually matters. If multiple AIs can do a job, then a single AI should be able to simulate them as though it were multiple AIs (or better yet just figure out how to do it on its own) and then do the job as well. Another thing to note is that if the AI makes a copy of its program and puts it in external storage, it doesn’t add any extra complexity to itself. It can then run its optimization process on the copy, although I do agree that it would be more practical if it only improved parts of itself at a time.
You’re right, I used the million AIs as an intuition pump, imitating Eliezer’s That Alien Message.
It depends upon what designing a mind is like. How much minds intrinsically rely on interactions between parts and how far those interactions reach.
In the brain most of the interesting stuff such as science and the like is done by culturally created components. The evidence for this is the stark variety of the worldviews that exist in the world and have existed in history (with most of the same genes) and the ways those views made those that hold them interact with the world.
Making a powerful AI, in this view, is not just a problem of making a system with lots of hardware or the right algorithms from birth; it is a problem of making a system with the right ideas. And ideas interact heavily in the brain. They can squash or encourage each other. If one idea goes, others that rely on it might go as well.
I suspect that we might be close to making the human mind able to store more ideas or make the ideas process more quickly. How much that will lead to the creation of better ideas I don’t know. That is, will we get a feedback loop? We might just get better at storing gossip and social information.
This is strictly true if you’re talking about the working memory that is part of a complete model of your “mind”. But a mind can access an unbounded amount of externally stored data, where a complete self-representation can be stored.
A Turing Machine of size N can run on an unbounded-size tape. A von Neumann PC with limited main memory can access an unbounded-size disk.
Although we can only load a part of the data into working memory at a time, we can use virtual memory to run any algorithm written in terms of the data as a whole. If we had an AI program, we could run it on today’s PCs and while we could run out of disk space, we couldn’t run out of RAM.
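A minimal sketch of the argument (the sizes here are made up): an algorithm over a huge dataset only ever needs a bounded working set in memory, with the rest living in external storage.

```python
# Summing a "disk file" far larger than working memory, touching only one
# fixed-size chunk at a time. The disk/RAM split is simulated, but the
# structure is the same as real out-of-core processing.
def chunked_sum(read_chunk, n_chunks):
    total = 0
    for i in range(n_chunks):
        total += sum(read_chunk(i))   # only one chunk is in "RAM" at once
    return total

disk = list(range(1000))                      # stands in for unbounded storage
read = lambda i: disk[i * 100:(i + 1) * 100]  # fetch chunk i into memory
print(chunked_sum(read, 10))                  # same answer as sum(disk)
```

The same pattern lets a bounded mind inspect an arbitrarily large self-representation stored externally, one piece at a time.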
I’d just forget the majoritarian argument altogether, it’s a distraction.
The second question does seem important to me, I too am skeptical that an AI would “obviously” have the capacity to recursively self-improve.
The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate that was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification.
It’s possible—and I find it intuitively plausible—that there is some inherent general limit to a mind’s capacity for self-knowledge, self-understanding and self-modification. But an intuition isn’t an argument.
I see Yoreth’s version of the majoritarian argument as ahistorical. The US Government did put a lot of money into AI research and became disillusioned. Daniel Crevier wrote a book AI: The tumultuous history of the search for artificial intelligence. It is a history book. It was published in 1993, 17 years ago.
There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today’s belief that AI is around the corner from yesterday’s belief that AI is around the corner. Wrong then, right now, because...
Alternatively one might argue that scaling died at 90 nanometers, practical computer science is just turning out Java monkeys, the low hanging fruit has been picked, there is no road map, theoretical computer science is a tedious sub-field of pure mathematics, partial evaluation remains an esoteric backwater, theorem provers remain an esoteric backwater, the theorem proving community is building the wrong kind of theorem provers and will not rejuvenate research into partial evaluation,...
The lack of mainstream interest in explosive developments in AI is due to getting burned in the past. Noticing that the scars are not fading is very different from being unaware of AI.
I’m reminded of a historical analogy from reading Artificial Addition. Think of it this way: a society that believes addition is the result of adherence to a specific process (or a process isomorphic thereto), and understands part of that process, is closer to creating “general artificial addition” than one that tries to achieve “GAA” by cleverly avoiding the need to discover this process.
We can judge our own distance to artificial general intelligence, then, by the extent to which we have identified constraints that intelligent processes must adhere to. And I think we’ve seen progress on this in terms of more refined understanding of e.g. how to apply Bayesian inference. For example, the work by Sebastian Thrun on how to seamlessly aggregate knowledge across sensors to create a coherent picture of the environment, which has produced tangible results (navigating the desert).
Can you point me to an overview of this understanding? I would like to apply it to the problem of detecting different types of data in a raw binary file.
I don’t know of a good one. You could try this, but it’s light on the math. I’m looking through Thrun’s papers to find a good one that gives a simple overview of the concepts, and through the CES documentation.
I was introduced to this advancement in EY’s Selling nonapples article.
And I’m not sure how this helps for detecting file types. I mean, I understand generally how they’re related, but not how it would help with the specifics of that problem.
Thanks I’ll have a look. I’m looking for general purpose insights. Otherwise you could use the same sorts of reasoning to argue that the technology behind deep blue was on the right track.
True, the specific demonstration of Thrun’s that I referred to was specific to navigating a terrestrial desert environment, but it was a much more general problem than chess, and had to deal with probabilistic data and uncertainty. The techniques detailed in Thrun’s papers easily generalize beyond robotics.
I’ve had a look, and I don’t see anything much that will make the techniques easily generalize to my problems (or any problem that has similar characteristics to mine, such as very large amounts of possibly relevant data). Oh, I am planning to use bayesian techniques. But easy is not how I would characterize the translating of the problem.
Now that you mention it, one of the reasons I’m trying to get acquainted with the methods Thrun uses is to see how much they rely on advance knowledge of exactly how the sensor works (i.e. its true likelihood function). Then, I want to see if it’s possible to infer enough relevant information about the likelihood function (such as through unsupervised learning) so that I can design a program that doesn’t have to be given this information about the sensors.
And that’s starting to sound more similar to what you would want to do.
That’d be interesting. More posts on the real world use of bayesian models would be good for lesswrong I think.
But I’m not sure how relevant to my problem. I’m in the process of writing up my design deliberations and you can judge better once you have read them.
Looking forward to it!
The reason I say that our problems are related is that inferring the relevant properties of a sensor’s likelihood function looks like a standard case of finding out how the probability distribution clusters. Your problem, that of identifying a file type from its binary bitstream, is doing something similar—finding what file types have what PD clusters.
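For concreteness, here is a minimal sketch of the kind of clustering-as-classification I have in mind (my own toy illustration, not anyone’s actual design): model each file type as a distribution over byte values and pick the type whose distribution best explains the observed bytes.

```python
# Sketch: identify a file's type from its raw bytes by modelling each type as
# a multinomial distribution over byte values and picking the type that makes
# the observed bytes most probable (naive Bayes with add-alpha smoothing).
import math
from collections import Counter

def byte_log_probs(samples, alpha=1.0):
    """Estimate log P(byte | type) from example files, with smoothing."""
    counts = Counter()
    for data in samples:
        counts.update(data)
    total = sum(counts.values()) + alpha * 256
    return [math.log((counts.get(b, 0) + alpha) / total) for b in range(256)]

def classify(data, models):
    """Return the type whose byte distribution best explains `data`."""
    return max(models, key=lambda t: sum(models[t][b] for b in data))

# Tiny demo with made-up "file types": ASCII-ish text vs. zero-padded binary.
models = {
    "text": byte_log_probs([b"hello world, this is plain text."]),
    "binary": byte_log_probs([bytes([0, 0, 0, 7, 0, 0, 0, 42])]),
}
print(classify(b"another ordinary sentence", models))     # -> text
print(classify(bytes([0, 0, 0, 9, 0, 0, 1, 3]), models))  # -> binary
```

A real system would of course need far richer features than raw byte frequencies, but the shape of the inference is the same.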
I know of partial evaluation in the context of optimization, but I hadn’t previously heard of much connection between that and AI or theorem provers. What do you see as the connection?
Or, more concretely: what do you think would be the right kind of theorem provers?
I think I made a mistake in mentioning partial evaluation. It distracts from my main point. The point I’m making a mess of is that Yoreth asks two questions:
I read (mis-read?) the rhetoric here as containing assumptions that I disagree with. When I read/mis-read it I feel that I’m being slipped the idea that governments have never been interested in AI. I also pick up a whiff of “the mainstream doesn’t know, we must alert them.” But mainstream figures such as John McCarthy and Peter Norvig know and are refraining from sounding the alarm.
So partial evaluation is a distraction and I only made the mistake of mentioning it because it obsesses me. But it does! So I’ll answer anyway ;-)
Why am I obsessed? My Is Lisp a Blub post suggests one direction for computer programming language research. Less speculatively, three important parts of computer science are compiling (i.e., hand compiling), writing compilers, and tools such as Yacc for compiling compilers. The three Futamura projections provide a way of looking at these three topics. I suspect it is the right way to look at them.
Lambda-the-ultimate had an interesting thread on the type-system feature-creep death-spiral. Look for the comment by Jacques Carette at Sun, 2005-10-30 14:10 linking to Futamura’s papers. So there is the link to having a theorem prover inside a partial evaluator.
Now partial evaluation looks like it might really help with self-improving AI. The AI might look at its source, realise that the compiler it is using to compile itself is weak because it is a Futamura-projection-based compiler with an underpowered theorem prover, prove some of the theorems itself, re-compile, and start running faster.
Well, maybe, but the overviews I’ve read of the classic text by Jones, Gomard, and Sestoft make me think that the state of the art only offers linear speed-ups. If you write a bubble sort and use partial evaluation to compile it, it stays order n squared. The theorem prover will never transform it into an n log n algorithm.
I’m trying to learn ACL2. It is a theorem prover, and you can do things such as proving that quicksort and bubble sort agree. That is a nice result, and you can imagine it fitting into a bigger picture: the partial evaluator wants to transform a bubble sort into something better, and the theorem prover can anoint the transformation as correct. I see two problems.
First, the state of the art is a long way from being automatic. You have to lead the theorem prover by the hand. It is really just a proof checker. Indeed the ACL2 book says
it is a long way from proving (bubble sort = quick sort) on its own.
Second, that doesn’t actually help: there is no sense of performance here. It only says that the two sorts agree, without saying which is faster. I can see a way to fix this. ACL2 can be used to prove that interpreters conform to their semantics. Perhaps it can be used to prove that an instrumented interpreter performs a calculation in fewer than n log n cycles. Lifting the proofs from proofs about programs to proofs about interpreters running programs would thus allow ACL2 to talk about performance.
This solution to problem two strikes me as infeasible. ACL2 cannot cope with the base level without hand holding, which I have not managed to learn to give. I see no prospect of lifting the proofs to include performance without adding unmanageable complications.
Could performance issues be built in to a theorem prover, so that it natively knows that quicksort is faster than bubble sort, without having to pass its proofs through a layer of interpretation? I’ve no idea. I think this is far ahead of the current state of computer science. I think it is preliminary to, and much simpler than, any kind of self-improving artificial intelligence. But that is what I had in mind as the right kind of theorem prover.
There is a research area of static analysis and performance modelling. One of my Go-playing buddies has just finished a PhD in it. I think that he hopes to use the techniques to tune up the performance of the TCP/IP stack. I think he is unaware of and uninterested in theorem provers. I see computer science breaking up into lots of little specialities, each of which takes half a lifetime to master. I cannot see the threads being pulled together until the human lifespan is 700 years instead of 70.
Ah, thanks, I see where you’re coming from now. So ACL2 is pretty much state-of-the-art from your point of view, but as you point out, it needs too much handholding to be widely useful. I agree, and I’m hoping to build something that can perform fully automatic verification of nontrivial code (though I’m not focusing on code optimization).
You are right, of course, that proving quicksort is faster than bubble sort is considerably more difficult than proving they are equivalent.
But the good news is, there is no need! All we need to do to check which is faster is throw some sample inputs at each and run tests. To be sure, that approach is fallible, but what of it? The optimized version only needs to be probably faster than the original. A formal guarantee is only needed for equivalence.
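A sketch of what I mean (the trial counts and input sizes here are my own invented example):

```python
# "Just test it": time both candidates on random sample inputs and keep
# whichever wins more trials -- a fallible but cheap selection rule.
import random, time

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def probably_faster(f, g, trials=20, n=400):
    """Return f or g, whichever wins more timed trials on random inputs."""
    wins = 0
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter(); f(list(data)); t1 = time.perf_counter()
        g(list(data)); t2 = time.perf_counter()
        wins += (t1 - t0) < (t2 - t1)
    return f if wins > trials / 2 else g

winner = probably_faster(sorted, bubble_sort)
print(winner.__name__)  # sorted -- but equivalence still needs a proof
```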
“But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests.”
“No need”? Sadly, it’s hard to use such simple methods as anything like a complete replacement for proofs. As an example which is simultaneously extreme and simple to state: naive quicksort has good expected asymptotic performance, but its (very unlikely) worst-case performance falls back to bubble-sort-like quadratic time. Thus, if you use quicksort naively (without, e.g., randomizing the input in some way) somewhere where an adversary has strong influence over the input seen by quicksort, you create a vulnerability to a denial-of-service attack. This is easy to understand with proofs, and not so easy either to detect or to quantify with random sampling. Also, the pathological input has low Kolmogorov complexity, so the universe might well happen to give it to your system accidentally even in situations where you aren’t faced with an actual malicious intelligent “adversary.”
Also sadly, we don’t seem to have very good standard technology for performance proofs. Some years ago I made a horrendous mistake in an algorithm preprint, and later came up with a revised algorithm. I also spent more than a full-time week studying and implementing a published class of algorithms and coming to the conclusion that I had wasted my time because the published claimed performance is provably incorrect. Off and on since then I’ve put some time into looking at automated proof systems and the practicalities of proving asymptotic performance bounds. The original poster mentioned ACL2; I’ve looked mostly at HOL Light (for ordinary math proofs) and to a lesser extent Coq (for program/algorithm proofs). The state of the art for program/algorithm proofs doesn’t seem terribly encouraging. Maybe someday it will be a routine master’s thesis to, e.g., gloss Okasaki’s Purely Functional Data Structures with corresponding performance proofs, but we don’t seem to be quite there yet.
Part of the problem with these is that there are limits to how much can be proven about correctness of programs. In particular, the general question of whether two programs will give the same output on all inputs is undecidable.
Proposition: There is no Turing machine which when given the description of two Turing machines accepts iff both the machines will agree on all inputs.
Proof sketch: Consider our hypothetical machine A that accepts descriptions iff they correspond to two Turing machines which agree on all inputs. We shall show how to construct from A a machine H which would solve the halting problem. Note that for any given machine D we can construct a machine [D, s] which mimics D when fed input string s (simply append states to D so that the machine first erases everything on the tape, writes out s, and then executes the normal procedure for D). Then, to determine whether a given machine T accepts a given input s, ask machine A whether [T, s] agrees with the machine that always accepts. Since we have now constructed a Turing machine which solves the halting problem, our original assumption, the existence of A, must be false.
There are other theorems of a similar nature that can be proven with more work. The upshot is that in general, there are very few things that a program can say about all programs.
Wouldn’t it have been easier to just link to Rice’s theorem?
I didn’t remember the name of the theorem and my Google-fu is weak.
True. Test inputs suffice for an optimizer that on average wins more than it loses, which is good enough to be useful, but if you want guaranteed efficiency, that comes back to proof, and the current state-of-the-art is a good way short of doing that in typical cases.
Partial evaluation is interesting to me in an AI sense. If you haven’t already, have a look at the three projections of Futamura.
But instead of compilers and language specifications you have learning systems and problem specifications. Or something along those lines.
Right, that’s optimization again. Basically the reason I’m asking about this is that I’m working on a theorem prover (with the intent of applying it to software verification), and if Alan Crowe considers current designs the wrong kind, I’m interested in ideas about what the right kind might be, and why. (The current state of the art does need to be extended, and I have some ideas of my own about how to do that, but I’m sure there are things I’m missing.)
Why is the word obviously in quotes?
Because I am not just saying it’s not obvious an AI would recursively self-improve, I’m also referring to Eliezer’s earlier claims that such recursive self-improvement (aka FOOM) is what we’d expect given our shared assumptions about intelligence. I’m sort-of quoting Eliezer as saying FOOM obviously falls out of these assumptions.
I’m worried about the “sort-of quoting” part. I get nervous when people put quote marks around things that aren’t actually quotations of specific claims.
Noted, and thanks for asking. I’m also somewhat over-fond of scare quotes to denote my using a term I’m not totally sure is appropriate. Still, I believe my clarification above is sufficient that there isn’t any ambiguity left now as to what I meant.
Stephen Hawking, Martin Rees, Max Tegmark, Nick Bostrom, Michio Kaku, David Chalmers and Robin Hanson are all smart people who broadly agree that >human AI in the next 50-100 years is reasonably likely (they’d all give p > 10% to that with the possible exception of Rees). On the con side, who do we have? To my knowledge, no one of similarly high academic rank has come out with a negative prediction.
Edit: See Carl’s comment below. Arguing majoritarianism against a significant chance of AI this century is becoming less tenable, as a significant set of experts come down on the “yes” side.
It is notable that I can’t think of any very reputable nos. The ones that come to mind are Jaron Lanier and that Glenn Zorpette.
10% is a low bar, it would require a dubiously high level of confidence to rule out AI over a 90 year time frame (longer than the time since Turing and Von Neumann and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would estimate the average of the group you mention as over 1/3rd by 2100. Chalmers says AI is more likely than not by 2100, I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks but not given unambiguous estimates).
Here’s Ben Goertzel’s survey. I think that Dan Dennett’s median estimate is over a century, although at the 10% level by 2100 I suspect he would agree. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of the time to human-level AI from the 21st century to the mid-to-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.
None of those people are AI theorists so it isn’t clear that their opinions should get that much weight given that it is outside their area of expertise (incidentally, I’d be curious what citation you have for the Hawking claim). From the computer scientists I’ve talked to, the impression I get is that they see AI as such a failure that most of them just aren’t bothering to do much in the way of research in it except for narrow purpose machine learning or expert systems. There’s also an issue of a sampling bias: the people who think a technology is going to work are generally more loud about that than people who think it won’t. For example, a lot of physicists are very skeptical of Tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be practical.
Note also that nothing in Yoreth’s post actually relied on or argued that there won’t be moderately smart AI so it doesn’t go against what he’s said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson do believe that some form of intelligence explosion like event will occur). Indeed, Yoreth’s second argument applies roughly to any level of intelligence. So overall, I don’t think the point about those individuals does much to address the argument.
I disagree with this, basically because AI is a pre-paradigm science. Having been at a big CS/AI dept, I know that the amount of accumulated wisdom about AI is virtually nonexistent compared to that for physics.
What does an average AI prof know that a physics graduate who can code doesn’t know? I’m struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.
The only examples of genuine scientific insight in AI I have seen are in the works of Pearl, Hutter, Drew McDermott and, recently, Josh Tenenbaum.
That’s a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn’t likely to have much general knowledge. However, that does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that’s just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth are clearly people who have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I’ll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.
Upvoted for honest debating!
So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs? What do they teach in AI courses, if not the kind of thing that would make you better at this?
How to code, and rookie Bayesian stats/ML, plus some other applied stuff, like statistical Natural Language Processing (this being an application of the ML/stats stuff, but there are some domain tricks and tweaks you need).
The point is that there would only be experience, not theory, separating someone who knew Bayesian stats, coding and how to do science from an AI “specialist”. Yes, there are little shortcuts and details that a PhD in AI would know, but really there’s no massive intellectual gulf there.
I am gratified to find that someone else shares this opinion.
A better way to phrase the question might be: what can an average AI prof. do that a physics graduate who can code, can’t?
Each prof will, of course, have a niche app that they do well (in fact sometimes there is too much pressure to have a “trick” you can do to justify funding), but the key question is: are they more like a software engineer masquerading as a scientist than a real scientist? Do they have a paradigm and theory that enables thousands of engineers to move into completely new design-spaces?
I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.
I have seen some instances of people trying to push forward the frontier, such as the work of Hutter, but it is very rare.
Statistics vs machine learning: FIGHT!
Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a “Hutter enthusiast”, but I eventually concluded that his entire work is:
“Here’s a few general algorithms that are really good, but take way too long to be of any use whatsoever.”
Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?
I think that the way of looking at the problem that he introduced is the key, i.e. thinking of the agent and environment as programs. The algorithms (AIXI, etc) are just intuition pumps.
Surely everyone has been doing that from the beginning.
This seems like a fairly reasonable description of the work’s impact:
“Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area.”
http://www.vetta.org/2010/03/agi-10-and-fhi/
But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter’s intelligence definition, which would distinguish it from a mere mass frenzy?
No time for a long explanation from me—but “universal intelligence” seems important partly since it shows how simple an intelligent agent can be—if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.
Machine learning, more math/probability theory/belief networks background?
A good physics or math grad who has done bayesian stats is at no disadvantage on the machine learning stuff, but what do you mean by “belief networks background”?
Do you mean “deep belief networks”?
There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental stuff in this range has applications to physics, and some of it was known in physics before getting reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it’s more than the standard curriculum requires. On the other hand, it’s much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular adds unique intuitions that a physics grad won’t necessarily possess.
Re: “What does an average AI prof know that a physics graduate who can code doesn’t know? I’m struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.”
A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.
The AI prof is more likely to know more things that don’t work and the difficulty of finding things that do. Which is useful knowledge when predicting the speed of AI development, no?
Which things?
Trying to model the world as crisp logical statements a la block worlds for example.
That being in the “things that don’t work” category?
Yup… which things were you asking for? Examples of things that do work? You don’t actually need to find them to know that they are hard to find!
I think Hofstadter could fairly be described as an AI theorist.
So could Robin Hanson.
Dan Dennett and Douglas Hofstadter don’t think machine intelligence is coming anytime soon. Those folk actually know something about machine intelligence, too!
Re: “can a mind understand itself?”
That is no big deal: copy the mind a few billion times, and then it will probably collectively manage to grok its construction plans well enough.
Another argument against the difficulties of self-modeling point: It’s possible to become more capable by having better theories rather than by having a complete model, and the former is probably more common.
It could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself.
Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn’t trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed.
What if it just works on having a better understanding of math, logic, and probability?
In addition to theoretical objections, I think the majoritarian argument is factually wrong. Remember, ‘future is here, just not evenly distributed’.
http://www.google.com/trends?q=singularity shows a trend
http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all — this week in the NYT. Major MSFT and GOOG involvement.
http://www.acceleratingfuture.com/michael/blog/2010/04/transhumanism-has-already-won/
Re: “http://www.google.com/trends?q=singularity shows a trend”
Not much of one—and also, this is a common math term—while:
“Your terms—“technological singularity”—do not have enough search volume to show graphs.”
The critical aspect of a “major-impact intelligence-explosion singularity” isn’t the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.
I don’t have any articles but I’ll take a stab at counterarguments.
A Majoritarian counterargument: AI turned out to be harder and further away than originally thought. The general view is still tempered by the failure of AI to live up to those expectations. In short, the AI researchers cried “wolf!” too much 30 years ago and now their predictions aren’t given much weight because of that bad track record.
A mind can’t understand itself counterargument: Even accepting as a premise that a mind can’t completely understand itself, that’s not an argument that it can’t understand itself better than it currently does. The question then becomes which parts of the AI mind are important for reasoning/intelligence and can an AI understand and improve that capability at a faster rate than humans.
Re: “If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.”
? I see plenty of scaremongering around machine intelligence. So far, few governments have supported it—which seems fairly sensible of them.
How do we know that governments aren’t secretly working on AI?
Is it worth speculating about the goals which would be built into a government-designed AI?
Regarding majoritarianism:
Crash programs in basic science because of speculative applications are very uncommon. Decades of experimentation with nuclear fission only brought a crash program with the looming threat of the Nazis, and after a practical demonstration of a chain reaction.
Over the short time spans over which governments make their plans, the probability of big advances in AI basic science is relatively small, even if it is substantial over the longer term. So you get all the usual issues with attending to improbable (in any given short period) dangers that no one has recent experience with. Note things like hurricane Katrina, the Gulf oil spill, etc. The global warming effects of fossil fuel use have been seen as theoretically inevitable since at least the Eisenhower administration, and momentum for action only got mobilized after a long period of actual warming provided pretty irrefutable (and yet widely rejected anyway!) evidence.
IBM’s Watson AI trumps humans in “Jeopardy!”
http://news.ycombinator.com/item?id=1436625
Thanks a lot for the link. I remember Eliezer arguing with Robin whether AI will advance explosively by using few big insights, or incrementally by amassing encoded knowledge and many small insights. Watson seems to constitute evidence in favor of Robin’s position as it has no single key insight:
Direct link to printable (readable) single-page version of the article.
A question: Do subscribers think it would be possible to make an open-ended self-improving system with a perpetual delusion—e.g. that Jesus loves it?
Yes, in that it could be open-ended in any “direction” independent of the delusion. However, that might require contrived initial conditions or cognitive architecture. You might also find the delusion becoming neutralized for all practical purposes, e.g. the delusional proposition is held to be true in “real reality” but all actual actions and decisions pertain to some “lesser reality”, which turns out to be empirical reality.
ETA: Harder question: are there thinking systems which can know that they aren’t bounded in such a way?
I’m thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes’ Theorem. Would people be interested in such a post?
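To illustrate where P(B) enters (a standard textbook example, sketched in code; the numbers are the usual illustrative ones, not real data):

```python
# In Bayes' theorem, P(A|B) = P(B|A) P(A) / P(B), the denominator P(B) must be
# assembled from *every* hypothesis that could have produced the evidence,
# via the law of total probability -- which is where real-world applications
# get hard: you rarely have an exhaustive hypothesis set.

def posterior(prior, likelihood, hypothesis):
    """P(hypothesis | evidence), given priors and likelihoods for ALL hypotheses."""
    p_b = sum(prior[h] * likelihood[h] for h in prior)  # the troublesome P(B)
    return prior[hypothesis] * likelihood[hypothesis] / p_b

# Textbook case: a 99%-sensitive, 95%-specific test for a 1%-prevalence disease.
prior = {"sick": 0.01, "healthy": 0.99}
likelihood = {"sick": 0.99, "healthy": 0.05}  # P(positive test | hypothesis)
print(round(posterior(prior, likelihood, "sick"), 3))  # 0.167
```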
Funny, I’ve been entertaining the same idea for a few weeks.
Every time I read statements like ”… and then I update the probabilities, based on this evidence …”, I think to myself: “I wish I had the time (or processing power) he thinks he has. ;)”
Sure
yay! music composition AI
We’ve had them for a while though, but who knows, we might have our first narrowly focused AI band pretty soon.
Good business opportunity there... maybe this is how the SIAI will guarantee unlimited funding in the future? :)
Thanks for the link.
Mozart developed the Mozart sonata.
Great article. Thanks for the link!
Good music isn’t about good music. It’s about whether music authorities have approved of it.
What about saleable pop music?
Replicator constructed in Conway’s Life
One of Eliezer’s posts talks about realizing that conventional science is content with an intolerably slow pace. Here we have an example of less time leading to a better solution.
Apparently it doesn’t replicate itself any more than a glider does; the old copy is destroyed as it creates a new copy.
Reading the conwaylife.com thread gives a better sense of this thingie’s importance than the comparison with a glider. ;)
Now I’m wondering what screen resolution and how many potions of longevity would be required to evolve intelligent life while playing ADOM.
An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said “70% chance of rain/snow/whatever,” and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.
I wonder whether they actually do care about being well calibrated? Probably not, I suppose their computers just spit out a number and they report it. But it would be interesting to find out.
I will report my findings here, if you are interested, and if I stay interested.
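For what it’s worth, the bookkeeping is simple; here is a sketch with entirely made-up forecast records, just to show the shape of the analysis:

```python
# Calibration check: bucket forecasts by the stated probability, then compare
# each bucket's stated probability with the observed frequency of the event.
from collections import defaultdict

def calibration(records):
    """records: list of (stated_probability, it_actually_rained) pairs."""
    buckets = defaultdict(list)
    for p, rained in records:
        buckets[p].append(rained)
    return {p: sum(o) / len(o) for p, o in sorted(buckets.items())}

# Invented forecast history, purely illustrative.
records = ([(0.7, True)] * 7 + [(0.7, False)] * 3 +
           [(0.3, True)] * 4 + [(0.3, False)] * 6)
print(calibration(records))  # {0.3: 0.4, 0.7: 0.7} -- 0.7 calibrated, 0.3 off
```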
Note that this sort of thing has been done a bit before. See for example this analysis.
Edit: The linked analysis has a lot of problems. See discussion below.
Cool, but hold on a minute though. I quote:
Isn’t something wrong here? If you say “60% chance of rain,” and it doesn’t rain, you are not necessarily a bad forecaster. Not unless it actually rained on less (or more!) than 60% of those occasions. It should rain on ~60% of occasions on which you say “60% chance of rain.”
Am I just confused about this fellow’s methodology?
If I’m reading this correctly they are doing exactly what you want but only breaking into two categories “more likely to rain than not” and “less likely to rain than not.” But I’m confused by the fact that 50 percent gets into the expecting rain category.
Okay, this is like a sore tooth. Somebody’s wrong, and I don’t know if it’s me. A queasy feeling.
Listen to this though:
Uhhh.… it’s remarkable that a forecast changed significantly in SEVEN DAYS? What?!
The weather is the canonical example of mathematical chaos in an (in principle) deterministic system. Of course the forecasts will change, because Tuesday’s weather sets the initial conditions for Wednesday, and chaotic systems are ultra-sensitive to initial conditions! The forecasters would be idiots if they didn’t update their forecasts as much as possible.
The “close second,” moreover, should be first! That change occurred in a two day period versus a seven! ARGGHHH.
To me it almost seems as though a scenario like this is happening:
In other words, isn’t the author misrepresenting the forecasters in throwing away their POPs, which could be interpreted as subjective beliefs about likelihoods?
I was also sort of confused by:
Is changing the forecast as new information comes in a bad thing?? Or is it merely that they are changing the forecast too much?
Nota bene: I am also very tired and may just be being thickheaded—I rate that possibility at about 50%, and you’re welcome to check my calibration. =)
Related thought: maybe see if they will give you their data? That would save you some time, and I’m now very interested in whether a more careful analysis will substantially disagree with their results.
Oh. I see. Yes, they aren’t taking into account the accuracy estimations at all. Your criticism seems correct. Your complaints about the other aspects seem accurate also.
Huh. This is disturbing; most of the Freakonomics blog entries I’ve read have good analysis of data. It looks like this one really screwed the pooch. I have to wonder if others they’ve done have similar problems that I haven’t noticed.
Yeah, I am a fan of Freakonomics generally too. I will write to them, I think. Will let you know how it goes. I want to confirm I am right about the probability stuff though, I still have a niggling doubt that I’ve just misunderstood something. But I think they are definitely wrong about the forecast updating.
I think the criticism is that if they need to change their predictions so much between time 1 and time 2, then it is irresponsible to make any prediction at time 1. This is a hard case to make out for the temperature swings, since I think 8 degrees is only about one standard deviation for a prediction of a day’s temperature in a city knowing only what day of the year it is, but it’s an easy case to make out for the precipitation swings: if, on average, you are wrong by 40% objective probability (not even 40% error; 40% chance of rain, here), then a prediction of, e.g., 30% will on average convey virtually no information; that could easily mean 0% or it could easily mean 70%, and without too much implausibility it could even mean 90% -- so why bother saying 30% at all when you could (more honestly) admit your ignorance about whether it will rain next week?
In the meteorologists’ defense, their medium-range predictions become useful when tested against broader time periods. Specifically, a 60% chance of rain on Thursday means you can be pretty sure that it will rain on Wednesday, Thursday, or Friday—perhaps with 90% confidence. The reason for this is that predictions of rain generally come from tracking low-pressure pockets of air as they sweep across the continent; these pockets might speed up or slow down, or alter their course by a few degrees, but they rarely disappear or turn around altogether.
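This “the front still arrives, just maybe a day early or late” story can be sketched with a toy simulation. The tracking-error parameter below is invented, tuned so that the single-day hit rate lands near 60%:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical model: a rain front is forecast for day 0 (Thursday), but its
# actual arrival day is jittered by tracking error. The standard deviation
# (0.6 days) is a made-up number chosen to illustrate the effect.
arrival = np.rint(rng.normal(0.0, 0.6, size=n))

hit_thursday = np.mean(arrival == 0)          # rain on Thursday exactly
hit_window = np.mean(np.abs(arrival) <= 1)    # Wednesday, Thursday, or Friday

print(hit_thursday, hit_window)
```

A roughly 60% single-day prediction becomes a 90%-plus prediction over the three-day window, because the front rarely disappears, it just slips a day.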
This is a much more reasonable testing method when one’s predictions are based on an alleged causal process. For example, suppose I claim that I can predict how many cards Bob will draw in a game of blackjack by taking into consideration all of the variables in the game. A totally naive predictor might be “Bob will hit no matter what.” That predictor might be right about 60% of the time. A slightly better predictor might be “Bob will hit if his cards show a total of 13 or less.” That predictor might be right about 70% of the time. If I, as a skilled blackjack kibitzer, can really add predictive value to these simple predictors, then I should be able to beat their hit-miss ratio, maybe getting Bob’s decision right 75% of the time. If I knew Bob quite well and could read his tells, maybe I would go up to 90%.
Anyway, 66% is pretty good for a blind guess that can’t be varied from episode to episode. So the test with the die that you’re using in your analogy is a fair test, but the bar is set too high. If you can get 66% on a hit-miss test with a one-sentence rule, you’re doing pretty well.
Point taken about forecast updating—information changing that drastically may be merely worthless noise.
However, on the coin toss/blackjack thing...
In your blackjack example, the answer you give is binary—Bob will either say “hit me” or “[whatever the opposite is, I’ve never played].” The meteorologists are giving answers in terms of probabilities: “there is a 70% chance that it will rain.”
If you did that in the blackjack example, i.e., you said “I rate it as 65% likely that Bob will take another card,” and then he DIDN’T take another card, that would not mean you were bad at predicting; we would have to watch you for longer.
My complaint is that the author interpreted forecasters’ probabilities as certainties, rounding them up to 1 or down to 0. This was unfair as it ignored their self-stated levels of confidence.
Sorry, I didn’t communicate clearly.
Correct. However, suppose we repeat this experiment 100 times, each time reducing my probability estimate to a binary prediction of hit-stay. Suppose that Bob hits 60 times, 50 of which were on occasions when I assigned greater than 50% probability to Bob hitting, and Bob stays 40 times, 13 of which were on occasions when I assigned less than 50% probability to Bob hitting. Thus, my overall accuracy, when reduced to a hit-stay prediction, is 63%. This is worse than my claimed certainty level of 65%, but better than the naive predictor “Bob always hits,” which only got 60% of the episodes right. Thus, the pass-fail test is one way of distinguishing my predictive abilities from the predictive abilities of a broad generalization.
To see this, suppose instead that I always predict, with 65% certainty, that Bob will hit or that Bob will stay. I might rate the chance of Bob hitting at 65%, or I might rate it at 35%. In this experiment, Bob hits 75 times, 50 of which were on occasions when I assigned a 65% probability that Bob would hit. Bob stays 25 times, 18 of which were on occasions when I assigned a 65% probability that Bob would stay. I correctly predicted Bob’s action 68% of the time, which is better than my stated certainty of 65%. However, my accuracy is worse than the accuracy of the naive predictor “Bob always hits,” which would have scored 75%. Thus, my predictions are not very good, by one relatively objective benchmark, despite the fact that they are, in a narrow Bayesian sense, fairly well-calibrated.
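For anyone who wants to check the arithmetic, the two hypothetical experiments described above tally out like this:

```python
# A sketch tallying the two hypothetical experiments described above.
# All counts are the ones given in the text, not real data.

def accuracy(correct, total):
    return correct / total

# Experiment 1: Bob hits 60 times (50 correctly predicted as hits),
# stays 40 times (13 correctly predicted as stays).
exp1_mine = accuracy(50 + 13, 100)   # my reduced hit-stay accuracy
exp1_naive = accuracy(60, 100)       # "Bob always hits"

# Experiment 2: Bob hits 75 times (50 predicted at 65% to hit),
# stays 25 times (18 predicted at 65% to stay).
exp2_mine = accuracy(50 + 18, 100)   # my reduced hit-stay accuracy
exp2_naive = accuracy(75, 100)       # "Bob always hits"

print(exp1_mine, exp1_naive, exp2_mine, exp2_naive)
```

In the first experiment I beat the naive predictor (63% vs. 60%); in the second I lose to it (68% vs. 75%) despite apparently good calibration, which is the whole point.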
Again, sorry for the confusion. I gave an incomplete example before.
So if I understand correctly, the issue is not that the meteorologists are poorly calibrated (maybe they are, maybe they aren’t), but rather that their predictions are less useful than a simple rule like “it never rains” for actually predicting whether it will rain or not.
I think I am beginning to see the light here. Basically, in this scenario you are too ignorant of the phenomenon itself, even though you are very good at quantifying your epistemic state with respect to the phenomenon? If this is more or less right, is there terminology that might help me get a better handle on this?
Bingo! That’s exactly what I was trying to say. Thanks for listening. :-)
My jargon mostly comes from political science. We’d say the meteorologists are using an overly complicated model, or seizing on spurious correlations, or that they have a low pseudo-R-squared. I’m not sure any of those are helpful. Personally, I think your words—the meteorologists are too ignorant for us to applaud their calibration—are more elegant.
The only other thing I would add is that the reason it doesn’t make sense to applaud the meteorologists’ guess-level calibration is that they have such poor model-level calibration. In other words, while their confidence about any given guess seems accurate, their implicit confidence about the accuracy of their model as a whole is too high. If your (complex) model does not beat a naive predictor, social science (and, frankly, Occam’s Razor) says you ought to abandon it in favor of a simpler model. By sticking to their complex models in the face of weak predictive power, the meteorologists suggest that either (1) they don’t know or care about Occam’s Razor, or (2) they actually think their model has strong predictive power.
Here’s a really crude indicator of improvement in weather forecasting: I can remember when jokes about forecasts being wrong were a cliche. I haven’t heard a joke about weather forecasts for years, probably decades, which suggests that forecasts have actually gotten fairly good, even if they’re not as accurate as the probabilities in the forecasts suggest.
Does anyone remember when weatherman jokes went away?
Can we conclude that the drop in the cliche’s prevalence is related to the quality of weather forecasting? All else being equal, I expect a culture to develop a resistance to any given cliche over time. For example, the cliche “It’s not you, it’s me” has dropped in use and been somewhat relegated to ‘second-order cliche’. But it is true now at least as much as it has been in the past.
A fair point, though if a cliche has lasted for a very long time, I think it’s more plausible that its end is about changed conditions rather than boredom.
Gotcha. Thanks for the explanation, it’s been very clarifying. =)
Econ question: if a child is renting an apartment for $X, and the parents have a spare apartment that they are currently renting out for $Y, would it help or hurt the economy if the child moved into that apartment instead? Consider the cases X<Y, X=Y, X>Y.
Good question, not because it’s hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.
If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don’t.
If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don’t.
If potential renters or the existing ones prefer your parents’ unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren’t.
ANYTHING beyond that—anything whatsoever—is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as a good correlate of general economic health. Today, they are taken to define economic health, irrespective of how well people’s wants are being satisfied, which is supposed to be what we mean by a “good economy”.
Today, economists equate growing GDP—irrespective of measuring artifacts that make it deviate from what we want it to measure—with a good economy. If the economy isn’t doing well enough, well, we need more “aggregate demand”—you see, people aren’t buying enough things, which must be bad.
Never once has it occurred to anyone in the mainstream (and to very few outside of the mainstream) that it’s okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than by whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.
This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.
Now, it’s true there are prisoner’s dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is … um … more pointless work that doesn’t satisfy real demand … but hey, it keeps up “aggregate demand”, so it must be what a sluggish economy needs.
Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction—despite most people being made better off and efficiency improving. If people work longer hours than they’d like, to produce stuff no one wants, well, that shows up as more GDP, and it’s therefore “good”.
How the **** did we get into this mindset?
Sorry, [/another rant].
What isn’t reflected in the GDP is huge.
There’s the underground economy—I’ve seen claims about the size of it, but how would you check them?
There’s everything people do for each other without it going through the official economy.
And there’s what people do for themselves—every time you turn over in bed, you are presumably increasing value. If you needed paid help, it would be adding to the GDP.
I don’t understand where you acquired this view of economists. I am an economist, and I assure you economists don’t subscribe to the “measured GDP is everything” view you attribute to them.
This is not an accurate portrayal of what Keynesians believe. The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.
The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy; low-quality spending by government drives high-quality spending by the private sector.
If you wish to be sceptical of this story (I’m fairly dubious about it myself), then fine, but Keynesians aren’t arguing what you think they’re arguing.
No, that’s precisely what I assumed they’re arguing, and I believe my points were completely responsive. I will address the position you describe in the context of the criticism in my rant.
Now, unpack the meaning of all of those terms, back to the fundamentals we really care about, and what is all that actually saying? Well, first of all, have you played rationalist taboo with this and tried to phrase everything without economics jargon, so as to fully break down exactly what all the above means at the layperson level? To me, economists seem to talk as if they have not done so.
I would like for you to tell me whether you have done so in the past, and write up the phrasing you get before reading further. You’ve already tabooed a lot, but I think you need to go further, and remove the terms: recession, depression, stimulus, excessive, pessimism, invest, and economic activity. (What’s left? Terms like prefer, satisfaction, wants, market exchange, resources, working, changing actions.)
Now, here’s what I get: (bracketed phrases indicate a substitution of standard economic jargon)
“People [believe that future market interactions with others will be less capable of satisfying their wants], which leads them to [allocate resources so as to anticipate lower gains from such activity]. As people do this, the combined effect of their actions is to make this suspicion true, [increasing the relative benefit of non-market exchanges or unmeasured market exchanges].
“The government should therefore [purchase things on the market] in order to produce a [false signal of the relative merit of selling certain goods], and facilitate production of [goods people don’t want at current prices or that they previously couldn’t justify asking their government to provide]. This, then, becomes a self-fulfilling prophecy: once people [sell unwanted goods due to this government action], it actually becomes beneficial for others to sell goods people do want on the market, [preventing a different kind of adjustment to conditions from happening].”
Phrased in these terms, does it even make sense? Does it even claim to do something people might want?
That was a very useful exercise since it helped me identify the key point of disagreement between you and Keynesianism. If I’m right, you’re coming at this from a goods-market perspective, i.e. “I, a typical consumer, am not interested in any of these goods at these prices, so I’m not going to buy so much”, whereas the Keynesians are blaming this kind of attitude: “I, a typical consumer, am fearful of the future. While I want to buy stuff, I’d better start saving for the future instead in case I lose my job”, and it’s the saving that triggers the recession (money flows out of the economy into savings, this fools people into thinking they are poorer, and the death spiral begins).
A couple of other contextual points: 1) The fiscal stimulus that Keynes recommended was based on governments running deficits, not necessarily spending more. Cutting taxes works just as well.
2) Keynes was trying to reduce the magnitude of boom-bust swings, not increase trend economic growth rates. As such, he prescribed the opposite behaviour in boom times: have government run surpluses to tamp down consumer exuberance. This is less widely known since politicians only ever talk about Keynes during recessions, when it gives them intellectual cover to spend lots of money.
3) The Keynesian consensus is not universal. Arnold Kling’s “recalculation” story is much closer to your picture, and you’ll notice he doesn’t advocate stimulus, but rather waiting to see how people adjust to the new economic circumstances.
4) GDP is the preoccupation of macroeconomists. Microeconomists (like me) care much more about allocative efficiency, which is to say: to what extent are things in the hands of the people who value them most? So there’s a whole branch of the profession to which your initial GDP-centrism comment does not apply.
It’s points 3 and 4 in particular that lead me to object to your claim that economists are obsessed with GDP. To my way of thinking, it’s politicians that are obsessed with GDP, because they believe their chances of re-election are tied to economic growth and unemployment figures. So they spend a lot of time asking economists how to increase GDP, and therefore economists more often than not discuss GDP when they appear in public.
It’s still not clear to me that you’ve done what I asked (taboo your model’s predicates down to fundamentals laypeople care about), or that you have the understanding that would result from having done what I asked.
What’s the difference between the “goods market” perspective and the “blaming this kind of attitude”/Keynesian perspective? Why is one wrong or less helpful, and what problems would result from using it?
Why is it bad for people to believe they are poorer when they are in fact poorer?
Why is it bad for more money to go into savings? Why does “the economy” entirely hinge on money not doing this?
Until you can answer (or avoid assuming away) those problems, it’s not clear to me that your understanding is fully grounded in what we actually care about when we talk about a “good economy”, and so you’re making the same oversights I mentioned before.
No, I’m not making those oversights because I am a) not a Keynesian and b) not a macroeconomist. My offering defences of this position should not be construed as fundamental agreement with that position.
This is quickly turning into a debate about the merits of Keynesianism, which is not a debate I am interested in; stabilisation policy is not my field, I don’t find it very interesting, and I got enough of it at university. I’m going to touch on a few points here, but I’m not going to engage fully with your argument; you really need to talk to a Keynesian macroeconomist if you want to discuss most of this stuff. For one thing, my ability to taboo certain words is affected by the fact that I don’t have a very solid grip on the theory, and I don’t spend much of my time thinking about high-level aggregates like GDP.
Now here’s the best I can do on your bullet-point questions; sorry if it doesn’t help much, but it’s all I’ve got: 1) The difference is that Keynesians believe savings reduce the money supply by taking money out of circulation; this makes people think they are poorer, which makes them act like they’re poorer, which makes other people poorer.
2) Because it starts with an illusion of poverty. The first cause of recessions in a Keynesian model is “animal spirits”, or in layman’s terms, irrational fear of financial collapse. Viewed from this perspective, stimulus is a hack that undoes the irrationality that caused the problem in the first place (and because it’s caused by irrationality they can feel confident it is a problem).
3) This is actually one of my biggest problems with Keynesian theory. If it strikes you as counter-intuitive or silly, I’m not going to dissuade you.
One final point: The reason I replied to your initial comment in the first place, was your suggestion that all economists are obsessed with maximising measured GDP over everything else.
But many economists don’t deal with GDP at all. When I was learning labour market theory, we were taught that once people’s wage rate gets high enough, one could expect them to work fewer hours, since the demand for leisure time increases with income. There was never a suggestion that this was anything to be concerned about; the goal is utility, not income.
In environmental economics I recall reading a paper by Robert Solow (the seminal figure in the theory of economic growth) arguing that it was important to consider changes in environmental quality along with GDP, to get a better picture of how well off people really are.
I look at what I have been taught in economics, and I simply can’t square it with your view of the profession. Some kinds of economists tend to be obsessed with growth, but they tend to be economists who specialise in economic growth. The rest of us have other pursuits, and other obsessions.
Alright, I’ll let anyone judge for themselves if the canonical Keynesian replies reveal a truly grounded understanding of what counts as “helping the economy”.
Forget Keynesian theory for a minute: I want to know if you have the understanding I expect of whatever theory it is you do endorse. Can you taboo that theory’s terminology and ground it in layperson-level fundamentals? Can you force me to care about whatever jargon you do in fact use?
Because, at risk of sounding rude, I don’t think you’ve acquired this “Level 2” understanding, and I don’t think you’re atypical among economists in lacking it—from what I’ve read of Mankiw, Sumner, and Krugman, they don’t have it either.
(btw, you call yourself an economist but don’t have a grip on Keynesian theory? Isn’t that pretty much required these days?)
Sure—I only meant that economic policy advocates who are concerned about aggregate economic variables are obsessed with GDP as one of those variables, but that should be assumed from context. Obviously, you’re not going to care about GDP in your capacity as a microeconomist of company behavior.
On macro policy I doubt I have level 2 understanding. I had to take papers in macro at university, and I was able to get reasonable grades on them, but level 0 or 1 understanding is sufficient to do that.
My guess is that if you asked a Keynesian why they care, they would say that boom-bust cycles create uncertainty and fear in people because they don’t know if they’re going to lose their job (and they want their job, or they’d have already quit), and that by taming the boom-bust cycle people will have a more certain and therefore more pleasant life.
Equally if you asked a development economist, they would point to the misery in third world countries and for wealthy countries point out that productivity growth means being able to do more with less, and whether you want to have more, or want to do less, that’s a win. Unemployed people are by definition people who want a job but don’t have one, so concern about unemployment is easy to work out.
And as for me, well the reason I care about allocative efficiency is that allocative efficiency is the attempt to match reality to people’s preferences as well as is possible under current constraints. How do we use our resources and knowledge to create the things people want and how do we get them to the people who want them the most?
The market does a pretty good job of this most of the time, but it does fail sometimes. And when it fails there are things government can do to improve matters, but the government can fail too, so you have to balance out the imperfections of the market and the imperfections of government and try to work out which set of imperfections is more problematic. If I succeed, or if people like me succeed, then people will have more of what they want, be that flat-screen TVs, or cars, or clean air, or time with their families. Not everything falls within economics’ purview, of course; love and truth and beauty are things I can’t help with. But for everything else, my goal is to help the market to match infinite wants with finite resources and imperfect information.
Perhaps it should have been, but I failed to assume this. And microeconomics is a lot wider than company behaviour, it covers pretty much everything but GDP and unemployment.
That wasn’t the question or contributory thereto, though it shows you can ground one concept.
The question is, whatever model/theory you have of the economy, are its predicates fully grounded in what laypersons care about? You mentioned things people care about, but not how they fit into the model that you advocate.
Allocative efficiency is what I work with. If you asked me why I care about GDP, my response would be, “I don’t, particularly”.
As for my economic model, I can’t give you a full rundown in a comment, but here’s the short version: 1) Level 1 is the fully ideal version: unrealistic, but useful for grounding the whole thing in people’s preferences. It basically rests on the notion that if you make a battery of assumptions, voluntary exchange will result in allocative efficiency: if person A values something more than person B, then they will trade, either directly or through side trades, until person A has it. Yes, there are a lot of reasons this doesn’t work in practice, but that’s level 2.
2) Level 2 picks at all those assumptions in level 1: things like externalities (such as pollution), imperfect information, irrational behaviour, imperfect competition, transaction costs, and other grit in the gears. These things cause violations of the assumptions in level 1, and therefore prevent potentially efficiency-enhancing trades from occurring. The academic work at level 2 is focused on identifying these problems and considering possible solutions a government could introduce to correct for them.
3) Level 3 looks at the ability of government to effectively implement the policies identified at level 2, drawing on theories like social choice theory (the ability of voting systems to effectively aggregate votes into social preferences) and public choice theory (how well governments act as agents of the voting public). The academic work at level 3 is focused on identifying the limitations of real-world governments, and on identifying the side-effects of badly implemented policies.
Level 1 is all about individual preferences, not attempting to measure them directly because you can’t, but rather in setting up a system so people can sort it out themselves.
As for how GDP factors in, well, it doesn’t, directly. Macro and micro aren’t integrated; they haven’t been since Keynes. You learn about them in different courses, and people tend not to specialise in both, so there’s a gap there. Hence the reason I don’t care about GDP per se.
Now productivity I care about, because higher productivity means more resources for people to trade with and more preferences can be satisfied. I care about unemployment because it implies people are willing to make a trade, but unable to do so due to some bug in the system, either a level 2 problem (market failure), or a level 3 problem (government failure).
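The level-1 story, that voluntary exchange ends with the good in the hands of whoever values it most, can be sketched in a few lines. The agents, valuations, and trading rule are all made up for illustration:

```python
import random

def trade_to_efficiency(valuations, holder, rng):
    """Repeat voluntary trades until no mutually beneficial trade remains.

    Any agent who values the good more than the current holder can offer a
    price both sides accept (anywhere between the two valuations), so a
    trade happens whenever such an agent exists.
    """
    agents = list(valuations)
    while True:
        buyers = [a for a in agents if valuations[a] > valuations[holder]]
        if not buyers:
            return holder  # allocative efficiency: no beneficial trade left
        holder = rng.choice(buyers)

rng = random.Random(42)
valuations = {"Alice": 10, "Bob": 25, "Carol": 17}  # made-up private values
final = trade_to_efficiency(valuations, holder="Alice", rng=rng)
print(final)
```

However the intermediate trades go, the good always ends with the highest valuer; the level-2 frictions (transaction costs, imperfect information, and so on) are exactly the things this sketch assumes away.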
James_K:
Aside from the standard arguments about the shortcomings of GDP, my principal objection to the way economists use it is the fact that only the nominal GDP figures are a well-defined variable. To make sensible comparisons between the GDP figures for different times and places, you must convert them to “real” figures using price indexes. These indexes, however, are impossible to define meaningfully. They are produced in practice using complicated, but ultimately arbitrary number games (and often additionally slanted due to political and bureaucratic incentives operating in the institutions whose job is to come up with them).
In fact, when economists talk about “nominal” vs. “real” figures, it’s a travesty of language. The “nominal” figures are the only ones that measure an actual aspect of reality (even if one that’s not particularly interesting per se), while the “real” figures are fictional quantities with only a tenuous connection to reality.
It’s pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this—and they tend to be the better economists.
You may like Morgenstern’s book On the Accuracy of Economic Observations. How I rue the day I saw this in a used bookstore in NY and didn’t have the cash to buy it…
EDIT: fixed title name
I’m going through Morgenstern’s book right now, and it’s really good. It’s the first economic text I’ve ever seen that tries to address, in a systematic and no-nonsense way, the crucial question of whether various sorts of numbers routinely used by economists (and especially macroeconomists) make any sense at all. That this book hasn’t become a first-rank classic, and is instead out of print and languishing in near-total obscurity, is an extremely damning fact about the intellectual standards of the economic profession.
I’ve also looked at some other texts by Morgenstern I found online. I knew about his work in game theory, but I had no idea that he was such an insightful contrarian on the issues of economic statistics and aggregates. He even wrote a scathing critique of the concept of GNP/GDP (a more readable draft is here). Unfortunately, while this article sets forth numerous valid objections to the use of these numbers, it doesn’t discuss the problems with price indexes that I pointed out in this thread.
realitygrill:
Could you please list some examples? Aside from Austrians and a few other fringe contrarians, I almost always see economists talking about the “real” figures derived using various price indexes as if they were physicists talking about some objectively measurable property of the universe that has an existence independent of them and their theories.
Thanks for the pointer! Just a minor correction: apparently, the title of the book is On the Accuracy of Economic Observations. It’s out of print, but a PDF scan is available (warning -- 31MB file) in an online collection hosted by Stanford University.
I just skimmed a few pages, and the book definitely looks promising. Thanks again for the recommendation!
I meant personally—I did my undergrad in economics. I’m extremely skeptical of macroeconomics and currently throw in with the complex adaptive system dynamicists and the behavioral economists (and Hansonian cynicism; that’s just me). But, to give an example, Krugman has done quite a bit of work in the complexity arena.
Yeah, you’re welcome! The first I heard of that book was someone using the example of calculating in-flows and out-flows of gold. Each country’s estimates differed by orders of magnitude or something like that, and even signs.
There are a number of reasonably priced copies on amazon.
Oh good, they certainly weren’t that reasonable the last I checked.
It’s not so much a matter of being overconfident as it is not listing the disclaimers at every opportunity. The Laspeyres Price Index (the usual type of price index) has well understood limitations (specifically that it overestimates consumer price growth as it doesn’t deal with technological improvement and substitution effects very well), but since we don’t have anything better, we use it anyway.
“Real” is a term of art in economics. It’s used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn’t all that useful. real GDP may be less certain, but it’s more useful.
Bear in mind that everything economists use is an estimate of a sort, even nominal GDP. Believe it or not, they don’t actually ask every business in the country how much they produced and/or received in income (which is why the income and expenditure methods of calculating GDP give slightly different numbers, although they should give exactly the same result in theory). The reason this may not be readily apparent is that most non-technical audiences start to black out the moment you talk about calculating a price index (hell, it makes me drowsy) and technical audiences already understand the limitations.
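For reference, here is the textbook Laspeyres formula next to its current-basket counterpart (the Paasche index), applied to a made-up two-good economy. The gap between the two numbers is exactly the substitution effect described above: laptop prices fall, consumers shift toward laptops, and the base-period basket overweights the old consumption pattern:

```python
def laspeyres(p0, p1, q0):
    """Price index weighted by the base-period basket q0."""
    return sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))

def paasche(p0, p1, q1):
    """Price index weighted by the current-period basket q1."""
    return sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))

# Made-up data. Goods: (bread, laptops). Laptop prices fall sharply,
# and consumers substitute toward laptops in the current period.
p0, p1 = [1.0, 1000.0], [1.2, 600.0]   # prices in base and current period
q0, q1 = [100, 1], [80, 3]             # quantities in base and current period

print(laspeyres(p0, p1, q0), paasche(p0, p1, q1))
```

Same prices, same economy, two defensible procedures, two different “inflation” numbers; which one is “correct” is precisely the question at issue in this thread.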
James_K:
You’re talking about the “real” figures being “less certain,” as if there were some objective fact of the matter that these numbers are trying to approximate. But in reality, there is no such thing, since there exists no objective property of the real world that would make one way to calculate the necessary price index correct, and others incorrect.
The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods). However, even if we limit ourselves to those that look reasonable, there is still an infinite number of different procedures that can be used to calculate a price index, all of which will yield different results, and there is no objective way whatsoever to determine which one is “more correct” than others. If all the reasonable-looking procedures led to the same results, that would indeed make these results meaningful, but this is not the case in reality.
Or to put it differently, an “objective” price index is a logical impossibility, for at least two reasons. First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers. Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used. Therefore, comparisons of “real” variables invariably involve arbitrary and unwarranted assumptions about the relative values of different things to different people. Again, of course, different arbitrary choices of methodology yield different numbers here.
(By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective, unquestioningly use price indexes without stopping to think that the basic assumption behind the very notion of a price index is that value is objective and measurable after all.)
Very true. A good general measure in human economic systems should NOT merely look at the ease of availability of finished paperclips. It should also include, in the “basket”, such things as extrudable metal, equipment for detecting and extracting metal, metallic wire extrusion machines, equipment for maintaining wire extrusion machines, bend radius blocks, and so forth.
Thank you for pointing this out; you are a relatively good human.
That is a very poor inference on their part.
Here’s a crude metric I use for gauging the relative goodness of societies as places to live: Immigration vs. emigration.
It’s obviously fuzzy—you can’t get exact numbers on illegal migration, and the barriers (physical, legal, and cultural) to relocation matter, but have to be estimated. So does the possibility that one country may be better than another, but a third may be enough better than either of them to get the immigrants.
For example, the evidence suggests that the EU and the US are about equally good places to live.
I don’t think that’s a good metric. Societies that aren’t open to mass immigration can have negligible numbers of immigrants regardless of the quality of life their members enjoy. Japan is the prime example.
Moreover, in the very worst places, emigration can be negligible because people are too poor to pay for the ticket to move anywhere, or are prohibited from leaving.
But “given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power”, you could predict how people would choose if they were not faced with legal and moving-cost barriers—e.g. imagine a philanthropist willing to pay the moving costs. So your objection to this metric seems to be a surmountable one, in principle, assuming perfect knowledge etc. The main remaining barrier to migration may be sentimental attachment—but given perfect knowledge etc. one could predict how the choices would change without that remaining barrier.
Applying this metric to Europa versus Earth, presumably Europans would choose to stay on Europa and humans would choose to stay on Earth even with legal, moving-cost, and sentimental barriers removed, indeed both would pay a great deal to avoid being moved.
In contrast to Europans versus humans, humans-of-one-epoch are not very different from humans-of-another-epoch.
Excellent point—although I would pay a good deal to move to Europa, given a few days worth of air and heat.
A fair point, though I think societies like that are pretty rare. Any other notable examples?
Off the top of my head, I know that Finland had negligible levels of immigration until a few years ago. Several Eastern European post-Communist countries are pretty decent places to live these days (I have in mind primarily the Czech Republic), but still have no mass immigration. As far as I know, the same holds for South Korea.
Regarding emigration, the prime examples were the communist countries, which for the most part strictly prohibited it (though, rather than looking at the numbers of emigrants, we could look at the efforts and risks many people were ready to undertake to escape, which often included dodging snipers and crawling through minefields).
The basket used is based on a representation of what people are currently consuming. This means we don’t have to second-guess people’s preferences. Unique goods like houses pose a problem, but there’s not really anything we can do about that, so the normal process is to take an average of existing houses.
Which is a well-understood problem. Every economist knows this, but what would you have us do? It is necessary to inflation-adjust certain statistics, and if the choice is between doing it badly and not doing it at all, then we’ll do it badly. Just because we don’t preface every sentence with this fact doesn’t mean we’re not aware of it.
Just to avoid confusion among readers, I want to distance myself from part of Vladimir_M’s position. While I agree with many of the points he’s made, I don’t go so far as to say that CPI is a fundamentally flawed concept, and I agree with you that we have to pick some measure and go with it; and that the use of it does not require its caveats to be restated each time.
However, I do think that, for the specific purpose it is used for, it is horribly flawed in noticeable, fixable ways, and that economists don’t make these changes because of lost purpose syndrome—they get so focused on this or that variable that they’re disconnected from the fundamental thing it’s supposed to represent. They’re doing the economic equivalent of suggesting to generals that their living soldiers be burned to ashes so that the media will stop broadcasting images of dead soldiers’ bodies being brought home.
I wouldn’t be in a good position to determine if it’s lost purpose syndrome since I’m an insider, but I would suggest that path dependence has a lot to do with it.
Price indices are produced by governments, who are notoriously averse to change. And what’s worse the broad methodology is dictated by international standards, so if an economist or some other intelligent person comes up with a better price index they have to convince the body of economists and statisticians that they have a good idea, and then convince the majority of OECD countries (at a minimum) that their method is worth the considerable effort of changing every country’s methodology.
That’s a high hurdle to cross.
On my blog I suggested using insulin prices as a good proxy for inflation. That should be pretty easy for economists to find, even historical data. One economist could find the historical data for one country and use it as a competing measure. No collective action problem to solve there! Just a research paper to present.
(I can’t find the data via Google searches myself, but economists should be able to get access to the appropriate databases.)
The technology to manufacture insulin has been getting a lot cheaper since the late 1970s when bacteria were first used to synthesize insulin (before that it had to be extracted from animals). That process has become even easier since the process for growing E. coli has become much more efficient.
True, that was just one layman’s brief pondering of an alternate metric, and I hadn’t realized the secular technology trend. I was mainly looking for something that can’t be debased (because then people will die), but that also has minimal volatility in demand, supply, and speculation, and requires numerous inputs so as to smooth out the effect of local shocks.
And perhaps I’m running into a Goodhart trap myself—today, the problem seems to be inflation being hidden via product degradation, but if I pick a metric optimized mainly for that, it will get worse over time. So finding a good or basket that covers all those criteria would require more work—but product debasement is pretty clearly being ignored today.
(Note that precious metals are sold in a way that prevents them from being secretly debased, but also are heavily influenced by global extraction rates, and are heavily speculated on and hoarded.)
Anything that has numerous inputs will likely be something that is complicated to manufacture, and will therefore see increasing efficiency as the technology improves. I can’t think of a single good that fits your criteria and hasn’t had substantial technological advancement in how it is made in the last 30 years. This sort of approach might work if one had very steady data for some long historical period without much technological advancement.
That’s making your inflation rate strongly tied to one particular technology. A breakthrough making insulin synthesis easier, or increased diabetes rates, would affect insulin prices but not the rest of the economy.
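A toy sketch of this objection (all prices and weights invented): a breakthrough that halves the price of one good reads as massive deflation on a single-good index, while a broad basket barely notices it:

```python
# Invented numbers: one technology shock versus a broad basket.
baseline = {"insulin": 10.0, "bread": 1.00, "fuel": 2.00, "rent": 8.00}
# Suppose a synthesis breakthrough halves the insulin price while
# everything else inflates by 5%.
later = {"insulin": 5.0, "bread": 1.05, "fuel": 2.10, "rent": 8.40}

# A single-good "inflation" measure sees 50% deflation.
single_good = later["insulin"] / baseline["insulin"]

# A broad basket (expenditure shares invented) barely notices the shock.
weights = {"insulin": 0.05, "bread": 0.25, "fuel": 0.20, "rent": 0.50}
broad = sum(w * later[g] / baseline[g] for g, w in weights.items())

print(single_good)  # 0.5
print(broad)        # about 1.02, i.e. ~2% inflation despite the shock
```

The same asymmetry works in reverse: an insulin-only index would report runaway inflation from a supply disruption that the rest of the economy never felt.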
Would error bars be a bad thing?
Economists could calculate error bars that would say how closely the calculated aggregate figures approximate their exact values according to definitions. This is normally not done, and as Morgenstern noted in the book discussed elsewhere in the thread, the results would be quite embarrassing, since they’d show that economists regularly talk about changes in the second, third, or even fourth significant digit of numbers whose error bars are well into double-digit percentages.
However, when it comes to the more essential point I’ve been making, error bars wouldn’t make any sense, since the problem is that there is no true value out there in the first place, just different arbitrary conventions that yield different results, neither of which is more “true” than the others.
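The spread I have in mind could be made concrete like this (invented numbers): compute the same index under several equally defensible basket choices and report the range. The point is that this range reflects arbitrary convention, not sampling error around a true value:

```python
# Invented data: the same price changes summarised under three
# equally defensible basket conventions.
p0 = {"bread": 1.00, "fuel": 2.00, "rent": 8.00}
p1 = {"bread": 1.10, "fuel": 3.00, "rent": 8.40}

baskets = [
    {"bread": 0.4, "fuel": 0.2, "rent": 0.4},
    {"bread": 0.2, "fuel": 0.4, "rent": 0.4},
    {"bread": 0.3, "fuel": 0.1, "rent": 0.6},
]

def index(weights):
    # Expenditure-share-weighted average of price relatives.
    return sum(w * p1[g] / p0[g] for g, w in weights.items())

results = [index(b) for b in baskets]
print(min(results), max(results))  # roughly 1.11 to 1.24
```

Three plausible-looking baskets already put "inflation" anywhere between 11% and 24%, a double-digit spread that swamps the second, third, and fourth significant digits economists routinely discuss.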
There’s an old joke: “How can you tell macroeconomists have a sense of humour? They use decimal points.” I’ll admit spurious precision is a problem with quite a bit of economic reporting. Remember that these statistics are produced by governments, not academics, and politicians can have trouble grokking error bars.
Actually, that’s not really the case. There is an ideal; it’s just that you can’t compute it. If you knew everyone’s preferences, information, and endowments of income, you could work out how people’s consumption would change as real incomes and relative prices changed, so you could figure out the right basket of goods to use for the index at every point in time (the right bundle is whatever bundle consumers would actually pick in a given situation).
But in practice you can’t get the information you’d need to do this, and that information would be constantly changing anyway. In practice what statistical agencies do is develop a basket of goods based on current consumption and review it every decade or so. This means the index overestimates inflation (the estimates I’ve seen put it at about 1 percentage point per year) because when prices rise, people change their consumption patterns and we can’t predict how until it’s already happened.
This is a flawed procedure, but it’s not arbitrary; it’s an honest effort to approximate the ideal price index as well as we can, given the resources at our disposal.
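A minimal sketch of the substitution bias described above (invented prices and quantities): pricing the frozen old basket overstates the cost increase relative to pricing the basket consumers actually switch to:

```python
# Invented numbers: beef doubles in price, consumers substitute
# toward chicken.
p0 = {"beef": 4.0, "chicken": 2.0}
q0 = {"beef": 5, "chicken": 5}       # base-year consumption
p1 = {"beef": 8.0, "chicken": 2.2}
q1 = {"beef": 2, "chicken": 9}       # consumption after substitution

def cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

# Pricing the frozen old basket (Laspeyres-style).
fixed_basket = cost(p1, q0) / cost(p0, q0)
# Pricing the basket people actually buy now (Paasche-style).
updated_basket = cost(p1, q1) / cost(p0, q1)

print(fixed_basket)    # about 1.70: "70% inflation"
print(updated_basket)  # about 1.38: a much smaller increase
```

Because the statistical agency can’t know the new consumption pattern until after the fact, the fixed-basket figure is the one that gets published in the interim, and it is systematically the higher of the two.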
James_K:
To the best of my understanding, what you write above seems to concede that even under the assumption of omniscience, when we consider different times and/or places, with different prices, incomes, and preferences of individuals—and different sets of goods available on the market, though this can be modeled by assigning infinite prices to unavailable goods—there is, after all, no unique objectively correct way to define equivalent baskets of goods. You could calculate the baskets that would actually be consumed at each time and place, but not the ratio of their true values (whatever that might mean), which would be necessary for their use as the basis for a true and objective price index.
Am I wrong in this conclusion, and if I am, would you be so kind to explain how?
I would be really grateful if you could spell out what exactly you mean by “the ideal price index” when it comes to comparing different times and places, given my above observation. Also, you ignore the question of how exactly baskets are “reviewed,” which is a step that requires an arbitrary choice of the new basket that will be declared as equivalent to the old.
Moreover, different kinds of “honest efforts” apparently produce very different figures. The procedures for calculating official price indexes have been changed several times in recent decades in ways that make the numbers look very different compared to what the older methods would yield. (And curiously, the numbers according to the new procedures somehow always end up looking better.) Would you say, realistically, that this is purely because we’ve been moving closer to the truth thanks to our increasing knowledge and insight?
The concept of “true value” is incoherent, at least in my model of reality. The correct price to attach to a good at any time is its market price at that time. If you had the set of information I listed in my last comment, you’d have the market prices, since they’re implied by the other stuff.
I think we’re using different definitions of arbitrary. To me, arbitrary means that there is no correct answer and all options are equally valid. I don’t accept that as a legitimate description of the process: there are judgement calls, but ambiguity is inevitable in the social sciences; you either get used to it or find something else to study. Now, if you’re using arbitrary the way I’m using ambiguous, then I don’t think we disagree, except that I find it less problematic than you do, since as soon as you start dealing with people, things get so complex that ambiguity is inevitable.
Now, here you have a point. The Laspeyres index is biased upward; it may be an honest effort, but not one that’s Bayes-correct. But Bayesian rationality has not penetrated the discipline at this time, and so a biased estimate is allowed to remain, primarily because there’s no methodologically clean way to remove the bias (you’d need to be able to predict things like quality changes and how people change their spending patterns in response to price changes), and without a background in Bayesian probability theory I think most economists would baulk at adding a fudge factor into the calculation.
It might be valuable to talk about a “true value” of a given good to a given agent. Yes, the correct price to buy or sell a good at is always the market price; but whether I want to sell at that price or buy at that price depends on how much I want the good. If I sell, then the “true value” of the good to me is less than the current market price; and if I buy, then the “true value” of the good to me is greater than the current market price. In general, the “true value” of a given good to a given agent is the price such that, if the market were trading at that price, that agent would be indifferent regarding whether to buy or sell that good.
Yes that is a coherent definition of true value. It’s not a concept that maps well to price indices though.
James_K:
I heartily agree—but what is a price index, other than an attempt at answering the question of what the “true value” of a unit of currency is? What are the fabled “real” values other than attempts at coming up with a coherent concept of “true value”?
Yes, but even given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power, I still don’t see how this solves the problem. We can find out the average basket consumed per individual (or household or whatever) and its price at each time and place, but what next? How do we establish the relative values of these baskets, whose composition will be different both quantitatively and qualitatively?
To clarify things further, I’d like to ask you a different question. Suppose the moon Europa is inhabited by intelligent jellyfish-like creatures floating in its inner ocean. The Europan economy is complex, technologically advanced, and money-based, but it doesn’t have any goods or services in common with humans, except for a few inevitable ones like e.g. some basic chemical substances, and there is no trade whatsoever between Earth and Europa due to insurmountable distances. Would it make sense to define a price index that would allow us to compare the “real” values of various aggregate variables in the U.S. and on Europa?
If not, what makes the U.S./Europa situation essentially different from comparing different places and epochs on Earth? Or does the meaningfulness of price indexes somehow gradually fall as differences accumulate? But then how exactly do we establish the threshold, and make sure that the differences across decades and continents here on Earth don’t exceed it?
Well, if macroeconomists and other social scientists were just harmless and benign philosophers, I’d be happy to leave them to ponder their ambiguities in peace!
Trouble is, to paraphrase Trotsky’s famous apocryphal quote, you may not be interested in social science, but social science is interested in you. In the present Western political system, whatever passes for reputable high-profile social science will be used as basis for policies of government and various powerful entities on its periphery, which can have catastrophic consequences for all of us if these ideas are too distant from reality. (And arguably already has.) Macroeconomics is especially critical in this regard.
No, no. A price index is an attempt to work out how much things cost relative to what they used to cost. Real GDP is an attempt to measure how much stuff is being produced relative to how much stuff was being produced. GDP is not an attempt to determine what that stuff is worth in a metaphysical or personal sense, the production is merely valued at its market price (adjusted for inflation, in the case of real GDP). To a pacifist, the portion of GDP spent on the military is worth less than nothing, but it’s still part of GDP because it was stuff that was produced.
Yes, the closer the consumption patterns of the two economies being compared, the more useful the comparison is. If there were no common goods between the two economies, it would be impossible to compare them meaningfully. As to where to draw the line, well, I wish I had a good answer for you, but I don’t. All I can say is that the value of the comparison decays over “distance” (meaning differences in consumption patterns).
Some economists have created more specialised indices for long-run comparisons; William Nordhaus created a price index for light (based on hours of work per candela-hour) from the stone age to modern times. This is a little unusual at the moment since macroeconomists don’t usually do comparisons over long time periods (it’s fiendishly hard to get data going back before the 20th Century on most indicators), but it shows you that we are aware of the limitations of our tools, including price indices.
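The flavour of Nordhaus’s exercise can be sketched in a few lines (all numbers invented for illustration, not his actual estimates): divide the money price of a fixed quantity of light by the money wage, giving the labour price of light in each era:

```python
# All numbers invented for illustration, not Nordhaus's estimates.
# era -> (money price per million lumen-hours, money wage per hour)
eras = {
    "tallow candle": (40.0, 0.10),
    "kerosene lamp": (5.0, 0.15),
    "incandescent bulb": (0.4, 1.00),
    "compact fluorescent": (0.01, 15.0),
}

# Labour price of light: hours of work to buy a fixed amount of light.
labour_price = {era: price / wage for era, (price, wage) in eras.items()}

for era, hours in labour_price.items():
    print(era, round(hours, 4), "hours of work per million lumen-hours")
```

Denominating in hours of work sidesteps the need for a general price index over millennia, which is exactly why this construction is feasible where an ordinary basket comparison is not.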
I agree wholeheartedly; good-quality policy advice is something I take very seriously. The social science we have has significant limitations, but right now we don’t have anything better. I very much doubt the quality of our policy would improve if politicians paid less attention to their advisers than they do at the moment. So we do what we can, and help things along as much as our knowledge and the institutional frameworks within which decisions are made will permit. What else can you do?
James_K:
That’s a very interesting paper (available here), thanks for the pointer!
As with nearly all papers addressing such topics, parts of it look as if they were purposefully written to invite ridicule, as when he presents estimates of 19th century prices calculated to six significant digits. (Sorry for being snide, but what was that about spurious precision in economics being the fault of politicians?) However, the rest of it presents some very interesting ideas. Here are a few interesting bits I got from skimming it:
The mathematical discussion in Section 1.3.2. seems to imply (or rather assume) that even assuming omniscience, a “true price index” (Nordhaus’s term) can be defined only for a population of identical individuals with unchanging utility functions. This seems to support my criticisms, especially considering that the very notion of a human utility function is a giant spherical cow.
The discussion in the introduction basically says that the way price indexes are done in practice makes them meaningless over periods of significant technological change. But why do we then get all this supposedly scientific research that uses them nonchalantly, not to mention government policy based on them? Nordhaus is, unsurprisingly, reluctant to draw some obvious implications here.
Nordhaus considers only the fact that price indexes fail to account for the benefits of technological development, so he keeps insisting that the situation is more optimistic than what they say. But he fails to notice that the past was not necessarily worse in every respect. In many places, for example, it is much less affordable than a few decades ago to live in a conveniently located low-crime neighborhood, and this goal will suck up a very significant percentage of income of all but the wealthiest folks. Moreover, as people’s preferences change with time, many things that today’s folks value positively would have been valued negatively by previous generations. How to account for that?
More to the same point, unless I missed the part where he discusses it, Nordhaus seems oblivious to the fact that much consumption is due to signaling and status competition, not utility derived from inherent qualities of goods. I’m hardly an anti-capitalist leftie, but any realistic picture of human behavior must admit that much of the benefit from economic and technological development ultimately gets sucked up by zero-sum status games. Capturing that vitally important information in a price index is a task that it would be insulting to Don Quixote to call quixotic.
Finally, I can’t help but notice that in the quest for an objective measure of the price of light, Nordhaus seems to have reinvented the labor theory of value! Talk about things coming back full circle.
Overall, I would ask: can you imagine a paper like this being published in physics or some other natural science, which would convincingly argue that widely used methodologies on which major parts of the existing body of research rest in fact produce spurious numbers—with the result that everyone acknowledges that the author has a point, and keeps on doing things the same as before?
[facepalm] OK, I’m not making any excuse for that. Given the magnitude of his findings, he doesn’t even need that precision to make his point.
Yes, you can’t produce a true price index. But less-than-true price indices can still be useful.
But houses keep getting bigger and you have to account for that too. Besides which, housing is no more than a third of most people’s income, at least it is in my country. That is a significant percentage, but it’s still less than half. And things keep getting better (or no worse) in the remaining two thirds.
Assuming it’s even possible to adjust for that, I’d really want to apply the adjustment to GDP, not prices. Signalling isn’t a matter of cost but rather value.
No, you’re confusing cost and value. The labour theory of value is the theory that the value of a good derives from the labour taken to produce it. If Nordhaus were using this theory he’d be arguing that the value of light keeps falling. Measuring cost with labour is another thing entirely.
No. I recognise this is a problem. I can only imagine they think it’s too hard to correct for technological change robustly, but that’s not really an excuse. If you can’t do it well, it’s generally still better to do it badly than not at all. And I didn’t realise the research was that old (I’ve never actually read the paper; I read a summary in a much more recent book). Apparently macroeconomists have more catching up to do than I thought.
This sentence of yours probably captures the heart of our disagreement:
We don’t seem to disagree that much about the limitations of knowledge in this whole area, epistemologically speaking. Where we really part ways is that I believe that historically, the whole edifice of spurious expertise produced by macroeconomists and perpetuated by gargantuan bureaucracies has been an active force giving impetus for bad (and sometimes disastrous) policies, and that it’s overall been a step away from reality compared to the earlier much simpler, but ultimately more realistic conventional wisdom. Whereas you don’t accept this judgment.
Given what’s already been said, I think this would be a good time to conclude our discussion. Thanks for your input; your comments have, at the very least, made me learn some interesting facts and rethink my opinions on the subject, even if I didn’t change them substantially at the end.
(Oh, and you’re right that I confused cost and value in that point from my above comment. I was indeed trying to be a bit too much of a smartass there.)
Yes, I think so. It’s not that I think macroeconomics has covered itself in glory; it hasn’t. But this really is literally the only way for those guys to learn. And I believe it’s worth it in the short run, though I’m less sure of that than I was before we started this. Maybe those macro guys should go try micro or something.
Same here, it’s been fun.
How much did it cost a cave man to walk outside? Or are we including the time he spent digging renovations to put the sky-light in his roof?
Heh. Yeah, I’m going to go out on a limb and guess that Nordhaus didn’t subtract off the previously-free sunlight lost to global dimming and the attenuation of natural sources of nightlight due to interference from artificial light.
This is NOT to say I’m endorsing some kind of greenie move toward a pre-industrial time just so we can see the undimmed sky or have less “light pollution”. I’m just saying that ignoring natural and informal sources of wealth is a bad habit to get into.
Reading paper to see if I can guess them right...
ETA: Ohhhhhh! Can I call ’em or what?
James_K:
But now we’re back to square one. Since different things are produced in different times and places, to produce these “real” figures for comparison, we need to come up with a way to compare apples and oranges (sometimes literally!). Now, if economists just said that they would consider an apple equivalent to an orange for some simple Fermi problem calculation, I’d have no problem with that.
However, what economists use in practice are profoundly complicated methodologies that will tell us that an orange is presently equivalent to 1.138 of an apple, and then we get subtle arguments and policy prescriptions based on the finding that this means an increase in the orange-apple index of 2.31% relative to last year. Here we enter the realm of pure nebulosity, where the indexes and “real” figures stop being vague heuristics where even the order of magnitude is just barely meaningful, and acquire a metaphysical existence of their own, as “real” variables to be calculated to multiple digits of precision, fed into complex mathematical models and policy guidelines, and used to measure reified true, objective value.
So, here is a straightforward question then: how do we know that it is meaningful to make comparisons between, say, the U.S. in 2010 and the U.S. in 1960 or 1910? What argument supports the assumption that the differences between them are small enough?
Sometimes it’s safer to just leave things alone if you don’t know what you’re doing. Presenting dubious conclusions and questionable expertise as scientific insight leads to the equivalent of dilettante surgery being performed on entire countries by their governments, sometimes with awful consequences, and with even worse ones threatening in the future. (Prominent macroeconomists will in fact agree with me, it’s just that they’ll claim that their professional rivals are the dilettantes, and only they are true experts who should be listened to.)
I happen to agree that macroeconomists are overdoing it on the level of precision they can provide. Arnold Kling (himself a macroeconomist) made this same point in a blog post last year: http://econlog.econlib.org/archives/2009/03/paragraphs_to_p.html
I would be careful about using a price index over that kind of time frame, I don’t actually know how macroeconomists treat it, but I have read books that point out the inherent difficulty of making comparisons over long time periods (where long means more than about 20 years), and that if you’re trying to capture differences in standard of living over a long period one should try to account for differences in product quality and product mix over time. Of course that’s incredibly hard to do, and I don’t know how seriously this issue is treated in macroeconomics, but it should be taken seriously.
I strongly agree. However, there are two limiting factors when applying this logic to policy advice: 1) If you don’t give a politician any advice, their reaction won’t be to do nothing; it will be to do whatever they think is a good idea. The average macroeconomist may not know a lot, but they know enough that their advice will probably help a little. I do think that macroeconomists should be less willing to offer active advice, as opposed to “we don’t understand this problem; the best thing to do here is nothing”, but politicians have a strong aversion to doing nothing in the face of a crisis, and if their advisers keep telling them to do politically unpalatable things, they’ll find advisers who will tell them what they want to hear.
2) You can’t run experiments in macroeconomics, the only way to acquire data on how well an intervention works is to try it (multiple times in multiple countries) and find out how it goes, and even then you end up arguing what would have happened if you did nothing. That means that if you don’t try to fix and/or prevent macroeconomic problems you don’t get any better information on how to fix future ones. Maybe that’s an acceptable trade off, but I’m sure you can see why macroeconomists don’t think so. Also bear in mind that what brought macro into its own as a discipline was the Great Depression. Maybe it’s worth risking some bumps in the road to try to work out how to stop something like that happening again.
Yes, it’s depressing how closely macroeconomists’ opinions on what caused the recent troubles match up with their political ideologies. But it’s a function of the low quality of evidence available: in Bayesian terms, when you only have access to weak evidence, your prior matters more than when the evidence is strong. The inevitable influence politics has on the discipline doesn’t help either. Politicians are all too keen to build up economists who are telling them to do things they want to do anyway.
If some price indexes are “clearly absurd”, then they apparently have some value to us—for if they were valueless, then why call any particular one “absurd”? If they yield different results, then so be it—let us simply be open about how the different indexes are defined and what result they yield. The absence of a canonical standard will of course not be useful to people primarily interested in such things as pissing contests between nations, but the results should be useful nonetheless.
We commonly talk about tradeoffs, e.g., “if I do this then I will benefit in one way but lose in another”. We can do the same thing with price indexes. “In this respect things have improved but in this other respect things have gotten worse.”
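As a sketch of this suggestion (toy numbers), one could publish several clearly defined indexes side by side, including the Fisher index, the geometric mean of the old-basket and new-basket figures, rather than a single canonical number:

```python
# Toy data: report several clearly defined indexes instead of one number.
p0 = {"bread": 1.00, "fuel": 2.00}
q0 = {"bread": 10, "fuel": 5}
p1 = {"bread": 1.10, "fuel": 3.00}
q1 = {"bread": 12, "fuel": 3}

def cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

laspeyres = cost(p1, q0) / cost(p0, q0)  # weights from the old basket
paasche = cost(p1, q1) / cost(p0, q1)    # weights from the new basket
fisher = (laspeyres * paasche) ** 0.5    # geometric mean of the two

for name, value in [("Laspeyres", laspeyres),
                    ("Paasche", paasche),
                    ("Fisher", fisher)]:
    print(f"{name}: {value:.3f}")
```

Seeing the three numbers together makes the sign and rough size of the substitution effect visible to the reader, which a single published figure conceals.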
Constant:
Sure, but such an approach would deny the validity of all these “real” economic variables that are based on a scalar price index. In particular, it would definitely mean discarding the entire concept of “real GDP” as incoherent. This would mean conceding the criticisms I’ve been expounding in this thread, and admitting the fundamental unsoundness of much of what passes for science in the field of macroeconomics.
Moreover, disentangling the complete truth about what various price indexes reveal and what they hide is an enormously complex topic that requires lengthy, controversial, and subjective judgments. This is inevitable because, after all, value is subjective.
Take for example two identically built houses located in two places that greatly differ in various aspects of the natural environment, society, culture, technological development, economic infrastructure, and political system. (It can also be the same place in two different time periods.) It makes no sense to treat them as equivalent objects of identical value; you’d have a hard time finding even a single individual who would be indifferent between the two. Now, if you want to discuss what exactly has been neglected by treating them as identical (or reducing their differences to a single universally applicable scalar factor) for the purposes of constructing a price index, you can easily end up writing an enormous treatise that touches on every aspect in which these places differ.
I’ve heard that the trick works less well each time it’s used (perhaps within a limited time period). Is this plausible?
There could be indirect consequences of the decision in question, resulting from counter-intuitive effects on the existing economic process and on the lives of other people not directly involved in the decision. The relevant question is how to estimate those indirect consequences. However imprecise economic indicators are, you can’t just replace them with a presumption of total lack of consequences and consider only the obvious.
I didn’t ignore the indirect consequences:
To the extent that the indirect effects go beyond this, standard mainstream metrics in economics don’t measure them, because they are essentially independent of how well off others have become as a result of these rental decisions.
Well, maybe there are no such consequences (which is not obvious to me), but that’s what I meant.
Really? Because I hear economists talk about the value of leisure time quite frequently.
IMO, most economists don’t fetishize GDP the way you suggest they do.
You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you’re not defending it, you’re just claiming it.
Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.
Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for “what to do” about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.
I most certainly am defending it—by showing the errors in the classification of what counts as a benefit. If the argument is that stimulus will get GDP numbers back up, then yes, I didn’t provide counterarguments. But my point was that the effect of the stimulus is to worsen that which we really mean by a “good economy”.
The stimulus is getting people to blow resources on (mostly) useless things. Whether or not it’s effective at getting these numbers where they need to be, the numbers aren’t measuring what we really want to know about. Success would mean the useless, make-work jobs eventually lead to jobs satisfying real demand, yet no metric that they focus on captures this.
Downvote explanation requested. This looks like a reasoned reply to MichaelBishop’s criticism, and I’m interested in knowing how it errs and how Michael’s comment doesn’t, and how this is so obvious.
[Didn’t downvote.] This is silly. The ‘leisure’ of unemployment is concentrated on a few, and comes with elevated rates of low status, depression, suicide, divorce, degradation of employability, etc.
That’s a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn’t mean the “leisure” is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.
Furthermore, the lower consumption is only consumption of goods purchased with money; with significant restructuring, labor with predictable demand (like babysitting) can be handled by cooperatives that avoid the need to pay for it out of cash reserves.
I don’t deny that make-work programs allow workers to show off and practice their skills, retaining employability. I criticize economists who miss this benefit. But if you’re going to spend money to get this benefit, you should spend it in a way that directly targets the achievement of this benefit to the workers, rather than on make-work projects that only achieve this benefit as a side effect, and which waste capital goods and distort markets in the process.
Unfortunately, in the United States, you really would end up with much more of the former and less of the latter. Europe would be better off, though, thanks to different labor laws; would you suggest that the United States adopt something like France’s maximum 35 hour workweek, or Germany’s subsidies to part-time workers?
Currently, hours worked per week is positively correlated with hourly wages; one person working 80 hours a week usually makes more money than two people who both work 40 hours a week. Also, specifically wanting to do part-time work is a bad signal to employers. It signals that you’re not committed to your job, that you’re probably lazy, and that you’re weird. So, absent government intervention, you probably won’t see people voluntarily reducing their working hours.
This is because it isn’t. A “lower level of output/work” means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn’t mean that each person works 1% less, produces 1% less, and consumes 1% less, it means that 1 in 100 people lose their job, can’t find another one, and become poor, while the rest keep going on as they have been. So, when output/work falls, you don’t get more leisure, you get more poverty.
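The distributional claim above can be made concrete with a toy calculation; all the numbers (100 workers, a $30,000 wage, a $25,000 poverty line) are invented for illustration:

```python
# Toy comparison: a 1% fall in total output, distributed two ways.
# All figures are hypothetical.
workers = 100
wage = 30_000          # annual income per full-time worker
poverty_line = 25_000

# Scenario A: everyone works (and earns) 1% less.
shared = [wage * 0.99] * workers

# Scenario B: one worker loses everything; the rest are unchanged.
concentrated = [0] + [wage] * (workers - 1)

# Total output is the same in both scenarios...
assert abs(sum(shared) - sum(concentrated)) < 1e-6

# ...but the poverty counts differ sharply.
poor_a = sum(1 for w in shared if w < poverty_line)        # 0
poor_b = sum(1 for w in concentrated if w < poverty_line)  # 1
```

The same aggregate "1% less output" hides very different welfare outcomes depending on how the reduction is spread.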
And I disagree that most stimulus spending ends up being directed to “worthless” projects. Maybe they’re not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution. Furthermore, if people are willing to lend the government money at really, really low interest rates (as demonstrated by prices of U.S. Treasury securities), then isn’t that a signal that it’s an unusually good time for the U.S. government to borrow and spend—that the economy wants more of what the government produces and less of what private industry produces?
This I think reflects a status-quo bias. When the per capita GDP was lower in 2000, or 1990, the economy managed to employ a higher percentage of people. While you’re right that current institutions, inertia, and laws prevent shorter workweeks, that is an argument for removing these barriers, not an argument for trying to game the GDP numbers in the (false) hope that this will somehow translate into sustainable employment because of the historical correlation.
Okay, but that still looks like a case of lost purposes and fake utility functions. If you’re spending money to redistribute, then spend the money to redistribute! Don’t spend it on a project that hogs real resources just to get a small side effect of transferring money to people you want to help. (“What’s your real objection” and all.) If it’s important that they feel they earn the paycheck, then require that they take job training.
And the reason I call the projects worthless is this (and it doesn’t require an ideological commitment to being against government projects): people couldn’t justify asking the government to provide these things before the recession. But if the recession is a contraction of productive capacity, then the projects we commit to should also contract—it should look like an even worse deal.
The fact that the government can issue debt cheaper doesn’t change this fact. The reduced productive capacity is a real (i.e. non-nominal) phenomenon. The greater ease with which government can procure resources does not mean our aggregate ability to produce them has increased; it just means the government can more easily increase its share of the shrinking pie. That still implies that our “choice set” is being reduced, and the newer, larger wastefulness of these projects will have to show up somewhere.
If the fundamental determinant of reduced unemployment is whether the economy has entered into (as Arnold Kling says) sustainable patterns of specialization and trade, then temporary stimulus projects can’t accelerate this, because they’re by definition not sustainable: after they’re over, we’ll just have to readjust again.
I must emphasize, as I did in this blog post, that this does not mean we should give suffering families the finger because “it would be inefficient and all”—the fact that they (under a stimulus project) are working, feeling productive, and getting a paycheck is very significant, and definitely counts as a benefit. It’s just that you should help them in a way that doesn’t inhibit the economy’s search for efficient use of factors of production, nor (significantly) favor these families over the ones that are going to be screwed again when the projects have to stop, and the hunt for re-coordination starts anew.
Oh, definitely.
I basically agree with this; if you want to redistribute, then certainly it’s better to just redistribute than to “employ” people to do completely useless things. (For example, extending unemployment benefits is a form of redistribution.)
Well, what matters is the opportunity cost. A project that wasn’t worth doing before can become worth doing if the better alternatives aren’t there anymore; a contraction of productive capacity doesn’t have to affect all sectors of the economy equally. For example, people in a country experiencing an oil shortage may find that investing in more expensive, non-oil energy sources has become worthwhile; it’s worse than what used to be possible, but it’s the best remaining alternative. Given that people are willing to lend to the federal government more cheaply now than before the recession, the new equilibrium might end up involving more “investment in government”, not because government has become more productive, but because the alternative investments have gotten worse.
And I’m not necessarily sure that absolute productive capacity went down all that much in the current recession. During the Great Depression, the factories were still there, there were people willing and able to operate the factories, and there were people who wanted the goods the factories could produce, yet the factories were idle, the would-be factory workers were unemployed, and the would-be consumers didn’t have the goods they wanted. (The Keynesian position is that there was a collapse in aggregate demand, leading to a general glut, followed by a reduced output level.)
Economists who argue for stimulus spending on Keynesian grounds understand that GDP is not a perfect measure and that the value produced by stimulus projects may be less than the value produced by ordinary spending. See, for instance, this Brad DeLong post, where he estimates the net benefit of the stimulus and counts the useful stuff produced using stimulus money as being only 80% as valuable as the dollar amount would suggest. Or, as he writes:
Well, at least that is 20% closer to the mark!
Nice to see this kind of thinking from a capitalistish.
I’ll accept that compliment, backhanded though it might be :-) (I canceled out the downmod you got for that comment—no offense taken.)
I would appreciate, though, if you could (as best you can) tell me what it was I said that led you to believe I’m capitalistish (in the sense that you meant), or that I would otherwise disagree with my above GDP rant. No need to dig up links, just tell me whatever you remember or can quickly find.
I’m not doing this to make you feel foolish for having said what you did (like I’ve been known to try with you …), but because I want to know what it is that gives off these impressions of my views, and whether I should be using different terms to describe them.
As I’ve said before, I have a love-hate relationship with libertarianism. I believe largely what I did ten years ago about the proper role of government, but much of what self-described libertarians advocate is sharply contrary to what I considered to be my libertarian view.
An interesting question. Here are some initial thoughts:
In terms of broad economic aggregates, it won’t make any difference. If you rent the room off your parents at a market rate, GDP is exactly unaffected; people are paying the same money to different people. If you rent it for less than market rate, GDP is lower, but this reflects deficiencies in measured GDP, since GDP uses market prices as a proxy for the value of a transaction (this is fine for the most part, but doing your child a favour is an exception conventional methodology can’t deal with). So from a macroeconomic perspective I’d say it’s a wash either way.
Microeconomically, there could be some efficiencies in you renting from your parents. If they trust you more than a random stranger (and let’s hope they do), they will spend less time monitoring your behaviour (property inspections and the like) than they would a random stranger, but the value of your familial relationship should constrain you from taking advantage of that lax monitoring in the way a stranger would. This means that your parents save time (which makes their life easier) and no one should be worse off (I assume the current tenant of their room would find adequate accommodation elsewhere).
However, one note of caution. If you were to get into a dispute of some sort with your parents over the tenancy, this could damage your relationship with your parents. If you value this relationship (and I assume you do), this is a potential downside that doesn’t exist under the status quo. Also, some people might see renting from your parents as little different to living with your parents which (depending on your age) may cost you status in your day-to-day life (even if you pay a market rate). If you value status, you should be aware of this drawback.
So in summary, the most efficient outcome depends on three variables: 1) How much time and effort do your parents spend monitoring their tenant at the moment? 2) How likely is it that your relationship with them could be strained as a result of you living there? 3) How many friends / acquaintances / colleagues do you have that would think less of you for renting from your parents (and how much do you care)?
I hope that helps.
I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands—presumably the case if you rent the apartment for $X when X > Y. My impression is that in good economic times, marginal spending is not considered to improve economic welfare.
Imagine that the “economy” is sluggish, and that a widget maker currently profits $1 on each widget sale. Now, consider these two scenarios:
a) I buy 100 widgets that I don’t want, in order “to help the economy”.
b) I give the widget-maker $100. Then, I lie and say, “OMG!!! I just heard that demand for widgets is SURGING, you’ve GOT to make more than usual!” (Assume they trust me.)
In both cases, the widget-maker is $100 richer, the real resources in the economy are unchanged, and the widget-maker has gotten a false signal that more widgets should be produced. Yet one of those “helps the economy”, while the other doesn’t? How does that make sense?
If you believe that either one of those “helps the economy”, your whole view of “the economy” took a wrong turn somewhere.
I agree that both a) and b) would have a similar effect in that the widget manufacturer puts to work resources (labor, machines) which would otherwise not be utilized. I wouldn’t recommend either a) or b) because there are many more efficient ways to stimulate the economy. One that my father, who happens to be an economist, has promoted is a temporary tax credit for new hires. More detail. If there are some roads you were going to build a couple years from now, speeding up that investment is probably a good idea in an economic downturn. I’m not defending legislation that actually got passed… I try not to pay too much attention.
Then why did you say this, in the very comment I was replying to?
That’s the same as recommending a)!
It doesn’t matter that you can think of better ways; the problem is with a view of the economy that regards either of a) or b) as “good for the economy”. And you in fact hold that view.
We were asked a sort of odd question: which apartment choice would help the economy, not taking into account the individual’s preferences about apartments. Those preferences in fact dominate the overall effect on the economy. I wouldn’t recommend anyone personally attempt Keynesian stimulus.
Increasing the amount of money changing hands only helps in certain circumstances, and even then it is not necessarily the dominant effect.
What about the examples of intelligent stimulus I offered?
Coming back to this question after a few years, I was able to find a surprisingly simple Econ 101 answer in five minutes. To zeroth order, there’s no change because the amount of goods and services in the economy stays the same. To first order, allowing a deal to be freely made usually increases total value in the economy, not just the value for those making the deal; so this deal is good for the economy iff both sides agree to it.
That sidesteps all complications like “the parents are happy to help their child”, “the apartment might have facilities that the child doesn’t need”, etc. I guess reading an econ textbook has taught me to look for ways to estimate the total without splitting it up.
Here’s another question to chew on:
Suppose you’re in a country that grows and consumes lots of cabbages, and all the cabbages consumed are home-grown. Suppose that one year people suddenly, for no apparent reason, decide that they like cabbages a lot more than they used to, and the price doubles. But at least to begin with, rates of production remain the same throughout the economy. Does this help or harm the economy, or have no effect?
In one sense it ‘obviously’ has no effect, because the same quantities of all goods and services are produced ‘before’ and ‘afterwards’. So whether we’re evaluating them according to the ‘earlier’ or the ‘later’ utility function, the total value of what we’re producing hasn’t changed. (Presumably the prices of non-cabbages would decline to some extent, so it’s at least consistent that GDP wouldn’t change, though I still can’t see anything resembling a mathematical proof that it wouldn’t.)
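One way to make the “obviously no effect” intuition precise is to value output at fixed base-year prices, which is what real GDP does. A toy two-good calculation, with invented quantities and prices:

```python
# Hypothetical two-good economy: cabbages and "other".
# Quantities are unchanged before and after the taste shift.
q = {"cabbage": 1000, "other": 5000}
p_before = {"cabbage": 1.0, "other": 2.0}   # prices before the shift
p_after  = {"cabbage": 2.0, "other": 2.0}   # cabbage price doubles

def gdp(prices, quantities):
    return sum(prices[g] * quantities[g] for g in quantities)

real_before = gdp(p_before, q)   # valued at base-year prices: 11000
real_after  = gdp(p_before, q)   # same quantities => real GDP unchanged
nominal_after = gdp(p_after, q)  # 12000: nominal GDP rises unless
                                 # other prices fall to compensate
```

So real GDP at base-year prices is provably unchanged, while what happens to nominal GDP depends entirely on whether non-cabbage prices adjust, which the scenario leaves open.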
What exact metric do you have in mind?
I’d be about equally happy if offered a solution in terms of GDP or some more abstract metric like “sum of happiness”.
Trouble is, all these macroeconomic metrics that can be precisely defined have only a vague and tenuous link to the actual level of prosperity and quality of life, which is impossible to quantify precisely in a satisfactory manner. Moreover, predicting the future consequences of economic events reliably is impossible, despite all the endless reams of macroeconomic literature presenting various models that attempt to do so.
Thus, if you want to ask how your choice will affect the nominal GDP for the current year or some such measure, that’s a well-defined question (though not necessarily easy to answer). However, if you want to interpret the result as “helping” or “hurting” the economy, it requires a much more difficult, controversial, and often inevitably subjective judgment.
Of course, GDP only measures goods and services sold, not “household production.”
That’s only one of the main problems with GDP. Here’s a fairly decent critique of the concept written from a libertarian perspective (but the main points hold regardless of whether you agree with the author’s ideological assumptions):
http://www.econlib.org/library/Columns/y2010/HendersonGDP.html
In addition to these criticisms, I would point out the impossibility of defining meaningful price indexes that would be necessary for sensible comparisons of GDP across countries, and even across different time periods in the same country. The way these numbers are determined now is a mixture of arbitrariness and politicized number-cooking masquerading as science.
It is certainly true that some people make too much of GDP, but those numbers can be pretty helpful for answering certain research questions. Let’s not throw the baby out with the bath water.
To continue on your metaphor, it’s not clear to me if there is a baby worth saving there at all. Even if there is, the baby is submerged in an enormous cesspool of filthy and toxic bathwater that’s been poisoning us in very nasty ways for a long time.
To be clear, you are suggesting we might not lose anything by giving up measuring and using GDP figures? I’ll side with the majority of the economics profession… they aren’t perfect but they mostly use GDP data in a reasonable way.
Just so we’re on the same page, could you explain what it would look like if economists’ collective wisdom were actually so bad that you would agree they use GDP data in an unreasonable way?
Because you can’t just look at the fact the top economists all agree—they’d do that even if the field were collectively garbage. There has to be some real-world entanglement which would reveal the failure of their ideas, and I want to know what you expect such a failure to look like.
I’m a sociologist*, and there is nothing sociologists like to do more than point out where economists go wrong. So if GDP were a worthless figure, I expect one of my fellow sociologists would already have convinced me of that.
I’m not saying economists never overinterpret GDP figures, and I’m not saying the consensus of macroeconomists is always correct.
Though I think we might both be better served by quitting this conversation and reading actual experts (I don’t claim to be one), I would like to make sure we’re on the same page about the implications of your criticism. Are you not saying that it is essentially worthless to attempt to study economic growth or business cycles empirically because the data is so poor?
*if you can be one without having completed your dissertation yet.
This sounds to me like a case of mistakenly thinking “someone would have noticed!”. What exactly would sociologists have noticed and hasn’t happened? Remember, “my echo chamber in academia agrees with me” doesn’t count as evidence!
And, FWIW, sociologists (and a lot of the left in general) do complain about GDP—they’re the ones spearheading the push to use alternate metrics like “Gross National Happiness” and other things. I think a lot of them are nutty, but at least they’re identifying values that need to be looked at.
But I have read the experts! Top economists like Greg Mankiw, Paul Krugman, and Scott Sumner blog and lay out their arguments in detail, and the economic basis of their arguments is exactly as I have portrayed it! Sumner in particular believes (mistakenly, IMO) that nominal GDP is a crucial measure.
Krugman certainly relies heavily on measuring real GDP growth and equates it with progress. And James_K, who claims to be an economist, just came out of the woodwork and endorsed exactly what I’ve accused economists of, though asserting (on a basis I find shaky) that they don’t really make that big a deal out of GDP.
With the currently studied data, yes, though with different measures, better progress could be made. In the past I’ve suggested measuring non-cash and non-market production, subtracting certain “bad” activities from GDP (i.e. things which represent a response to destruction, as it’s indicative of merely replacing some capital with other capital), measuring product degradation in calculating CPI, and using insulin as a better inflation gauge.
Hey, I’m fine with calling you one if you’re fine with calling me an engineer despite just having a bachelors and years of field work but not a P.E. license.
I agree that GDP is imperfect. If it were easy to perfect then it would have been done already. Should more resources be devoted to the issue? Probably. I support the use of multiple measures of wealth and well-being. But I do think that when GDP goes up, that usually indicates good things are happening. Other indicators usually track it.
I’m not trying to deny you’ve noticed a problem, I just think that you’re overstating it because even though GDP is imperfect, there is still a lot to be learned from empirical research that uses it.
Oh boy, we should bring Taleb in here.
If we’re going to do metaphors, then yes, you’re right, but we also have to make sure we’re not drinking the bathwater. The bathwater is for bathing, not for drinking. GDP should be used as a very rough cross-country comparison, not as a measure of how the economy’s general ability to satisfy wants changes over short intervals.
Interestingly enough, I was arguing roughly your position a few years ago. But now, seeing how economists deliberately prioritize GDP over the fundamentals it’s supposed to measure, I can’t even justify defending it for purposes other than, “The US economy is more productive than Uganda’s.”
The essay at the link talks about government waste. Is it meaningful to talk about waste in business, or should it all be considered to be at least educational?
Regarding the end-products, one essential difference is that if a business can find private consumers who will purchase its product with their own money and of their own free will, this constitutes strong evidence that these customers assign some positive value to this product, so it can’t be fairly described as “waste.” In contrast, for many things produced by the government, no such clear evidence exists, and even if one is not of particularly libertarian persuasion, it seems pretty clear that many of them are wasteful in every reasonable sense of the term. Yet all consumer and (non-transfer) government spending is added to the GDP as equivalent.
When it comes to waste generated by inefficiencies, miscalculations, employee misbehavior, and perverse incentives, some amount of wasteful efforts and expenses is obviously inevitable in the internal functioning of any large-scale operation. It does seem pretty clear that in most cases, the incentives to minimize them are much stronger in private businesses than in governments, though unlike the previous point, this one is a matter of degree, not essence. However, when it comes to the GDP accounting, there are important differences here.
The reason is that all non-transfer spending by the government will be added to the GDP, whereas spending by businesses is added only if it constitutes investment (as opposed to mere procuring of the inputs necessary for production). As far as I know, the exact boundary in the latter case is a matter of accounting conventions, though in most cases, it does seem clear which is which (e.g. for a trucking company, buying fuel is not an investment, but buying new trucks is). Therefore, whatever the actual amount of wasteful spending by businesses might be, not all of it will be added to the GDP, unlike the wasteful spending by governments.
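A stripped-down expenditure calculation makes the asymmetry explicit; the figures below are invented for the trucking example:

```python
# Hypothetical GDP accounting for one year.
# Under expenditure accounting, only final spending counts;
# intermediate inputs do not.

truck_purchases = 200_000  # business investment: counts toward GDP
fuel_purchases  = 50_000   # intermediate input: does NOT count directly
                           # (its value shows up only via final output)
govt_project    = 100_000  # non-transfer government spending: counts
                           # in full, waste and all

gdp_contribution = truck_purchases + govt_project  # 300000
```

So a dollar wasted on fuel by a business is filtered out of GDP unless it raises the price of final output, while a dollar wasted on a government project goes straight into the total.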
Thanks for that link. I hadn’t realized Henderson had written that, let alone just a few months ago! Its recency means he could critique the stimulus arguments of the last two years, making basically the same arguments I do.
My only complaint is that he noted that leaving off non-market exchanges (i.e. maid becoming wife) causes GDP to be understated, when he should have discussed its impact on the rate of change in GDP, which is more important.
I recommend going to an econ textbook for good questions.
My (admittedly simple-minded) answer would be “other things being equal it has no effect at all”.
Each day you and your parents do whatever it is you do, creating a given amount of wealth (albeit perhaps in such a way that it’s impossible to say exactly how much of this wealth you personally created, rather than your colleagues, or the equipment you use). Then a bunch of wealth gets redistributed in a funny way (through wages and rents being paid). But changing the way that wealth is redistributed doesn’t affect the ‘total rate of wealth-generation’ which is what GDP is trying (sometimes unsuccessfully, as James_K says) to measure. In just the same way, getting a pay rise doesn’t in itself help the economy (but it may have been caused by you doing more valuable work, which does help).
I’m pretty sure this is wrong. If I have a spare apartment and start renting it out, I’m creating wealth, not just redistributing it. So changing the pattern of who rents from whom should influence the total amount of wealth created.
Though I should clarify that when I talk about “the size of the economy” I’m talking about something intangible—the ‘wealth of the nation’, or more precisely the ‘nation’s rate of wealth-creation’ - rather than simply GDP. Perhaps GDP will reflect the changing rents, perhaps not, depending on which type of GDP we’re talking about (I seem to recall that there are several, including a ‘spending’ measure and an ‘income’ measure.)
But we’re not talking about someone renting a previously empty apartment, we’re talking about a change of occupier. The ‘wealth’ of the apartment is merely being ‘consumed’ by someone else.
Suppose without loss of generality (?) that the person who was previously in your parents’ apartment is now in your old apartment. Then we can describe the change as follows:
1. Two people have swapped apartments.
2. They may be paying different rents from before.
Neither 1 nor 2 in itself changes the size of the economy. (Although, if a rent goes up because an apartment is more desirable then that changes the size of the economy.)
Apartments don’t have a single intrinsic “desirability” value. Different people assign different values to the same apartment. If you think about it, the fact that different people can value a thing differently is the only reason any deals happen at all. The sum you agree to pay is a proxy for the value you place on the thing.
No, you can’t assume without loss of generality that the person who was previously in my parents’ apartment will be willing or able to move to mine. It depends on the relationship between X and Y.
But the set of living spaces is the same as before. Can’t we assume for simplicity that, even if it’s not as simple as two people swapping places with each other, what we have is a ‘permutation’ such that all previously occupied houses and apartments remain occupied?
Then once again we can factor the change into (1) a permutation and (2) a change of rent, and ask whether either of them changes the wealth of the nation. I’m pretty sure that (2) in itself has no effect—it’s just a ‘redistribution’ between landlords and their tenants. Whether (1) has an effect depends on whether or not we’re including the fact that different people may make different assessments of desirability (i.e. whether different people have different preferences about the kind of apartment they’d like to live in.)
Of course you’re quite right that different people do have different preferences—I was merely ignoring this for simplicity—but in any case the statement of the problem says nothing explicit about your or anyone else’s preferences, it only talks about X and Y. Are your apartment-preferences supposed to change depending on the values of X and Y?
You’re right that (2) has no effect, but (1) probably does have effect. I thought we could somehow guess the effect of (1) by looking at X and Y, but now I see it’s not easy.
There is other information you want to consider. Tax rates for example, and whether or not the economy is in the sort of downturn that would benefit from stimulus or not.
Regardless, the effects on aggregate supply and demand will be tiny. How much you and your parents value these alternatives is what matters most.
I’m not asking about what I should decide, I’m asking about the sign of those tiny effects on the country as a whole. Is it actually a difficult question in disguise? Why? I know next to nothing about economics, but the question sounds to me like it should be really easy for anyone qualified.
I think the best way to measure it in any meaningful way would be to consider the same scenario with millions of people doing it instead of just one, but even then it doesn’t look like it makes much of a difference.
This is a good point. What happens in this individual case would be dominated by random facts about the individuals directly involved. If you imagine the same situation repeated many times (100 should be plenty), the randomness cancels out.
So you might think. Sensitivity to initial conditions!
Care to explain why we should expect sensitivity to initial conditions to matter in the particular example being discussed here?
I am struggling to convey this, so I’ll have to think about it more.
For now, though: I do think that differences in the initial conditions would be propagated by adaptive individuals and institutions (rather than smoothed away). That should lead to bifurcations and path dependencies that would generate drastically different outcomes. Enough that averaging them would be meaningless.
Why do you think repeating it many times would converge? Are the statistical limit theorem conditions really met? I don’t think so.
None of this really explicitly says that you wouldn’t be able to at least figure out the sign of the change. It might be computationally intractable but qualitatively determinable in special cases.
Fascinating talk (Highly LW-relevant)
http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html
These days, I sometimes bump into great new ideas[tm] that are at times well proven, or at least workable and useful—only to remember that I already used that idea some years ago with great success and then dumped it for no good reason whatsoever. Simple example: in language-learning write-ups, I repeatedly find the idea of an SRS, that is, a program which does spaced repetitions at nice intervals and consistently helps in memorizing not only language items but also all other kinds of facts. Programs and data collections are now freely available—but I already programmed my own program for that about 14 years ago as a nice entry-level programming exercise, and used it quite extensively and successfully for about 2 years in school, till I suddenly stopped. That made me wonder which other great ideas I have already used and discarded, why my former self would do such a thing, and, to make it a public question: which great things might LWers have tried and discarded for no particular reason?
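For anyone curious what the core of such an SRS program looks like, here is a minimal sketch (not the program described above; the `Card` fields and the stretch-on-success schedule are illustrative assumptions, loosely in the spirit of the SM-2 family of algorithms): each successful recall multiplies the review interval, and a failure resets the card to tomorrow.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next review
    ease: float = 2.5           # multiplier applied after each success

def review(card: Card, remembered: bool) -> Card:
    """Update a card's review interval after one repetition.

    Success stretches the interval by the ease factor, so well-known
    items show up less and less often; failure resets the interval
    so the card comes back tomorrow.
    """
    if remembered:
        card.interval_days *= card.ease
    else:
        card.interval_days = 1.0
    return card
```

Real systems also adjust the ease factor per card based on graded answers, but the interval-stretching loop above is the part that does the memorization work.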
Another obvious example from my own stack would be the use of checklists to pack for holidays. Worked great for years and still does.
That’s kind of hard—if they were so great, how could we remember they were great and also not immediately reinstate them?
Gawande on the need to develop competent systems for delivering medical care
(Closing parenthesis.)
Thanks.
Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)
Just in case you were wondering too.
I was wondering indeed. That was surreal.
The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchell Porter who had linked to one or two things related to that, though I may be misremembering.) But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes “here’s a directed acyclic graph… we’re going to add on a teensy weensy few extra assumptions… and out of it construct the Minkowski metric and relativistic transformations”
I’m slowly making my way through this paper (partly slowed by the fact that I’m not all that familiar with order theory), but the reason I mention the paper (A Derivation of Special Relativity from Causal Sets) is because I can’t help but wonder if it might give us a hook to go in the other direction. That is, if this line of research might let us bring the mathematical machinery of much of physics to help us analyze stuff like Bayes nets and decision theory and give us a (potentially) really powerful mathematical tool.
Maybe I’m completely wrong and nothing interesting will come of trying to “reverse” the causal set line of research, (but causal set stuff is neat anyways, so at least I get some fun from reading and thinking about it) but does seem potentially worth looking into.
Besides, if this does end up being a useful tool, it would be perhaps one of the biggest and subtlest punchlines the universe pulled on us: since causal-sets are an approach to quantum gravity, if it ended up helping with the rationality/AI/etc stuff...
That would mean that Penrose was right about quantum gravity being a key to mind… BUT IN A WAY ENTIRELY DIFFERENT THAN HE INTENDED! bwahahahaha. :)
Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? (google techtalk by Anders Sandberg)
I assume someone has already linked to this but I didn’t see it so I figured I’d post it.
Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn’t terribly surprising. What delighted and amazed me was the follow-up that it was hoped that such a plan would lead to a more responsive government, but all that was known was that such plans have worked in democratic societies, and it wasn’t known whether causality could be reversed to use such a plan to make a society more democratic.
Such plans work in societies with rule of law, and fail miserably in societies that are clan based and tribal. A quarter of Afghanistan’s GDP may go to bribes and shakedowns. A more honest description from NPR would be that historically, mineral wealth when controlled by deeply corrupt governments like Afghanistan’s, is primarily used for graft and nepotism, benefiting a few elites in government and industry while funding the oppression of everyone else.
In other words, Afghanistan is more like Nigeria than Norway.
Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.
I am instead looking for an analysis of how people’s varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psychology, society, or economics.
By “bargaining power” I mean the ability to steer transactions toward one’s preferred outcome within a zone of win-win agreements. For example, if we are trapped on a desert island and I have a computer with satellite internet access and you have a hand-crank generator and we have nothing else on the island except that and our bathing suits and we are both scrupulously honest and non-violent, we will come to some kind of agreement about how to share our resources...but it is an open question whether you will pay me something of value, I will pay you something, or neither. Whoever has more bargaining power, by definition, will come out ahead in this transaction.
I’m currently reading Thomas Schelling’s Strategy of Conflict and it sounds like what you’re looking for here. From this Google Books Link to the table of contents you can sample some chapters.
Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he’s probably lying, since Guede most likely is the killer, and it’s not who this new guy claims. But what can you do against the irrational?
I found this on a Slashdot discussion as a result of—forgive me—practicing the dark arts. (Pretty depressing I got upmodded twice on net.)
Should be easy to test his claims...
I sometimes wonder, is the Italian judicial system really that lousy or is there some sort of linguistic or cultural barrier there.
Slashdot threads have a bad enough signal to noise ratio as is. Please don’t do that sort of thing.
Should I stop doing this too? Or at least wait until people start challenging the term “top theologian”?
Yes, as a regular reader of Slashdot, I’d prefer if you didn’t do that. I don’t see what you are accomplishing from these remarks. It really does come across as simple trolling.
You know what, bro? I’m not even going to ask your opinion about this.
Notable:
That’s at least humorous although I have to inquire if the other AC who replies to you about also being a Christian on Slashdot is also you.
Edit: Also, to be clear: My general response whenever these sorts of dark arts come up is very simple: If one needs to do this to get people convinced of one’s position, that’s cause to worry whether one’s position is actually correct.
Um, I don’t believe the position I linked if that’s what you’re worried about...
No, I mean you are deliberately portraying an alternate position as stupid apparently hoping that people will think that reversed intelligence is stupidity. That’s a serious dark art. So if one is going to do that sort of thing one should worry that maybe one’s position is really not correct.
Hm, good point. I guess I am fake justifying. I’ll admit, I like to troll, and I’m kinda let down that no one has ever objected to the term “top theologian”, saying, “wait, what exactly do you have to do to count as a top theologian? What predictions, exactly?”
I actually participate as a “friendly troll” on a private board on gamefaqs.com. “Friendly troll” in that most everyone there knows I’m a troll and just makes fun of the people who make serious replies to my topics; and I casually chat with people there about what troll topics I should make. The easiest one is, “Isn’t evolution still basically just a theory at this point?”
In high school (late 90s), I would troll chatrooms and print transcripts to share with my friends the next day. One of them was a real “internet paladin” type and said, “people like you should be banned from the internet”. My crowning “achievement” was to say a bunch of offensive stuff in a gameroom on a card game site, which got a moderator called in; but by that point, everyone was yelling really offensive stuff at me, and got themselves banned. I was left alone because I made (mocking) apologies just in time, and the moderator couldn’t scroll up enough to see most of my earlier comments.
I’ve mostly toned it down and gotten away from it but I still do it here and there. Well, not here, but you get the point.
It can be fun, I will guiltily admit, but not nearly as much fun as trying to present what you actually believe in a clever enough way that somebody goes… click. (In which endeavour, by all means be sarcastic and use pathos).
You have to do some sort of calculus on what the upshot of this trolling is though… if the upshot is increased irrationality, well, there isn’t much functional difference between you and your alter ego.
And all the Anonymous_Coward arguments I’ve seen that you listed are BETTER arguments (sad as that is) than most sincere ones in support of similar conclusions. The Good Soldier Švejk isn’t actually supposed to be a good soldier. :P
Hm, so you’re saying I should use my clever trolling skills to promote rationality, instead of to unsuccessfully satirize irrationality?
Because I used to do the reverse: whenever someone was making irritatingly stupid arguments, I would just add that technique to my trolling arsenal.
Just to add to this: Goebbels was perfectly right about the phenomenon of the Big Lie. If you repeat an argument—even a TERRIBLE argument—enough times, people will start to believe it. Exempli gratia:
‘Evolution is just a theory.’ ‘Where are the transitional forms?’ ‘Hurricane in a junkyard.’
There are the partisans of evolution by n.s. and then there are the partisans of creationism, and then there are the other 85% of people who are too busy getting their GED or feeding their kids or trying to make partner in the firm, to bother really thinking about these issues. A few exposures to an unchallenged, vaguely plausible-sounding meme are enough to put them in the ID camp (say), politically, for life. You are contributing to that irrationalist background noise!
Point taken. When forming a troll post, I make the arguments with the lowest ratio of length to “confusions one needs to disentangle in order to refute”. I use “isn’t evolution still basically just a theory at this point?” because it’s a slightly improved variant by that metric.
As with my other response, perhaps I could find the good-rationalist analog of this technique and optimize for that? Perhaps minimize the ratio of argument length to “confusions one needs to detour into to refute”?
I think part of what made me stray from “the path” was a tendency to root for the rhetorical “underdog” and be intrigued—excessively—with brilliant arguments that could defend ridiculous positions. I think I can turn that around here.
Oh, don’t get me wrong, I enjoy arguing for the other side too, provided it’s disclaimed afterward. It’s a good way to see your rationalization machine shift into high gear. There is always a combination of lies, omissions, half-truths, special pleading and personal anecdotes that can convince at least a few people that you’re right—or, MUCH better, that your position should be respected.
But… rationality is usually the rhetorical underdog. Tssk! :P
Want a brilliant argument defending a silly position? Try Plantinga’s evolutionary argument against naturalism. To ascend such lofty heights of obfuscation, bring lots of pressurized oxygen.
To wit:
-Evolution optimizes for survival value, not truth value in beliefs
-Beliefs are therefore adaptive but not necessarily true (you could, conceivably, believe that you should run away from a tiger because tigers like friendly footraces).
-Therefore, on naturalism, we should expect the reliability of our cognition to be low
-This means we should, if we accept naturalism, also accept that our cognitive apparatus is too flawed to have good reasons to accept naturalism. QED, atheist.
More or less. I’m saying it’s a successful and rather amusing satire for people here. But by the standards of internet discourse among the teeming multitudes, you’re actually being fairly rhetorically effective. Case in point:
The “serious theologians” line makes me smile. But that is actually the tack taken by many of the more ‘sophisticated’ goddites. It works rhetorically when you think about it. They are saying we are avoiding our belief’s weak points.
Hm, maybe I’ll try to frame my real arguments as trolling, and see if that makes it easier to effectively convey them. Thanks for the idea.
It’s your call… I didn’t quite mean “troll with your actual beliefs,” so much as “use the considerable rhetorical skills you have to advance your sincere position.”
Right, I meant that by framing it as a troll exercise I could come up with a better phrasing of my argument, not that I would necessarily slip in the angering jabs that make something a genuine troll post.
You were arguing against your real opinion as a fifth columnist? May I ask why?
(Well done, by the way, in a technical sense. Just the right amount of character assassination: “Sollecito and Knox were known to be practitioners of dangerous sex acts.”)
Just don’t kill the younglings, Anakin!
I thought it would get modded down and then provoke someone as well-informed as komponisto to thoroughly refute it, and make people realize how stupid those arguments were.
Damn … now that’s starting to sound like a fake justification!
Eh, I guess I just like trolling too :-/
Internet, Silas. Silas, Internet. ;)
I think you will find an ample number of inspiringly bad arguments out there, without adding to their number. I believe this is called cutting off one’s nose to spite one’s face.
FYI, this was discussed previously here
Lately I’ve been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?
Calling them “dark arts” is itself a tactic for framing that only affects the less-rational parts of our judgement.
A purely rational agent will (the word “should” isn’t necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further its goals.
The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they’re wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic.
Put another way, persistent disagreement indicates mutual contempt for each others’ rationality. If the disagreement is resolvable, you don’t need the dark arts. If you’re considering the dark arts, it’s purely out of contempt.
If both parties are imperfectly rational, limited use of dark arts can speed things up. The question shouldn’t be whether it’s possible to present dry facts and logic with no spin, but whether it’s efficient. There are certain biases that tend to prevent ideas from even being considered. Using other biases and heuristics to counteract those biases—just to get more alternative explanations to be seriously considered—won’t impair or bypass the rationality of the listener.
Dark arts, huh? Sometime ago I put forward the following scenario:
Bob wants to kill a kitten. The FAI wants to save the kitten because it’s a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?
(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)
Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences might be described by some other utility function.
Is that actually the FAI’s only or best technique?
Off the top of my non-amplified brain:
Reward Fred for not torturing kittens.
Give Fred simulated kittens to torture and deny Fred access to real kittens.
Give Fred something harmless to do which he likes better than torturing kittens.
ETA Convince Fred that torturing kittens is wrong.
Our CEV is (and has to be) detailed enough to answer the question of “do we want that?”. Saving a kitten is a good thing. Being truthful to Bob is a good thing. Not torturing Bob is a good thing. The relative weights of these good things determine the FAI’s actions.
I’d say that the FAI should calculate some game-theoretic chance of torturing Bob for 50 years based on relative pain of kitten death and of having to inflict the torture. Depending on Bob’s expected rationality level, we could tell him “you’ll be tortured”, or “you might be tortured”, or the actual mechanism of determining whether he is tortured.
Actually, strike that. Any competent AI will find ways aside from possible torture to make Bob not want that. Either agree with Bob’s reason for killing the kitten, or fix him so he only wants things that make sense. I’m not sure how friendly this is—I haven’t seen a good writeup or come to any conclusions myself of what FAI does with internal contradictions in a CEV (that is, when a population’s extrapolated volition is not coherent).
My thoughts about this problem are kind of a mess right now, but I feel there’s more than meets the eye.
Ignore the torture, “possible torture” and all that. It’s all a red herring. The real issue is lying, tricking humans into utility-increasing behaviors. It’s almost certain that some combination of “relative weights of good things” will make the FAI lie to humans. Maybe not the Bob+kitten scenario exactly, but something is bound to turn up. (Unless of course our CEV places a huge disutility on lies, which I’m pretty sure won’t be the case.) On the other hand, we humans quickly jump to distrusting anyone who has lied in the past, even if we know it’s for our own good. So now the FAI has huge incentive to conceal its lies, prevent the news from spreading among humans. I don’t have enough brainpower to model this scenario further, but it troubles me.
Lying is a form of manipulation, and humans don’t want/like to be manipulated. If the CEV works, then it will understand human concepts like “trust” and “lying” and hopefully avoid manipulating people. The only situations where it will intentionally manipulate people are when it is trying to do what is best for humanity. In these cases, you don’t have to worry, because the CEV is smarter than you but is still trying to do the “right thing” that you would do if you knew everything it knew.
Well… that depends...
Exactly.
Yes.
Yes. (When we say ‘rational agent’ or ‘rational AI’ we are usually referring to “instrumental rationality”. To a rational agent, words are simply symbols to use to manipulate the environment. Speaking the truth, and even believing the truth, are only loosely related concepts.)
Almost certainly, but this may depend somewhat on who exactly it is ‘friendly’ to and what that person’s preferences happen to be.
That agrees with my intuitions. I had a series of ideas developing around the notion that exploiting biases was sometimes necessary, and then I found:
Eliezer on Informers and Persuaders
It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.
I’m not sure where Eliezer got the ‘just exactly as elegant as the previous Persuader, no more, no less’ part from. That seems completely arbitrary. As though the universe somehow decrees that optimal informing strategies must be ‘fair’.
Q: What Is I.B.M.’s Watson?
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all
A: what is Skynet?
Sounds a little like Shalmaneser.
And now it’s time for the Daily Double!
In the video, I didn’t understand whether that series of wrong answers was staged or actually happened.
Very impressive though. Class.
Apologies for posting so much in the June Open Threads. For some reason I’m getting many random ideas lately that don’t merit a top-level post, but still lead to interesting discussions. Here’s some more.
How to check that you aren’t dreaming: make up a random number that’s too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.
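A sketch of what that check might look like (the trial-division routine and the size of the random number are my own illustrative choices): the computer does the hard factoring step, and the pen-and-paper step reduces to re-multiplying the factors, which is feasible by hand even when factoring was not.

```python
import random

def factor(n: int) -> list[int]:
    """Trial-division factorization: the step that needs the computer."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Make up a number too large to factor in your head...
n = random.randrange(10**8, 10**9)
factors = factor(n)

# ...then the pen-and-paper step is just re-multiplying the factors,
# which is the asymmetry the dream test relies on: multiplication is
# easy to verify by hand, factoring is not.
product = 1
for p in factors:
    product *= p
assert product == n
```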
How to check that you aren’t a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind’s workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
Of course, both those arguments fall apart if the deception equipment is “unusually clever” at deceiving you. In that case both questions are probably hopeless.
The first one fails terribly. I’ve had dreams where I’ve thought I’ve proven some statement I’m thinking about and when waking up can remember most of the “proof” and it is clearly incoherent. No, subconscious, the fact that Martin van Buren was the 8th President of the United States does not tell me anything about zeros of L-functions. (I’ve had other proofs that were valid though so I don’t want the subconscious to stop working completely).
The second one seems more viable. May I suggest using something like electromagnetic stimulation of specific areas of the brain rather than deliberately damaging sections? For that matter, the fact that drugs can alter thought processes not just perception also strongly argues against being a brain in the vat by the same sort of logic.
I like your idea way better than mine. Smoke dope to prove you’re not in the Matrix!
Regarding the first point, yes, I guess dreams can hijack your reasoning in arbitrary ways. But maybe I’m atypical like that: whenever my dreams contain verse, music or math proofs, they always make perfect sense upon waking. They do sound “creatively weird”, and I must take care to repeat them in my mind to avoid amnesia, but they work fine on real world terms.
No, there’s no way of knowing that you’re not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.
The “brain in the vat” idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.
If you are a brain in a vat then that should alter sensory perception. It shouldn’t alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn’t purely sensory.
You don’t seem to be familiar with this concept.
This is the entire point of the brain in the vat idea. It’s not that “you could posit it”, you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate “experienced brain damage” (in our world) with “reduced mental faculties”, that just means that the vat imposes that correlation on us through its brain life-support system.
Although I don’t claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.
Hmm. Your comment has brought to my attention an issue I hadn’t thought of before.
Are you familiar with Aumann’s knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): “I know that E”. Note that the operator’s output is of the same type as its input—a subset of the all-encompassing universe of discourse—and so it’s natural to try iterating the operator, obtaining K(K(E)) and so on.
Which brings me to my question. Let E be the event “you are a thing that thinks”, or “you exist”. You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E—smaller subsets of the universe of discourse—so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!
When I was younger, a group of my friends started teasing others because they didn’t know the Hindu-Arabic number system. In reality, of course, they did know it, but they didn’t know that they knew it—that was the joke.
I have a sensory/gut experience of being a thinking being, or, as you put it, E.
Based on that experience, I develop the abstract belief that I exist, i.e., K(E).
By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.
So I like the distinction between E and K(E), but I’m not sure what insights further recursion is supposed to provide.
I just saw this and realized I basically just expanded on this above.
I wasn’t familiar with this description of “world states”, but it sounds interesting, yes. I take it that positing “I am a thing that thinks” is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, my following logic doesn’t apply.
I would argue that K(K(E)) is actually a necessary condition for K(E). Because if I don’t know that I know proposition A, then I don’t know proposition A.
Edit/Revised: I think all you have to do is realize that “K(K(A)) false” permits “K(A) false”. At first I had a little proof but now it seems just redundant so I deleted it.
So I guess I disagree: I think the iterations K(K...) are actually weaker statements, which are necessary for K(A) to be achieved. Consequently I don’t see how you can learn anything beyond K(A).
K(A) is always a stronger statement than A because if you know K(A) you necessarily know A. (To get the terms clear: a “strong” statement corresponds to a smaller set of world states than a “weak” one.) It is debatable whether K(K(A)) is always equivalent to K(A) for human beings. I need to think about it more.
The formal definition of K(E) = {s \in S | P(s) \subset E}, where P(s) is the cell containing s in a partition P of S, ensures that K(K(E)) = K(E). It’s easy to see: if s \in K(E) then P(s) \subset E, thus s \in K(K(E)), and similarly for s \notin K(E).
As for the informal sense, I don’t see much use for K(K(E)) where E is a plain fact; if I am aware that I know E, introspecting on that awareness will provide as many K’s as I like and little more. If I am not aware that I know E (a deeply buried memory?), I will become aware of it when I remember it. But if I know that I know some class of facts or rules, that is useful for planning. However, I can’t come up with a useful example for K(K(K())) and higher.
Addition: Aumann’s formalization has limitations: it can’t represent false knowledge, memory glitches (when I know that I know something, but I can’t remember it), meta-knowledge, or knowledge of rules of any kind (I’m not completely sure about rules).
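The idempotence claim, K(K(E)) = K(E) under the partition definition, is easy to check mechanically. Here is a toy model with four world states; the universe, partition, and events below are made up purely for illustration:

```python
def K(event: frozenset, partition: list) -> frozenset:
    """Aumann knowledge operator: the set of states in which the agent
    knows the event, i.e. states whose entire partition cell lies
    inside the event."""
    return frozenset(s for cell in partition if cell <= event for s in cell)

# Toy universe of four world states; the agent cannot distinguish
# states 1 and 2 (they share a partition cell).
S = frozenset({1, 2, 3, 4})
P = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

E = frozenset({1, 2, 3})                 # an event
assert K(E, P) == frozenset({1, 2, 3})   # cells {1,2} and {3} fit inside E
assert K(K(E, P), P) == K(E, P)          # idempotence: K(K(E)) = K(E)

F = frozenset({1, 3})                    # an event cutting a cell in half
assert K(F, P) == frozenset({3})         # in state 1 the agent can't rule out 2
assert K(F, P) <= F                      # knowledge implies truth: K(E) ⊆ E
```

The last two lines also illustrate why K(E) is a stronger (smaller) event than E, as discussed above: knowing E rules out the states where E happens to hold but the agent can’t distinguish them from not-E states.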
When I’ve read about the brain-in-the-vat example before, they normally just talk about sensory aspects. People don’t mention anything like altering the brain itself. So at minimum, cousin_it has picked up a hole in how this is frequently described.
Considering how much philosophy is complete nonsense, I’d think that LWers would be more careful about using the argument that something in philosophy is widely known to be not resolvable. I agree that if, when people talk about the brain in the vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.
Altering the brain itself? The brain itself is the only thing there is to alter. The only thing that exists in the brain in the vat example is the brain, the vat, and whatever controls the vat. The “human experiences” are just the outcome of an alteration on the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this is working.
FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement.
The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can’t measure the round-trip signal delay… Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations—like brains in vats—can be detected pretty easily.
Um, if you’re a brain in a vat, then any “brain” you perceive in the real world, like on a “real world” MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you’re a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually really your brain. It may be hard to implement the brain-in-the-vat scenario, but once implemented, it’s absolutely undetectable.
Do you have access to the computer software of your choice in your dreams? That sounds unusually vivid to me, maybe even lucid. I’m lucky if I can find a working pen and a desk that obeys the laws of physics in my dreams.
I know I do. In the last couple of years I have gone from almost never remembering a dream to having dreams that are sometimes even more vivid than my memories of real life. I even had to check my computer one day to see whether or not what I remembered doing was ‘real’ or not.
Heck, I’m lucky if I can find trousers in my dreams.
Depends on how you define ‘lucky’ I guess. ;)
A similar method was used by the protagonist of Solaris to check whether he was hallucinating.
Ouch! I read Solaris long ago. It seems the idea stuck in my head and I forgot its origin. And it does make much more sense if you substitute “hallucinating” for “dreaming”.
The trick, then, is to instill in yourself a habit of regularly checking whether you are asleep (i.e., even when you are awake). A habit of thinking “am I awake? let me check” is the hard part, and without that habit your sleeping mind isn’t likely to question itself. Literature on lucid dreaming talks a lot about such tests. In fact, combined with “write dreams down as soon as you wake up” and “consume substance X”, it more or less summarizes the techniques.
The odd thing is that despite reading stuff about reality tests and trying to build a habit from doing them while awake, on the rare occasions I’ve had a lucid dream I’ve just spontaneously become aware that I’m presently dreaming. I don’t remember ever having a non-lucid dream where I’ve done a reality test.
Instead of fancy stuff like determining prime factors, one consistent dream sign I’ve had is utter incompetence in telling time from digital watches and clocks. This generally doesn’t tip me off that I’m dreaming though, and doesn’t occur often enough that I could effectively condition myself to recognize it.
There are also trance/self-hypnosis methods, like WILD; some people seem to be very successful with them.
Interesting. And personally I find experimenting with trance and self-hypnosis by themselves to be even more fascinating than vivid dreaming. If only I did not come with the apparent in-built feature of inoculating myself against any particular method of trance or self-hypnosis after a few successful uses.
I think “unusually clever” should be “sufficiently clever” in your caveat. I have very wide error bars on what I think would be usual, but I suspect that it’s almost guaranteed to defeat those tests if it’s defeated the overall test you’ve already applied of “have only memories of experiences consistent with a believable reality”.
In which case both questions are indeed hopeless.
Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don’t mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have?
And here is a public list of known nuclear accidents.
Notice that many of the incidents mentioned at your link don’t involve nuclear bombs at all: many involve leaks at research facilities and power stations. Here’s a chronological list of radiation incidents that caused injury from the start of the 20th century onwards. The vast majority don’t involve nuclear bombs.
Historically, unless you were in Hiroshima or Nagasaki, you would have been less likely to die from a nuclear bombing than you would have been to die from a radiation leak, picking up a lost radioactive source without recognizing it (or living with someone who’s brought one into your home), being poisoned with radiation by a coworker, or medical overexposure. (Note also that the list is surely incomplete.) It is possible that this trend will reverse in the future, but it’s not obvious that it will.
More generally, gwern sounds about right to me on the subject of terrorists putting together their own nuke. (Or hauling one up from the bottom of the ocean.)
Coincidentally, just the other day I learned of the banana equivalent dose as a way of placing the risk of radiation leaks in context.
I am not. To even suggest that this is a possibility anywhere near the level of a sovereign actor giving terrorists nukes is to dramatically overestimate terrorist groups’ technical competence, and also to ascribe basic instrumental rationality to them (a mistake; see my Terrorism is not about Terror).
Even if a terrorist could marshal the interest, assemble in one place the millions necessary, actually hire a world-class submersible, and, in the scant days they could afford, find the wreckage of a bomb, it would probably be useless. US nukes are designed to fail safe, so what if the wiring has corroded or the explosives are misaligned? No detonation. And that’s ignoring issues with radioactive decay. (Was the bomb a tritium-pumped H-bomb? Well, given tritium’s extremely short half-life, I’m afraid that bomb is now useless.)
Maybe, although remember there are a lot more players interested in obtaining nuclear weapons than just a few terrorists. And the best crimes are the ones no one knew were committed. Unsuccessful criminals are overrepresented compared to the ones that got away; I suspect the same is true for terrorists. Blowing up a building isn’t going to achieve your goals, but blowing up a city might. After all, it’s ended a war once, and just the threat stopped another from ever happening. Also, even if the bomb itself is useless, it is probably worth quite a bit of money, more than the millions it would take to retrieve it (maybe thousands as technology improves? There are some in shallower water. In 1958 the government was prepared to retrieve a lost bomb, but never located it.) I don’t honestly know a lot about nuclear weapons, but the materials in it, maybe even the design itself, would be worth something to somebody. Maybe said organization has the resources to salvage it; after all, they already had enough money to get it in the first place.
Even if no bombs go off, I wouldn’t be surprised if the government eventually gets around to searching for them and finds they’re not there. And there are other nuclear threats too. Although I can’t find anywhere to confirm it, it was floating around the internet that up to 80 “suitcase nukes” are missing. This quote from Wikipedia particularly disturbed me:
I will leave it at that for now; I’m not one of those paranoid people who go around ranting about nuclear proliferation or whatever. If there really is a problem, there’s not much we can do (except maybe try to get to those lost bombs first, or take anti-terrorism more seriously).
I prefer spending my precious mental CPUs on worrying about the US government going really bad.
Admittedly, a terrorist nuke (especially if exploded in the US) would be likely to cause the US government to take a lot more control.
I don’t take Lunev seriously. Defectors are notoriously unreliable sources of information (as I think Iraq should have proven. Again.).
The problem with nuclear terrorism is that atomic bombs come with return addresses—the US has always collected isotopic samples (eg. with aerial collecting missions in international airspace) precisely to make sure this is the case. (Ironically, invading Afghanistan and Iraq may’ve helped deter nuclear terrorism: ‘If the US invaded both these countries over just a few thousand dead, then it’s plausible they will nuke us even if we cry to the heavens that we just carelessly lost that bomb.’)
P. Z. Myers discusses the relevance of gender as a proxy for intelligence.
Related: Argument Screens Off Authority.
I don’t know the ins and outs of the Summers case, but that article has the smell of a straw man. Especially this (emphasis mine):
From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn’t that he said we should use gender as a proxy for intelligence, but merely that gender differences in ability could explain the observed under-representation of women in science.
The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.
Well, he also seems to be attacking a second group that does exist (those who say that there are fewer women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.
Well, I think P.Z. Myers is lying if he claims never to have heard of such people; they do exist. Robin Hanson, for one. More representative is conchis’s claim early in the comments that
Rewritten: I’ve heard hints along these lines in America, where girls get better grades, in both high school and college, than boys with the same SATs. This is suggested to be about conscientiously doing homework. If American colleges don’t want to reward conscientiousness, they could change their grading to avoid homework.
That would make them like my understanding of Oxford, where I believe grades are based on high-stakes testing, not on homework. But I thought admissions were based only on high-stakes testing, too. That is, I don’t even know what the quoted claim means by “grades,” nor have I been able to track down people openly admitting anything like it.
Do British students get grades other than A-levels? Are there sex divergences between the grades and A-levels? A-levels and predictions? I hear that Oxbridge grades are lower variance for girls than boys. I also hear that boys do better on the math SATs than on the math A-levels, which seems like it should be a condemnation of one of the tests.
Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it’s obvious that most people don’t distinguish well, in political situations, between incidental aid and explicit support.) Also, if evaluating individual intelligence is costly and/or inevitably noisy, it is (selfishly) rational for evaluators to give significant weight to gender, i.e. discriminate. And given how little people understand statistics, and the extent to which judgments of status/worth are tied to intelligence and to group membership, it seems inevitable that belief in group differences will lead people to discriminate far more than would be rational.
Can’t this be said of just about all straw men? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding?
Say we have two somewhat similar positions:
Position A, which is false and maybe evil (in this case “we should discriminate against women when hiring scientists, because they aren’t as likely to be very smart”)
Position B, which is maybe true (in this case “the lack of female scientists could be due to the fact that they aren’t as likely to be very smart”)
A straw man is pretending that people arguing B are arguing A, or pretending that there’s no difference between the two—which seems to be what P.Z. Myers is doing.
You’re saying that position B gives support for position A, and, yes, it does. That can be a good reason to attack people who support position B (especially if you really don’t like position A), but that holds even if position B is true.
Agreed. I don’t necessarily approve of this sort of rhetoric, but I think it’s worth trying to figure out what causes it, and recognize any good reasons that might be involved. (I also don’t mean to say that people who use this rhetoric are calculating instrumental rationalists — mostly, I think they, as I alluded to, don’t recognize the possibility of saying things representative of and useful to an outgroup without being allied with it.)
Off That (Rationalist Anthem) - Baba Brinkman
More about skeptics than rationalists, but still quite nice. Enjoy.
I could have sworn that I’d seen this posted somewhere before, for example in this thread. Maybe it was on StumbleUpon...
Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.
Is there any more efficient way to do it?
Hmm… I don’t know about recent comments; I just go to the posts I’m following. Hit Ctrl+F and then type (or copy/paste) “load more comments” and go through and hit each one. Then erase it and type the current date or yesterday’s date in the format “date month” (18 June), and it will highlight all of those comments. (If you use YouTube a lot, you might already use this method on the “see all comments” page, except you have to type “hour” or “minute” instead of an exact time, which is actually more convenient.) When you’re done checking all of the new comments, you can erase that and put in “continue this thread” (is that right? I forget what it is exactly).
Hope that helps.
Use the RSS feed that appears on the recent comments page. I use reader.google.com to read my RSS feeds. This will allow you to scroll back in bulk using just the scrollbar then read at leisure. It also shows comments as ‘read’ or ‘unread’ based on where you are up to.
The only measure I know of that might make it more efficient to catch up on recent comments is for you to go to your preferences page, and where it says “Display 50 comments by default,” change the “50” to some larger number. I have been using “200” on a very slow (33.6 kbit/s) connection.
Are there periods in your life when you read or at least skim every comment made on Less Wrong? The reason I ask is that I am a computer programmer, and every now and then I imagine ways of making the software behind Less Wrong easier to use. To do that effectively, I need to know things about how people use Less Wrong.
Here’s my wishlist:
As much trn functionality as seems to be worth coding: in particular, the ability to default to seeing only unread comments (or at least a Recent Comments page for individual posts as well as for the whole site) while still having easy access to old comments; the ability to default to not seeing chosen threads and sub-threads; and tree navigation.
If you want to find out how people generally use the site, I think a top level post asking about it is the only way to get the questions noticed. If you post it, I’ll upvote it.
Seconded.
And in the absence of such a feature, my current compromise is to not look at a post until people have mostly stopped commenting on it, so that I can read the comments with threading and without redundancy. This is not very conducive to conversations, and as such I have mostly stopped commenting since adopting this strategy.
I also find this problem annoying and would like to see more recent comments on a page. I usually read through every comment on recent comments when I come to LW.
Thanks. I’ve got it set at 500 comments, but I don’t think it actually shows 500; and in any case, I think it’s just for comment threads, not for recent comments.
It’s akrasia, but yeah, I’ve been using Recent Comments to read or at least skim everything.
I don’t even have clear ideas of the right questions to ask about how people use LW, but a survey would be interesting.
I never noticed that before, but you are right: all the /comments/ pages I have asked for have 100 comments on them regardless of how I try to change that. (I tried setting the number in prefs to a smaller value, logging out and in again, following a “Next” link.)
(Oddly, although it will show me a page with 100 comments on it if I click it, the URL in the “Next” link at the bottom of a /comments/ page contains the string “count=50”.)
Does anyone happen to know the status of Eliezer’s rationality book?
The first draft is in progress.
Second draft, technically. The first draft was a rough outline of the contents.
I wasn’t counting that as a “draft”.
Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can’t be found, stop after a specified number of cycles. Don’t talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?
ETA: absent other suggestions, I’m going to call such devices “AI bombs”.
These ideas have already been investigated and documented:
Box: http://fragments.consc.net/djc/2010/04/the-singularity-a-philosophical-analysis.html
Stopping: http://alife.co.uk/essays/stopping_superintelligence/
If these precautions become necessary, the end of the world will follow shortly (which is the only possible conclusion of “AGI research”, so I guess the researchers should rejoice at the work well done, and maybe “relax a bit” as the world burns).
I don’t understand your argument. Are you saying this containment scheme won’t work because people won’t use it? If so, doesn’t the same objection apply to any FAI effort?
If my Vladimir-modelling heuristic is correct, he’s saying that you’re postulating a world where humanity has developed GAI but not FAI. Having your non-self-improving GAI solve stuff one math problem at a time for you is not going to save the world quickly enough to stop all the other research groups at a similar level of development from turning you and your boxed GAI into paperclips.
An AI in a simulated world isn’t prohibited from improving itself.
More to the point, I didn’t imagine I would save the world by writing one comment on LW :-) My idea of progress is solving small problems conclusively. Eliezer has spent a lot of effort convincing everybody here that AI containment is not just useless—it’s impossible. (Hence the AI-box experiments, the arguments against oracle AIs, etc.) If we update to thinking it’s possible after all, I think that would be enough progress for the day.
I don’t think it’s really an airtight proof—there’s a lot that a sufficiently powerful intelligence could learn about its questioners and their environment from a question; and when we can’t even prove there’s no such thing as a Langford Basilisk, we can’t establish an upper bound on the complexity of a safe answer. Essentially, researchers would be constrained by their own best judgement in the complexity of the questions and of the responses.
Of course, all that’s rather unlikely, especially as it (hopefully) wouldn’t be able to upgrade its hardware—but you’re right, software-only self-improvement would still be possible.
Yes, I agree. It would be safest to use such “AI bombs” for solving hard problems with short and machine-checkable solutions, like proving math theorems, designing algorithms or breaking crypto. There’s not much point for the AI to insert backdoors into the answer if it only cares about the verifier’s response after a trillion cycles, but the really paranoid programmer may also include a term in the AI’s utility function to favor shorter answers over longer ones.
What khafra said—also this sounds like propelling toy cars using thermonuclear explosions. How is this analogous to FAI? You want to let the FAI genie out of the bottle (although it will likely need a good sandbox as a testing ground).
Yep, I caught that analogy as I was writing the original comment. Might be more like producing electricity from small, slow thermonuclear explosions, though :-)
Not small explosions. Spill one drop of this toxic stuff and it will eat away the universe, nowhere to hide! It’s not called “intelligence explosion” for nothing.
That’s right—I didn’t offer any arguments that a containment failure would not be catastrophic. But to be fair, FAI has exactly the same requirements for an error-free hardware and software platform, otherwise it destroys the universe just as efficiently.
Sure, prototypes of FAI will be similarly explosive.
Aaron Swartz: That Sounds Smart
I recently read a fascinating paper that argued based on what we know about cognitive bias that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.
Unfortunately I can’t remember the title or the authors. Does anyone remember this paper? I’d like to refer to it in this talk. Thanks!
That would probably be “Why do humans reason” by Mercier and Sperber, which I covered in this post.
The very one. Thanks—and wow, that was swift!
Interview with Lloyd’s of London space underwriter.
http://www.lloyds.com/News_Centre/Features_from_Lloyds/News_and_features_2009/Market_news/60_seconds_with_David_Wade.htm
Feds under pressure to open US skies to drones
http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america
Looking through a couple of posts on young rationalists, it occurred to me to ask the question, how many murderers have a loving relationship with non-murderer parents?
Is there a way to get these kinds of statistics? Is there a way to filter them for accuracy? Accuracy both of ‘loving relationship’ and of ‘guilty of murder’ (i.e. plea bargains, false charges, etc.)
I started to write: The probabilities in my priors are so low that I don’t expect any update to occur, even if you could accurately measure. Then I thought: Wait, that’s what ‘prior’ means: of course I don’t expect any update to occur! Rationality is hard.
So instead, I’ll phrase my confusion this way: I have a hard time stating a belief for which even a surprising result to this measurement would matter. There are so many other reasons to recommend being raised by loving parents that “increased likelihood of murder from near-zero to still-near-zero” is unlikely to change such a preference.
And the overall murder rate is already so low that the reverse isn’t true either: you shouldn’t worry significantly less about an acquaintance murdering someone just because they have loving parents. Because in most cases you CANNOT worry less than you already should, which is near-zero.
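The point about near-zero priors staying near-zero can be put in numbers with Bayes’ rule; every rate below is invented purely for illustration, not a real criminology statistic:

```python
def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """Bayes' rule: P(H | E) from a prior and the two likelihoods."""
    joint_h = prior * likelihood_given_h
    joint_not_h = (1 - prior) * likelihood_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical numbers: a 1-in-10,000 base rate of "will commit murder",
# and suppose loving parents were half as common among murderers.
prior = 1e-4
p_loving_given_murderer = 0.4      # invented
p_loving_given_non_murderer = 0.8  # invented

updated = posterior(prior, p_loving_given_murderer, p_loving_given_non_murderer)
print(updated)  # ~5e-5: even a 2x likelihood ratio leaves it near zero
```

Even with a likelihood ratio of two, the posterior barely moves on an absolute scale, which is the sense in which a surprising result to this measurement is unlikely to change anyone’s decisions.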
I’m not really thinking in terms of particular issues; the more interesting questions in my mind are the issues that would arise in collecting such data.
Today is Autistic Pride Day, if you didn’t know. Celebrate by getting your fellow high-functioning autistic friends together to march around a populated area chanting “Aspie Power!” Preferably with signs that say “Neurotypical = manipulative”, “fake people aren’t real”, or something to that effect.
Kidding. (About everything after the first sentence, I mean.)
one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--
“Eric, you’ve got to come over and look at this!” Jerry explained excitedly into the phone. “It’s not those damn notebooks again, is it? I’ve told you, I could just write a computer program and you’d have all your damn results for the last year inside a week,” Eric explained sleepily for the umpteenth time. “No, no. Well… yes. But this is something new, you’ve got to take a look,” Jerry wheedled. “What is it this time? I know, it can calculate pi with 99.9 percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You’re fifty years too late.” “No, I’ve been trying something new. Come over.” Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would soothe his friend’s ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.
“And you see this pattern? The ones that are nearly diagonal here?” “Jerry, it’s all a bunch of digits to me. Are you sure you didn’t make a mistake?” “I double check all my work, I don’t want to go back too far when I make a mistake. I’ve explained the pattern twice already, Eric.” “I know, I know. But it’s Saturday morning, I’m going to be a bit—let me get this straight. You decided to apply the algorithm to its old output.” “No, not its own output, that’s mostly just pi. The whole pad.” “Jerry, you must have fifty of these things. There’s no way you can—” “Yeah, I didn’t go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway.” “Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?” “That’s not the point!” “Calm down, calm down. What’s the point then?” “The point is these patterns in the scratch work—” “The memory?” “Yeah, the memory.” “You know, if you’d just let me write a program, I—” “No! It’s too dangerous.” “Jerry, it’s a math problem. What’s it going to do, write pi at you? Anyway, I don’t see this pattern...” “Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?” “Jerry, you’d just get random numbers. Garbage in, garbage out.” “That’s the thing, they weren’t random.” “Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something… just joking! Oww, stop. I kid, kid!” “But, I didn’t get random numbers! I’m not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there’s a one or two.” “Okaaay?” “And if you write those down we have 2212221...” “Not very many threes?” “Ha ha. It’s the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. 
Although the digits are getting further spaced apart, so I don’t know how long it will stay faster than factoring.” “Huh. That’s actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let’s keep feeding it ones and see what happens. Want me to write a program? I hear there’s a cash prize for the larger ones.” “NO! I mean, no, that’s fine, Eric. I’d prefer you not write a program for this, just in case.” “Geez, Jerry. You’re so paranoid. Well, in that case can I help with the calculations by hand? I’d love to get my claim to fame somehow.” “Well… I guess that’s okay. First, you copy this digit from here to here...”
Episode of the show Outnumbered that might appeal to this community. The show in general is very funny, smart, and well acted, the children’s roles in particular.
I’m looking for some concept which I am sure has been talked about before in stats but I’m not sure of the technical term for it.
Let’s say you have a function you are trying to guess, with a certain range and domain. How would you talk about the amount of data you would need to be likely to recover the actual function from noisy data? My current thoughts are that the larger the cardinality of the domain, the more data you would need (in a simple relationship), and that the type of noise would determine how much the size of the range affects the amount of data you would need.
First, I would specify what set my ‘function’ is in. Are there 2 possibilities? 10? A million? log2(x) tells me how many bits of information I need. Then I would treat the data as coming to me through a noisy channel. How noisy? I assume you already know how noisy. Now I can plug in the noise level to Shannon’s theorem, and that tells me how many noisy bits I need to get my log2(x) bits.
(This all seems like very layman information theory, which makes me wonder if something makes your problem harder than this.)
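A back-of-envelope version of the calculation above, assuming a binary symmetric channel with a known flip probability (the candidate count and noise level here are made-up illustrations):

```python
import math

def bits_to_identify(num_possibilities):
    """Bits of information needed to pin down one of N equally likely functions."""
    return math.log2(num_possibilities)

def binary_entropy(p):
    """Entropy H(p) of a coin with bias p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def noisy_bits_needed(num_possibilities, flip_prob):
    """Shannon's noisy-channel bound for a binary symmetric channel:
    capacity is 1 - H(p), so each noisy bit carries that much information."""
    capacity = 1.0 - binary_entropy(flip_prob)
    return bits_to_identify(num_possibilities) / capacity

print(bits_to_identify(1024))        # 10 clean bits suffice for 1024 candidates
print(noisy_bits_needed(1024, 0.1))  # ~19 noisy bits when 10% get flipped
```

The second figure is a lower bound on the data required, not a recipe for extracting it; practical learning schemes generally need more.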
By data I meant training data (in a machine learning context), not information.
And it wasn’t really the math I was after, it is quite simple, just whether it had been discussed before.
My thoughts on the math: if the cardinality of the input is X and of the output is Y, then the size of the space of functions you are exploring is bounded by Y^X. E.g., there are 2^2 = 4 possible functions from 1 binary bit to another (set to 0, set to 1, invert, keep the same). I’ve come across this in simple category theory.
However, in order to fully specify which function it is (assuming no noise), you need a minimum of 2 pieces of training data (where training data means input-output pairs). If you have the training pair (0,0), you don’t know whether that means “keep the same” or “set to 0”. Fairly obviously, you need as many unique samples of training data as the cardinality of the domain. You need more when you have noise.
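The one-bit example can be checked exhaustively; this sketch enumerates all 2^2 = 4 functions and shows how many remain consistent after each training pair:

```python
from itertools import product

domain = [0, 1]

# All functions from {0,1} to {0,1}, represented as (f(0), f(1)) tuples:
# (0,0)=set-to-0, (0,1)=keep-the-same, (1,0)=invert, (1,1)=set-to-1.
all_functions = list(product([0, 1], repeat=len(domain)))

def consistent(functions, training_pairs):
    """Keep only the functions that agree with every (input, output) pair."""
    return [f for f in functions
            if all(f[x] == y for x, y in training_pairs)]

# The pair (0, 0) leaves both "set to 0" and "keep the same" standing...
print(consistent(all_functions, [(0, 0)]))
# ...and one sample per domain point pins the function down exactly.
print(consistent(all_functions, [(0, 0), (1, 0)]))  # only "set to 0" remains
```

With noise, each pair would only shift probability between the surviving candidates instead of eliminating them outright, which is where the channel-capacity reasoning above comes in.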
This is a less efficient way of getting information about functions than just getting the Turing number or similar.
So I’m really wondering if there are guidelines for people designing machine learning systems that I am missing. If you know you can only get 5,000 training examples, you know that a system which tries to learn over the entire space of functions on a domain much larger than 5,000 is not going to be very accurate unless you have put a lot of information into the prior/hypothesis space.
The closest thing I can think of is identifiability, although that’s more about whether it’s possible to identify a function given an arbitrarily large amount of data.
Hmm, not quite what I was looking for, but interesting nonetheless.
Thanks.
Physics question: Is it physically possible to take any given mass, like the moon, and annihilate the mass in a way that yields usable energy?
Fusion (mostly) does that. It works better with some elements, of course.
Yes, if you collide it with the same mass of antimatter. Edit: I don’t know enough to say if there are other ways.
This may not be very practical to do to the whole moon at once though :-)
Yeah but does it require a lot of energy/negentropy to get ahold of the necessary antimatter? I’m wondering whether the moon’s mass makes it analogous to a charged capacitor or an uncharged capacitor.
Antimatter is expensive to make. It would require the whole world GDP to make one anti-Liron. Conservation of energy says that to make an antiparticle, you need a collision with kinetic energy at least equal to the rest-mass energy of the antiparticle you’re making. Solar flares make some antimatter as they punch through the solar atmosphere, but good luck getting hold of it before it annihilates.
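For scale, the rest-mass energy works out as follows (the 70 kg “anti-Liron” mass is my own assumption for illustration):

```python
C = 299_792_458.0           # speed of light, m/s (exact by definition)
MEGATON_TNT = 4.184e15      # joules per megaton of TNT

def rest_mass_energy(mass_kg):
    """E = m * c**2: the minimum energy needed to create that much
    antimatter, and half of what it releases on annihilation (the
    other half comes from the matter it annihilates with)."""
    return mass_kg * C ** 2

# A hypothetical 70 kg anti-person (the mass is my assumption):
energy = rest_mass_energy(70.0)
print(f"{energy:.2e} J")                     # ~6.3e18 joules
print(f"{energy / MEGATON_TNT:.0f} Mt TNT")  # ~1500 megatons
```

That figure is only the thermodynamic floor; real production (e.g., antiprotons from accelerator collisions) captures a minuscule fraction of the input energy, which is why the cost is so absurd.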
The standard cosmological model says that shortly after the big bang, matter and antimatter existed in equal quantities, but there were interactions which favored the production of matter, and so all the antimatter was annihilated, leaving an excess of matter, which then in the next stage formed the first atomic nuclei. Antimatter is therefore rare in the universe. There are probably no natural antistars, for example. So it is expensive to come by, but (for a cosmic civilization) it might be a good way to store energy.
And if there are, we don’t know how to identify them from far away, do we?
BTW, can there be antimatter black holes? My limited understanding of physics is that matter/antimatter falling into a black hole passes the event horizon before it can interact with anything that fell into the hole in the past; and once it passes the event horizon, even if it mutually annihilates with something already in the black hole, the results can’t escape outside. So from the outside there’s no difference between matter, antimatter, and mixed black holes.
I saw this and immediately thought of the no hair theorem, which says that the only distinguishing (reference frame-independent) characteristics of black holes are their mass, their charge and their angular momentum. Turns out that Wikipedia uses matter v. antimatter black holes as an example of the theorem’s implications!
So if I find a natural antimatter star, and I’m afraid someone will use it as a weapon, the safest thing to do is to throw it into a black hole.
In other words, even if we collide a matter black hole and an antimatter black hole, we won’t see any evidence of mutual annihilation—we’ll just get a double-size black hole. Cool.
I’m sorry if I’m explaining the joke, but the rule of thumb is that this only saves you an order of magnitude of violence; 10% of the mass is released as radiation.
In fact, “throw it into a black hole” seems like a better answer to Liron’s question than “collide it with equally much antimatter.” It’s not as efficient, but it’s a lot easier to find black holes than antimatter. It may be easier in the annihilation case to actually use the energy, but I’m not sure.
Probably. If nothing else, for a given amount of energy released you will probably be able to stand closer to collect it in the antimatter case.
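A sketch of that trade-off, using the common ~10% rule of thumb for the radiative efficiency of accretion (an approximation, not a precise figure):

```python
c = 2.998e8        # speed of light, m/s
m_moon = 7.36e22   # mass of the moon, kg

# Annihilation in principle converts all of the rest mass to radiation;
# accretion onto a black hole radiates on the order of 10% of the
# infalling rest mass before it crosses the horizon.
annihilation_yield = m_moon * c**2
accretion_yield = 0.1 * m_moon * c**2
violence_saved = annihilation_yield / accretion_yield

print(violence_saved)   # 10.0: the "order of magnitude of violence"
```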
If an antimatter star (of one solar mass) were thrown at a matter star, how far away would the collision need to be for the ecosystem on Earth not to be seriously damaged?
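One crude way to bound the answer (using my own arbitrary damage threshold, not an established one): two solar masses annihilate to roughly 3.6e47 J, and if we demand the fluence at Earth stay below one extra year’s worth of sunlight, the inverse-square law gives a distance on the order of a hundred light-years. Real thresholds (gamma-ray dose, ozone destruction) would give different numbers:

```python
import math

c = 2.998e8               # speed of light, m/s
M_sun = 1.989e30          # solar mass, kg
solar_constant = 1361.0   # W/m^2 at Earth's orbit
year = 3.156e7            # seconds
light_year = 9.461e15     # meters

E = 2 * M_sun * c**2                 # both stars annihilate: ~3.6e47 J
threshold = solar_constant * year    # arbitrary "safe" fluence: one extra year of sunlight

# Inverse-square spreading: fluence = E / (4*pi*d^2)
d = math.sqrt(E / (4 * math.pi * threshold))
d_ly = d / light_year
print(d_ly)   # roughly 85 light-years under this (arbitrary) threshold
```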
If throwing antimatter stars around is difficult we may be able to resort to playing ‘asteroids’. That is, throw actual asteroids at it, resulting in smaller blasts of annihilation and probably in the star being fragmented, allowing further asteroids to finish the clean-up process.
Wouldn’t we need asteroids of total mass comparable to the anti-star? Where would we get enough? Any planets or asteroid belts around that star would be antimatter too, almost certainly.
Yes, you would need that much matter to be annihilated. But finding one system’s worth of mass is a (relatively) trivial part of the problem. It is a whole order of plausibility easier than trying to throw an antimatter star into a black hole. Taking apart the nearby systems and throwing the planets and asteroids at the offending star is just an engineering problem once you have that sort of tech. I could probably do it myself if you gave me 30,000 years to work out the finer details. You either push on the asteroid while standing on something bigger or you launch tiny things off the asteroid at large fractions of the speed of light in a suitable direction.
Throwing a whole antimatter star into a suitable black hole? I can’t even do that one in principle (within 2 minutes of thought). Apart from being really big and too hot to put propulsion devices on… it’s made out of F@#% antimatter. The obvious options for accelerating it are gravity and photons, neither of which care about the ‘matter/antimatter’ distinction. If you have enough gravity hanging about in the vicinity then the star is probably already falling into the black hole. And if you are planning on pushing a star about using only photons.… well, you may end up using more than just one star worth of matter to pull that off.
Then there is the problem of finding a suitably large black hole to throw it at. They tend to have stuff in their orbit (often the rest of the galaxy). Navigating an antimatter star to the black hole without it annihilating itself on the way there would be tricky. It isn’t easy to steer these things.
What may be easier is to dedicate a year or two run time on a Jupiter Brain to work out just the right size rock to throw at just the right time at just the right place. The resulting explosion would be chosen to knock the star in the right direction, or in the right pieces in the right directions, or whatever it is that antimatter stars do when you throw rocks at them. Then most of the destruction would be from it hitting the other stars that you aimed for. You would dispose of the weapon by triggering it in a controlled manner.
Wait… black holes keep their electrical charge? As in… if I shoot enough electrons at a black hole it will start to repel any negatively charged matter rather than attract it? No, that’d allow me to scout out information past the event horizon. Hmmm...
… Apparently charged black holes have two horizons, an event horizon and a Cauchy horizon. But I am still not sure what would happen in the case of a constant stream of electrons. Could someone with physics knowledge fill me in? What does happen when the black hole reaches a critical charge?
So if I give like charges to two different black holes, can I make them actually repel each other rather than attract? Hmmm. No, that would let me get information past the event horizon again...
(Disclaimer: I’m not a physicist, so this may be BS.) This might not be a problem. If a black hole repels negative charges, all that tells you is the black hole’s position and net charge, and AFAIK that kind of information is allowed to ‘escape’ the black hole: position is OK because that’s frame-dependent, and the no-hair theorem says it’s OK to know the net charge.
I am just speaking BS too but:
If a black hole can be charged sufficiently that it repels an electron rather than attracts via gravity then:
There will be a point at which the gravity is perfectly balanced by the repulsion of the negative charges.
Just past that point there will be a point where the electron is subject to a slight acceleration away from the center of the black hole.
If I shoot an electron at a suitable speed at such a black hole the electron will slow down and reverse in direction at a point determined by the initial speed and the acceleration. This point could be below the event horizon.
If such an electron hit something inside the event horizon it would not return to me.
This tells me something about things inside the event horizon.
The teacher says I am not allowed to discover things about the inside of the event horizon.
Something in the above scenario must not be right.
I think you will find that the charge repulsion never exceeds the gravitational attraction in this way. The mass of a black hole places a bound on how much charge it can have; if the bound is exceeded, you get a naked singularity. You may actually be rediscovering this!
ETA: The two horizons you mentioned earlier merge when this bound is reached. I suppose this means that if you try to shoot a charge into one of these “extremal” black holes, the charge will be repelled outside the event horizon. That would be a consistent way for everything to work out, so that the bound can never be violated. But I will have to check.
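For concreteness, the standard extremal Reissner–Nordström bound is Q_max = M·√(4πε₀G), about 8.6e−11 coulombs per kilogram of black hole. A quick sketch of what that means for a solar-mass hole (my arithmetic):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K = 8.988e9            # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
M_sun = 1.989e30       # kg
e_charge = 1.602e-19   # electron charge, C
e_mass = 9.109e-31     # electron mass, kg

# Extremal Reissner-Nordstrom bound: Q_max = M * sqrt(G / K).
# Beyond this the horizons would vanish (a naked singularity).
q_max = M_sun * math.sqrt(G / K)
n_electrons = q_max / e_charge
added_mass = n_electrons * e_mass

print(q_max)             # ~1.7e20 C
print(added_mass / M_sun)   # ~5e-22: the electrons' mass barely moves the bound
```

Note that the electrons you pump in to charge the hole add essentially no mass, so you can’t grow the bound by charging it; something else has to stop the electrons getting in.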
You could be right, but how? I inject enough electrons into the black hole to maintain it at as high a charge as possible. Then I launch more electrons from a platform that is doing a slingshot pass right by the event horizon. And I dedicate the energy from a nearby star to shooting photons at it to force the extra particles in...
And even forgetting extreme options: just why? If the black hole is not charged to a level that will repel electrons, it will attract them. Add more and they will just hover there without accelerating. Add more still and they will be repelled. This works unless weird math comes into play.
Red suggests discharge via Hawking radiation. I would not be able to rule out some sort of asymptotic increase in Hawking Radiation discharge toward my electron input rate. (Basically because I don’t know how Hawking Radiation works.)
I had never heard the term before but that is just where my thoughts were leading me.
Looking a bit more closely, it would seem that ‘strong’ forces would ensure there is always at least a tiny horizon at which even electrons couldn’t escape, no matter what the charge. (And if there weren’t, the thing would fall apart.) It just doesn’t matter how big you make the charge. ‘Squared’ just doesn’t cut it. So while the electrons would return from where even photons could not escape, they would still get stuck if they went deep enough. But I don’t know where things like strong forces start to break down...
BTW, it seems that a charged black hole will discharge via Hawking radiation.
Yes. Not from the star itself but rather from the interstellar dust (hydrogen atoms floating about, etc.). We would detect emissions from interactions at the boundary between ‘mostly empty but with bits of matter’ and ‘mostly empty but with bits of antimatter’.
So, uncharged capacitor?
The analogy is indeterminate. The energy is there, but in a matter-antimatter “capacitor” or “fuel cell”, you would need both ingredients to release it. So maybe it’s like half a charged capacitor.
There isn’t an answer to that unless we specify how we intend to consider using the moon. For the most part it isn’t analogous to either kind of capacitor, but we can construct scenarios for either case, I expect.
We could, for example, use the moon to store either gravitational or kinetic energy. That would make it fairly charged (but leaking charge over time...)
We could use the moon to store heat energy --> it’s uncharged.
As for direct annihilation of the mass to release energy… would you consider that to be analogous to a ‘capacitor’? Sounds like more of a ‘battery’ to me.
Well, I shouldn’t speak before checking. Taking numbers from Wikipedia (ETA fixed numbers):
The moon has a mass of 7.36e22 kg; converting it entirely to energy would yield about 6.6e39 J.
The Sun’s total output is about 3.86e26 J/s, so this is the equivalent of roughly 540,000 years of the Sun’s energy (if you have a Dyson sphere).
A nova releases ~1e34-1e37 J over a few days, at most about 1/600 as much as converting the moon to energy. A core-collapse supernova bursts 1e44-1e46 J of energy in 10 seconds, which is a lot more. (Range is according to different Google results.)
ETA: the numbers were completely wrong before and I corrected them.
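A quick sanity check of those figures (my arithmetic, with c ≈ 3e8 m/s):

```python
c = 2.998e8       # speed of light, m/s
m_moon = 7.36e22  # kg
L_sun = 3.86e26   # total solar output, W
year = 3.156e7    # seconds

energy = m_moon * c**2
years_of_sunlight = energy / L_sun / year

print(energy)              # ~6.6e39 J
print(years_of_sunlight)   # ~5.4e5 years of the Sun's entire output
```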
Your numbers seem to be off (e.g. 4.26e9 J/s would be truly minuscule). You probably meant 4.29e29 J/s, but then 5.74e5 years would be wrong. According to Wikipedia, the Sun’s energy output is 1.2e34 J/s, which is still at odds with both of your numbers.
I’d like to pose a sort of brain-teaser about Relativity and Mach’s Principle, to see if I understand them correctly. I’ll post my answer in rot13.
Here goes: Assume the universe has the same rules it currently does, but instead consists of just you and two planets, which emit visible light. You are standing on one of them and looking at the other, and can see the surface features. It stays at the same position in the sky.
As time goes by, you gradually get a rotationally-shifted view of the features. That is, the longitudinal centerline of the side you see gradually shifts. This change in view could result from the other planet rotating, or from your planet revolving around it while facing it. (Remember, both planets emit light, so you don’t see a different portion being in a shadow like the moon’s phases.)
Question: What experiment could you do to determine whether the other planet is spinning, or your planet is revolving around it while facing it?
My answer (rot13): Gurer vf ab jnl gb qb fb, orpnhfr gurer vf ab snpg bs gur znggre nf gb juvpu bar vf ernyyl unccravat, naq vg vf yvgreny abafrafr gb rira guvax gung gurer vf n qvssrerapr. Gur bayl ernfba bar zvtug guvax gurer’f n qvssrerapr vf sebz orvat npphfgbzrq gb n havirefr jvgu zber guna whfg gurfr gjb cynargf, juvpu sbez n onpxtebhaq senzr ntnvafg juvpu bar bs gurz pbhyq or pbafvqrerq fcvaavat be eribyivat.
Imagine a simplified scenario: only one planet. Is the planet rotating or not? You could construct a Foucault pendulum and see. It will show you a definite answer: either its plane of oscillation moves relatively to the ground or not. This doesn’t depend on distant stars. If your planet is heavy and dense like hell, you could see the difference between a “rotating” Kerr metric and a “static” Schwarzschild metric.
Of course, general relativity is generally covariant; any motion can be interpreted as free fall in some gravitational field, and moreover there is no absolute background spacetime with respect to which to measure acceleration. So you can likely find coordinates in which the planet is static and the pendulum’s movement is explained by a changing gravitational field. The price paid is that it will be necessary to postulate weird boundary conditions at infinity. It is possible that more versions of boundary conditions are acceptable in the absence of distant objects, and the question of whether the planet is rotating is then less well defined.
Carlo Rovelli in his Quantum Gravity (once I downloaded it from arXiv, now it seems unavailable, but probably it could still be found somewhere on the net) considers eight versions of Mach principle (MP). This is what he says (he has discussed the parabolic water surface of a rotating bucket before instead of two planets or Foucault pendula):
I think number 4 is especially relevant here. The boundary conditions or the global topology of the universe have to be taken into account, else the two-planet scenario is not entirely defined.
Edit: The last remark doesn’t make much sense after all. The planets aren’t thought to be too heavy and the dragging effect shouldn’t be too big, and its relation to boundary conditions isn’t straightforward. Nevertheless, the boundary conditions still play an important role (see my subcomment here).
Sure it does. If the rest of the objects in the universe were rotating in unison around the earth while the earth was still, that would be observationally indistinguishable from the earth rotating. The GR equations (so I’m told[1]) account for this in that, if the rest of the universe were treated as rotating, that would send gravitational waves that would jointly cause the earth to be still in that frame of reference.
Remove that external mass, and you’ve removed the gravity waves. Nothing cancels the gravity wave generated by the motion of the planets.
Yes, I think that agrees with my answer to the question.
[1] See here:
Let me write one more reply since I think my first one wasn’t entirely clear.
Let’s put all this into a thought experiment like this: Universe A contains only a light observer with a round bottle half full of water. Universe B contains all that, and moreover a lot of uniformly isotropically distributed distant massive stars. In both universes the spacetime region around the observer can be described by Minkowski metric. At the beginning, the observer sees that the water is spread near the walls of the bottle with a round vacuum bubble in the middle; this minimises the energy due to surface tension. Now, the observer gives the bottle some spin. Will the observation in universe A be different from that in universe B?
If GR is right, then no, it won’t. In both, the observers will see the water concentrated in the regions most distant from a specific straight line, which it is reasonable to call the axis of rotation. To see that, it is enough to realise that the distant stars influence the bottle only by means of the gravitational field, and it remains almost the same in both cases: approximately Minkowskian, assuming that the bottle and the observer aren’t of black hole proportions.
Of course one can then change the coordinates to those in which the bottle is static. With respect to these coordinates, the stars in universe B would rotate, and in universe A, well, nothing much can be said. But in both universes, we will find a gravitational field which creates precisely the effects of the rotation of the now static bottle. The stars are there only to distract the attention.
We can almost do the coordinate change in the Newtonian framework: it amounts to use of centrifugal force, which can be thought of as a gravitational force (it is universal in the same way as the gravitational force; of course, this is the equivalence principle). There are only two “minor” problems in Newtonian physics: first, orthodox Newtonianism recognises only gravitational force emanating from massive objects in the way described by Newton’s gravitational law, which is why the centrifugal force has to be treated differently, and second, there is the damned velocity dependent Coriolis force.
Edit: some formulations changed
Okay, I give up. I don’t know the math well enough to speak confidently on this issue. I was just taking the Machian principles in the article I linked and extrapolating them to the scenario I envisioned, using some familiarity with frame-dragging effects.
Still, I think it’s an interesting exercise in finding the implications of a universe without the background mass, and not as easy to answer as some initially assumed.
Yes, it’s interesting, I was confused for quite a while, still the answer is simpler than what I initially assumed, which makes it a good brain teaser.
This is not so simple. The force of the gravitational waves depends on the mass of the rest of the universe. One can easily imagine the same observable rest of the universe with a very different mass (just remove all the dark matter or so). Both can’t generate the same gravitational waves, but there would be no significant observable effect on Earth. The metric around here would be still more or less Schwarzschild (or Kerr). The fact that steady state can be interpreted as rotation whose effects are cancelled by gravitational waves has not necessarily much to do with the existence of other objects in the universe. Even in empty space, the gravitational waves can come from infinity.
So, while it’s true that there is no absolute space with respect to which one measures the acceleration, there are still Foucault pendula. Because there is no absolute space, to define what constitutes rotation using any particular coordinates would be absurd. But we can still quite reasonably define rotation (extend our present definition of rotation) by use of the pendulum, or bucket, or whatever similar device. Even in single-planet universes, there can be buckets with both flat and parabolic surfaces.
I have only a superficial understanding of GR, but nevertheless, your question seems a bit unclear and/or confused. A few important points:
Whether GR is actually a Machian theory is a moot point, because it turns out that Mach’s principle is hard to formulate precisely enough to tackle that question. See e.g. here for an overview of this problem: http://arxiv.org/abs/gr-qc/9607009
According to Mach’s original idea (whose relation to GR is still not entirely clear, and which is certainly not necessarily implied by GR), a necessary assumption for the “normal” behavior of rotational and other non-inertial motions is the large-scale isotropy of the universe, and the fact that enormous distant masses exist in every direction. If the only other mass in the universe is concentrated nearby, you’d see only weak inertial forces, and they would behave differently in different directions.
The geometry of spacetime in GR is not uniquely determined by the distribution of matter. You can have various crazy spacetime geometries for any distribution of matter. (As a trivial example, imagine you’re living in the usual Minkowski or Schwarzschild metric, and then a powerful gravitational wave passes by.) In this sense, GR is deeply anti-Machian.
That said, assuming nothing funny’s going on, in the scenario you describe, the classical limit applies, and the planets would move pretty much according to Newton’s laws. This means they’d both be orbiting around their common center of mass, so it’s not clear to me that the observations you listed would be possible. [ETA: please ignore this last point, my typing was faster than my thinking here. See the replies below.]
Therefore, the only way I can make sense of your example would be to assume that the other planet is much heavier than yours, and that the Schwarzschild metric applies and gives approximately Newtonian results, so we get something similar to the Moon’s rotation around the Earth. Is that what you had in mind?
I don’t understand. The listed observations are in accordance with Newton, whatever the masses of the planets.
Yes, you’re right. It was my failure of imagination. I thought about it again, and yes, even with similar or identical masses, the rotations of individual planets around their own axes could be set so as to provide the described view.
Couldn’t you tell whether your planet is revolving or rotating using a Foucault’s pendulum? I’m not sure whether you can get all the information about the planets’ relations with a complex set of Foucault’s pendula or not, but you could get some.
Also, I think your answer is a map-territory confusion. While GR does not distinguish certain types of motion from each other, and while GR seems to be the best model of macroscopic behavior we have, to claim that this means that there is really no fact of the matter seems a little overconfident.
The Foucault pendulum is able to measure earth’s rotation in part because of the frame established by the rest of the universe. But in the scenario I described, the frame dragging effect of one or both planets blows up your ability to use the standard equations. Would the corrections introduced by including frame-dragging show a solution that varies depending on which of the planets is “really” moving?
It’s the other way around. The fact that there is no test that would distinguish your location along a dimension means that no such dimension exists, and any model requiring such a distinction is deviating from the territory.
Yes, GR could be wrong, but for it to be wrong in a way such that e.g. you actually can distinguish acceleration from gravity would require more than just a refinement of our models; it would mean the universe up to this point was a lie.
SilasBarta:
This isn’t really true. In GR, you can in principle always distinguish acceleration from gravity over finite stretches of spacetime by measuring the tidal forces. There is no distribution of mass that would produce an ideally homogeneous gravitational field free of tidal forces whose effect would perfectly mimic uniform acceleration in flat spacetime. The equivalence principle holds only across infinitesimal regions of spacetime.
See here for a good discussion of what the equivalence principle actually means, and the overview of various controversies it has provoked:
http://www.mathpages.com/home/kmath622/kmath622.htm
Yes, I was just listing an offhand example of an implication of GR and I didn’t bother to specify it to full precision. My point was just that in order for a certain implication to be falsified (specifically, that there is no fact of the matter as to e.g. what the velocity of the universe is), you would need the laws of the universe to change, not just a refinement in the GR model.
I must admit I’m a little baffled by this. I’m pretty ignorant of GR, but I was strongly under the impression that
(a) the frame dragging effect was miniscule, and
(b) that Foucault’s pendulum works simply because there is no force acting on the pendulum to change the plane of its oscillation. Thus, a perfect polar pendulum on a planet in a universe with no other bodies in it will never have any force exerted on it other than gravity and will continue to swing in the same plane. If the planet is rotating, an observer on the planet will be able to tell this by observing the pendulum, even in the absence of any other body in the universe. Similarly, in the above paradox, an observer can tell whether their planet is revolving around the other planet while remaining oriented towards it, because the pendulum will rotate over the course of a “year”.
To appreciate how different things are when you remove the rest of the universe, consider this: what if the universe is just one planet with the people on it? How will a Foucault pendulum behave in that universe? Shouldn’t it behave quite differently, given that the rotation of the planet means the rotation of the entire universe, which is meaningless?
As Prase said above, that depends on the boundary conditions. As the clearest example, if you imagine a flat empty Minkowski space and then add a lightweight sphere into it, then special relativity will hold and observers tied to the sphere’s surface would be able to tell whether it’s rotating by measuring the Coriolis and centrifugal forces. There would be a true anti-Machian absolute space around them, telling them clearly if they’re rotating/accelerating or not. This despite the whole scenario being perfectly consistent with GR.
Rotation of the planets doesn’t mean rotation of the universe, don’t forget there are not only the planets, but also the gravitational field.
If the two planets aren’t revolving around each other, wouldn’t gravity pull them together? But maybe space is expanding at precisely the rate necessary to keep them at the same distance despite gravity? To test that, build a rocket on your planet and push it (the planet) slightly, either toward the other planet or away from it. If the planets are revolving around each other, you’ve just changed a circular orbit into an elliptical one, so you should see an oscillation in the distance between the two planets. If they are not revolving around each other, then they’ll either keep getting closer together or further apart, depending on which direction you made the push.
(This is all based on my physics intuition. Somebody who knows the math should write down the two equations and check if they’re isomorphic. :)
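That test can be sketched numerically. Here is a toy Newtonian two-body integration (my own construction, in G = 1 units with hypothetical equal masses): start on a circular orbit, apply a small tangential kick, and the separation oscillates instead of staying constant.

```python
import math

def simulate(kick, steps=20000, dt=0.001):
    """Relative two-body motion: r'' = -mu * r / |r|^3 with mu = G*(m1+m2).
    Starts on a circular orbit of radius 1, scaled by a tangential 'kick'
    factor. Returns (min, max) separation over the run (leapfrog, KDK)."""
    mu = 2.0                             # G = 1, m1 = m2 = 1
    x, y = 1.0, 0.0                      # initial separation d0 = 1
    vx, vy = 0.0, math.sqrt(mu) * kick   # circular speed times kick factor
    lo = hi = 1.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-mu * x / r3)  # half kick
        vy += 0.5 * dt * (-mu * y / r3)
        x += dt * vx                     # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-mu * x / r3)  # half kick
        vy += 0.5 * dt * (-mu * y / r3)
        r = math.hypot(x, y)
        lo, hi = min(lo, r), max(hi, r)
    return lo, hi

print(simulate(kick=1.00))   # circular: separation stays essentially constant
print(simulate(kick=1.05))   # after a push: separation swings up to ~1.23
```

So, at least in the Newtonian limit, the proposed rocket experiment does distinguish the two cases: the orbiting pair shows a bounded oscillation, while a non-orbiting pair would simply drift together or apart.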
Gravity would pull, yes, but the rotation of a body also distorts space in such a way as to produce another effect you have to consider.
ETA: Look at a similar scenario. Same as the one I proposed, but you always see the same portion of the other planet. How do you know how fast the two planets are revolving around each other? Isn’t this the same as asking how fast the entire universe is rotating?
Exactly as fast as needed to keep them in a circular orbit (assuming the distance to the second planet isn’t changing). For this, you can quite safely use Newton’s laws.
In general-relativistic language, what exactly do you mean by “how fast the entire universe is rotating”?
I mean nothing. In GR, the very question is nonsense. The universe does not have a position, just relative positions of objects.
The universe does not have a velocity, just relative velocities of various objects.
The universe does not have an acceleration, just relative accelerations of various objects.
The universe does not have a rotational orientation, just relative rotational orientations of various objects.
The universe does not have a rotational velocity, just relative rotational velocities of various objects.
There is no way in this universe to distinguish between a bucket rotating vs. the rest of the universe rotating around the bucket. There is also no such thing as how fast the universe “as a whole” is rotating.
I’m not sure if what you write makes sense. Take one simple example: a flat Minkowski spacetime, empty except for a few light particles (so that their influence on the metric is negligible). This means that special relativity applies, and it’s clearly consistent with GR.
Accelerated motions are not going to be relative in this universe, just like they aren’t in Newton’s theory. You can of course observe an accelerating particle and insist on using coordinates in which it remains in the origin (which is sometimes useful, as in e.g. the Rindler coordinates), but in this coordinate system, the universe will not have the above listed properties in any meaningful sense.
You write “In GR, the very question is nonsense. [0] The universe does not have a position, just relative positions of objects. [1] The universe does not have a velocity, just relative velocities of various objects. [2] The universe does not have an acceleration, just relative accelerations of various objects.” This passage incorrectly appeals to GR to lump together three statements that GR doesn’t lump together.
See http://en.wikipedia.org/wiki/Inertial_frames_of_reference and note the distinction there between “constant, uniform motion” and various aspects of acceleratedness. Your [0] and [1] describe changes within an inertial frame of reference, while [2] gets you to a non-inertial frame. Not coincidentally, your [0] and [1] are predicted by GR and are consistent with centuries of careful experiment, while [2] is not predicted by GR and is inconsistent with everyday observation with Mark I eyeballs. (With modern vehicles it’s common to experience enough acceleration in the vicinity of some low-friction system to notice that acceleration causes conservation of momentum to break down in ways that a constant displacement and/or uniform motion doesn’t.)
I ask, in return, that you read this. Eliezer Yudkowsky had argued that GR implies it’s impossible to measure the acceleration of the universe, and no one had objected. Now, EY is not the pope of rationality, but I suggest things aren’t as simple as you’re making them.
Your point just seems to be a version of the bucket argument: “acceleration must be real, because it has real, detectable, frame-independent consequences like breakage and pain and ficticious forces”. I think I posed the same challenge in an open thread a month or two ago. And as the link you gave says,
But under Mach’s principle (the version that says only relative motion is meaningful, and which GR agrees with), these consequences of acceleration you describe only exist because of the frame against which to describe the acceleration, which is formed by the (relatively!) non-accelerating rest of the universe. Therefore, if all of the universe were to accelerate uniformly, there would be no relative motion and therefore no experimental consequences, and we should regard the very idea as nonsense.
So if the universe were only you and your vehicle, you would not be able to notice joint accelerations of you and the vehicle, only acceleration of yourself relative to the vehicle.
Now, you can disagree with this application of Mach’s principle, but the observations you describe do not contradict it.
I should also add one of the great insights I got out of Barbour’s book The End of Time (from which EY got his love of Mach’s principle and timelessness). The insight is that the laws of physics do not change in a rotating reference frame. Rather, there is a way you can determine if any given object is not in uniform motion relative to the rest of the universe, and this method also allows you to define an “inertial clock” which gives you an appropriate measure of time.
Most importantly, if you are spinning around, and there’s some other object accelerating relative to the rest of the universe, this method allows you to detect its acceleration, no matter how much or in what way your own frame is moving!
Perhaps the root of our disagreement is that you think (?) that the GR field equations constrain their solutions to conform to Mach’s principle, while I think they admit many solutions which don’t conform to Mach’s principle, and that furthermore that Vladimir_M is probably correct in his sketch of a family of non-Mach-principle solutions.
EY’s article seems pretty clear about claiming not that Mach’s principle follows deductively from the equations of GR, but that there’s a sufficiently natural fit that we might make an inductive leap from observed regularity in simple cases to an expected identical regularity in all cases. In particular EY writes “I do not think this has been verified exactly, in terms of how much matter is out there, what kind of gravitational wave it would generate by rotating around us, et cetera. Einstein did verify that a shell of matter, spinning around a central point, ought to generate a gravitational equivalent of the Coriolis force that would e.g. cause a pendulum to precess.” I think EY is probably correct that this hasn’t been verified exactly—more on that below. I also note that from the numbers given in Gravitation, if you hope to fake up a reasonably fast rotating frame by surrounding the experimenter with a rotating shell too arbitrarily distant to notice, you may need a very steep quantity discount at your nonlocal Black-Holes-R-Us (Free Installation At Any Velocity), and more generally that apparently solutions which locally hide GR’s preferred rotational frame seem to be associated with very extreme boundary conditions.
You write “under Mach’s principle (the version that says only relative motion is meaningful, and which GR agrees with), these consequences of acceleration you describe only exist because of the frame against which to describe the acceleration, which is formed by the (relatively!) non-accelerating rest of the universe.” I think it would be more precise to say not “which GR agrees with” but “which some solutions to the GR field equations agree with.” Similarly, if I were pushing a Newman principle which requires that the number of particles in the universe be divisible by 2, I would not say “which GR agrees with” if there were any chance that this might be interpreted as a claim that “the equations of GR require an even number of particles.” Solutions to the GR field equations can be consistent with Mach’s principle, but I’m pretty sure that they don’t need to be consistent with it. The old Misner et al. Gravitation text remarks on how a point of agreement with Mach’s principle “is a characteristic feature of the Friedman model and other simple models of a closed universe.” So it seems pretty clear that as of 1971, there was no known requirement that every possible solution must be consistent with Mach’s principle. And (Bayes FTW!) if no such requirement was known in 1971, but such a requirement was rigorously proved later, then it’s very strange that no one has brought up in this discussion the name of the mathematical physicist(s) who is justly famous for the proof.
(I’m unlikely to look at The End of Time ’til the next time I’m at UTDallas library, i.e., a week or so.)
See also the conversational thread which runs through
http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb3
http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb8
http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kba
Refer to the Rovelli paper mentioned in this discussion:
This is a much stronger claim than the one you pretended I was making, that GR agrees with my selected Mach’s principle—rather, the pure relativity of the universe is the basic idea of GR, not something simply shared between Mach’s principle and GR (as with your modulo 2 example).
I did—Barbour.
Here’s another possible experiment. Send a robot to the other planet, cut it in half, and then build a beam to push the two halves apart. If that planet is rotating, then due to conservation of angular momentum, this should cause its rotation to slow down, and you’d see that. If the two planets are just revolving around each other, then you won’t observe such a slowdown in the apparent rotation of the other planet.
ETA: I’m pretty curious what the math actually says. Do we have any GR experts here?
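Not a GR answer, but the Newtonian bookkeeping behind the thought experiment can at least be sketched numerically (the mass, radius, spin rate, and push-out distance below are all made-up illustrative values):

```python
# Sketch: conservation of angular momentum, L = I * omega.
# Splitting a spinning planet and pushing the halves apart
# increases the moment of inertia I, so omega must drop.

M = 6.0e24       # planet mass, kg (illustrative)
R = 6.4e6        # planet radius, m (illustrative)
omega1 = 7.3e-5  # initial spin rate, rad/s (roughly one revolution per day)

I1 = 0.4 * M * R**2  # uniform solid sphere: I = (2/5) M R^2
L = I1 * omega1      # angular momentum, conserved by the internal push

d = 5 * R            # push each half out to distance d from the spin axis
# Crude model: treat the two separated halves as point masses at radius d.
I2 = 2 * (M / 2) * d**2
omega2 = L / I2      # new, slower spin rate

print(omega2 < omega1)  # True: the system visibly spins down
```

If the other planet isn’t spinning at all (just revolving), there’s no spin angular momentum for the push to redistribute, so no such slowdown would show up.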
Also, if you’ve asked the right question, would the stresses that would push the halves apart also show up as geological stresses?
check whether you are experiencing a centrifugal force.
Regarding your answer, standard physics seems to indicate that you can tell the difference, unless the laws of physics change to violate Newton’s laws when there are fewer than 3 bodies. Mach proposed this (I think), but people seem to doubt him.
The universe adheres to General Relativity, not Newton’s laws. What does GR say about the effect of spinning and revolving bodies?
Relativity says that as motion becomes very much slower than the speed of light, behavior becomes very similar to Newton’s laws. Everyday materials (and planetary systems) and energies give rise to motions very very much slower than the speed of light, so it tends to be very very difficult to tell the difference. For a mechanical experimental design that can be accurately described in a nontechnical blog post and that you could reasonably imagine building for yourself (e.g., a Foucault-style pendulum), the relativistic predictions are very likely to be indistinguishable from Newton’s predictions.
(This is very much like the “Bohr correspondence principle” in QM, but AFAIK this relativistic correspondence principle doesn’t have a special name. It’s just obvious from Einstein’s equations, and those equations have been known for as long as ordinary scientists have been thinking about (speed-of-light, as opposed to Galilean) relativity.)
Examples of “see, relativity isn’t purely academic” tend to involve motion near the speed of light (e.g., in particle accelerators, cosmic rays, or inner-sphere electrons in heavy atoms), superextreme conditions plus sensitive instruments (e.g., timing neutron stars or black holes in close orbit around each other), or extreme conditions plus supersensitive instruments (e.g., timing GPS satellites, or measuring subtle splittings in atomic spectroscopy).
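As a rough sense of scale for the correspondence claim above: relativistic corrections to mechanics typically enter at order (v/c)², which is minuscule at planetary speeds. (The 30 km/s figure is Earth’s orbital speed; the comparison itself is just back-of-the-envelope.)

```python
# Back-of-the-envelope: relativistic corrections scale like (v/c)^2.
c = 3.0e8        # speed of light, m/s
v_orbit = 3.0e4  # Earth's orbital speed, m/s (about 30 km/s)

correction = (v_orbit / c) ** 2
print(correction)  # ~1e-8: about one part in a hundred million
```

which is why it takes superextreme conditions or supersensitive instruments to see the difference.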
And the example I posited is a superextreme condition: the two bodies in question make up the entire universe, which amplifies the effects that are normally only observable with sensitive instruments. See frame-dragging.
Amplifies? The Schwarzschild spacetime (which behaves like a Newtonian gravitational field in the large-distance limit) needs only one point-like massive object. What do you expect as a non-negligible difference made by (non-)existence of distant objects?
The fact that there’s no longer a frame against which to measure local rotation in any sense other than its rotation relative to the frame of the other body. So it makes a big difference what counts as “the rest of the universe”.
People believed for quite a long period of time that the distant stars don’t provide a stable reference frame. That it is the Earth which rotates was shown by the Foucault pendulum or similar experiments, without referring to the outer stellar frame.
(two points, one about your invocation of frame-dragging upstream, one elaborating on prase’s question...)
point 1: I’ve never studied the kinds of tensor math that I’d need to use the usual relativistic equations; I only know the special relativistic equations and the symmetry considerations which constrain the general relativistic equations. But it seems to me that special relativity plus symmetry suffice to justify my claim that any reasonable mechanical apparatus you can build for reasonable-sized planets in your example will be practically indistinguishable from Newtonian predictions.
It also seems to me that your cited reference to wikipedia “frame-dragging” supports my claim. E.g., I quote: “Lense and Thirring predicted that the rotation of an object would alter space and time, dragging a nearby object out of position compared with the predictions of Newtonian physics. The predicted effect is small—about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive.”
You seem to be invoking the authority of standard GR to justify an informal paraphrase of a version of Mach’s principle (which has its own wikipedia article). I don’t know GR well enough to be absolutely sure, but I’m about 90% sure that by doing so you misrepresent GR as badly as one misrepresents thermodynamics by invoking its authority to justify the informal entropy/order/whatever paraphrases in Rifkin’s Entropy or in various creationists’ arguments of the form “evolution is impossible because the second law of thermo prevents order from increasing spontaneously.”
point 2: I’ll elaborate on prase’s “What do you expect as a non-negligible difference made by (non-)existence of distant objects?” IIRC there was an old (monastic?) thought-experiment critique of the Aristotelian claim that heavy bodies fall faster: what happens when you attach an exceedingly thin thread between two cannonballs before dropping them? Similarly, what happens to the rotational physics of two bodies alone in the universe when you add a single neutrino very far away? Does the tiny perturbation cause the two cannonballs discontinuously to have doubly-heavy-object falling dynamics, or the rotation of the system to discontinuously become detectable?
How would you measure the centrifugal force?
ETA: I’m not asking because I don’t know the standard ways to measure centrifugal force, I’m asking because the standard measurement methods don’t work when the universe is just two planets.
Calculate the gravitational force on the surface of a planet of the same size and mass as yours and compare with what you actually measure.
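A minimal sketch of that comparison, assuming the observer somehow knows G and the planet’s mass and radius (all numbers below are illustrative): on a spinning planet, the measured effective gravity at the equator falls short of the pure gravitational prediction by the centrifugal term ω²r.

```python
# Sketch: predicted gravitational acceleration vs. measured effective gravity.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 6.0e24      # planet mass, kg (illustrative)
R = 6.4e6       # planet radius, m (illustrative)
omega = 7.3e-5  # spin rate, rad/s (illustrative)

g_predicted = G * M / R**2               # from the law of gravity alone
g_measured = g_predicted - omega**2 * R  # equatorial value if the planet spins

discrepancy = g_predicted - g_measured   # equals omega^2 * R
print(discrepancy > 0)  # True: a nonzero gap signals rotation
```

Of course, without independent knowledge of G and M the observer can’t compute g_predicted in the first place, which is exactly the calibration worry raised in reply.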
What do you calibrate your equipment against?
The equipment is already calibrated. You have said that everything works in the same way as today, except the universe consists of two planets. I have interpreted that to mean that the observer already knows the value of the gravitational constant in units he can use. If the gravitational constant has to be independently measured first, then it is more complicated, of course.
Right: you know the laws of physics. You don’t know your mass though, and you don’t know any object that has a known mass. I posit this because, in the history of science, they made certain measurements that aren’t possible in a two-planet universe, and to assume you can calibrate to those measurements would assume away the problem.
But still, in the rotating scenario the attractive force wouldn’t be perpendicular to the planet’s surface, and this can be established without knowing the gravitational constant. If the planet is spherical and you already know what is perpendicular, of course.
If you’re revolving about the other planet, the direction of tidal forces on your planet should rotate as well. If both planets are fixed, the gradient on your planet should be constant.
edit: Nevermind, after seeing that you specified that the orbit is synchronous.
Kids experiment with ‘video playdates’
http://www.cnn.com/2010/TECH/innovation/06/11/video.playdate/index.html?hpt=Sbin
Looking forward to the inevitable ‘Could video playdates be making your child vulnerable to cyberpredators?’ follow-up.
Chatrouletteforkids.com
Lately I’ve been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would an FAI?