Good catch, it should be Beta(991, 11). The prior is uniform = Beta(1, 1) and the data is (990 successes, 10 failures).
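For concreteness, a quick check of the conjugate update (a minimal sketch; scipy is only used to summarize the posterior):

```python
# Conjugate Beta-Binomial update: uniform prior Beta(1, 1),
# data = 990 successes and 10 failures.
from scipy.stats import beta

a_prior, b_prior = 1, 1
successes, failures = 990, 10

a_post = a_prior + successes    # 991
b_post = b_prior + failures     # 11

posterior = beta(a_post, b_post)
print(a_post, b_post)           # 991 11
print(posterior.mean())         # 991 / 1002 ≈ 0.989
```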
The trouble with Bayes (draft)
How do you get the top portion of the second payoff matrix from the first? Intuitively, it should be by replacing Agent A’s payoff with the sum of the agents’ payoffs, but the numbers don’t match.
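For reference, here is the transformation I have in mind, applied to a made-up 2x2 game (the numbers are hypothetical, not the ones from the post):

```python
# Hypothetical 2x2 game: payoffs[(a, b)] = (Agent A's payoff, Agent B's payoff).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# The transformation I would expect: replace A's payoff with the sum of both payoffs.
altruistic = {moves: (a + b, b) for moves, (a, b) in payoffs.items()}
print(altruistic)
# {('C', 'C'): (6, 3), ('C', 'D'): (5, 5), ('D', 'C'): (5, 0), ('D', 'D'): (2, 1)}
```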
Most people are altruists but only to their in-group, and most people have very narrow in-groups. What you mean by an altruist is probably someone who is both altruistic and has a very inclusive in-group. But as far as I can tell, there is a hard trade-off between belonging to a close-knit, small in-group and identifying with a large, diverse but weak in-group. The time you spend helping strangers is time taken away from potentially helping friends and family.
Unbounded linear utility functions?
Like V_V, I don’t find it “reasonable” for utility to be linear in things we care about.
I will write a discussion topic about the issue shortly.
EDIT: Link to the topic: http://lesswrong.com/r/discussion/lw/mv3/unbounded_linear_utility_functions/
I’ll need some background here. Why aren’t bounded utilities the default assumption? You’d need some extraordinary arguments to convince me that anyone has an unbounded utility function. Yet this post and many others on LW seem to implicitly assume unbounded utility functions.
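To illustrate why the distinction matters, here is a toy calculation (the bounded utility below is an arbitrary choice of mine, not anything from the post): a gamble that pays 2^n with probability 2^-n has infinite expected utility under linear utility, but finite expected utility under any bounded utility.

```python
import math

# Gamble: pay 2^n with probability 2^-n, for n = 1, 2, ...
# Compare expected utility under linear utility vs. an (arbitrary) bounded utility.
def bounded_u(x, scale=100.0):
    # Bounded above by 1; the scale parameter is an arbitrary choice for illustration.
    return 1.0 - math.exp(-x / scale)

linear_eu = 0.0
bounded_eu = 0.0
for n in range(1, 200):
    p = 2.0 ** -n
    payoff = 2.0 ** n
    linear_eu += p * payoff            # each term adds exactly 1: diverges as terms are added
    bounded_eu += p * bounded_u(payoff)

print(linear_eu)    # 199.0 here, and it grows without bound with more terms
print(bounded_eu)   # converges to a small finite value (< 1)
```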
Let’s talk about Von Neumann probes.
Assume that the most successful civilizations exist digitally. A subset of those civilizations would selfishly pursue colonization; the most convenient means would be through Von Neumann machines.
Tipler (1981) pointed out that due to exponential growth, such probes should already be common in our galaxy; since we haven’t observed any, he concluded, we must be alone in the universe. Sagan and Newman countered that intelligent species should actually try to destroy probes as soon as they are detected. This counterargument, known as “Sagan’s response,” doesn’t make much sense if you assume that advanced civilizations exist digitally. For these civilizations, the best way to counter another race of Von Neumann probes is with their own Von Neumann probes.
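For a sense of the timescales behind Tipler’s argument (every number below is a rough assumption of mine, not his): even at modest probe speeds, sweeping the galaxy takes only on the order of ten million years, which is tiny compared to the age of the galaxy.

```python
# Back-of-the-envelope timescale behind the exponential-growth argument.
# Every parameter here is a rough assumption, for illustration only.
GALAXY_DIAMETER_LY = 100_000        # rough diameter of the Milky Way, light-years
PROBE_SPEED_FRACTION_C = 0.01       # assume probes travel at 1% of light speed
REPLICATION_STOPS = 1_000           # assumed number of stops to build daughter probes
REPLICATION_DELAY_YEARS = 1_000     # assumed delay per stop

travel_time = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C        # 1e7 years to cross the galaxy
total_time = travel_time + REPLICATION_STOPS * REPLICATION_DELAY_YEARS

print(f"{total_time:.1e} years to sweep the galaxy")             # ~1e7, vs. a galactic age of ~1e10
```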
Others (who are not identified in the Wikipedia article) have tried to explain the visible absence of probes by theorizing how civilizations might deliberately limit the expansion range of their probes. But why would any expansionist civilization even want to do so? One explanation would be to avoid provoking other civilizations. However, it still remains to be explained why the very first civilizations, which had no reason to fear other alien civilizations, would limit their own growth. Indeed, any explanation of the Fermi paradox has to explain why the very first civilization would not have already colonized the universe, given that it was likely aware of its uncontested claim.
The first civilization either became dominated by a singleton, or remained diversified into the space age. For the following theory, we have to assume the latter—besides, we should hope for our own sake that singletons don’t always win. If the civilization remains diverse, at least some of the factions transition to a digital existence, and given the advantages provided for civilizations existing in that form, we could expect the digitalized civilizations to dominate.
Digitalized civilizations still have a wide range of possible value systems. There could be hedonistic civilizations, which gain utility from having immense computational power for recreational simulations or for proving useless theorems, and there could also be civilizations more practically focused on survival. But any type of civilization has to act to preserve itself.
The details of the strategic interactions between digitalized civilizations depend on speculative physics and technology, particularly the economics of computation. If there are dramatic economies of scale in computation (for example, if quantum computers provide an exponential scaling of utility with cost), then it becomes plausible that distinct civilizations would cooperate. However, all known economies of scale have limits, in which case the most likely outcome is for distinct factions to maintain control of their own computing resources. Without such an incentive for cooperation, the civilizations would have to be wary of threats from one another.
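A toy way to see the cooperation incentive (the utility functions below are placeholders of mine, not a claim about real physics): if utility is superadditive in computing resources, two factions gain by pooling them; if it is merely linear, they don’t.

```python
# Toy comparison: when does pooling computational resources beat staying separate?
def u_exponential(c):
    return 2.0 ** c     # dramatic economies of scale (an idealized "exponential utility per cost")

def u_linear(c):
    return c            # no economies of scale

c1, c2 = 10, 12         # arbitrary resource levels of two factions

for u in (u_exponential, u_linear):
    separate = u(c1) + u(c2)
    pooled = u(c1 + c2)
    print(u.__name__, "pooling wins:", pooled > separate)
# u_exponential: pooling wins (2^22 >> 2^10 + 2^12), so cooperation is plausible
# u_linear: no gain from pooling, so each faction keeps control of its own resources
```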
Any digitalized civilization has to protect itself from being compromised from within. Rival civilizations with completely incompatible utility functions could still exploit each other’s computing resources. Hence, questions about the theoretical limitations of digital security and data integrity could be relevant to predicting the behavior of advanced civilizations. It may turn out to be easy for any civilization to protect a single computational site. However, any civilization expanding to multiple sites would face a much trickier security problem. Presumably, the multiple sites should be able to interact in some way, since otherwise, what is the incentive to expand? However, any interaction between a parent site and a child site opens the parent site (and therefore the entire network) to compromise.
Colonization sites near any particular civilization quickly become occupied, so a civilization seeking to expand would have to send a probe to a rather distant region of space. The probe should be able to independently create a child site, and eventually this child site should be able to interact with the parent site. However, this requires the probe to carry some kind of security credentials that would allow the child site to be authenticated by the parent site in the future. These credentials could potentially be compromised by an aggressor. The probe has a limited capacity to protect itself, so an aggressor could “capture” it without being detected by the probe itself; even if the probe has self-destruction mechanisms, they could be circumvented by a sufficiently sophisticated approach. A compromised probe would behave exactly the same as a normal probe and succeed in creating a child site. But after the compromised child site has started to interact with the parent, it can at some point launch an attack and capture the parent network on behalf of the aggressor.
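To make the credential problem concrete, here is a minimal sketch using a toy shared-key protocol of my own (nothing from the post): whatever secret the probe carries to let its future child site authenticate to the parent can be copied by whoever captures the probe, and the parent cannot tell the difference.

```python
# Toy illustration of the probe-credential problem: if authentication rests on a
# secret carried by the probe, a captured probe yields a perfectly convincing child site.
import hmac, hashlib, os

parent_child_key = os.urandom(32)          # secret loaded onto the probe before launch

def respond(key, challenge):
    # Challenge-response: prove possession of the shared key.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def parent_verifies(response, challenge):
    expected = hmac.new(parent_child_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = os.urandom(16)

# Legitimate child site, built by an uncompromised probe:
print(parent_verifies(respond(parent_child_key, challenge), challenge))   # True

# Aggressor who captured the probe and extracted its key:
stolen_key = parent_child_key              # capturing the probe == copying the secret
print(parent_verifies(respond(stolen_key, challenge), challenge))         # True, indistinguishable
```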
Due to these considerations, civilizations may be wary of sending Von Neumann probes all over the universe. Civilizations may still send groups of colonization probes, but the probes may delay colonization so as to hide their presence. One might imagine that a “cold war” is already in progress in the universe, with competing probes lying hidden even within our own galaxy, but lying in stalemate for billions of years.
Yet new civilizations are basically unaffected by the cold war: they have nothing to lose from creating a parent site. Nevertheless, once a new civilization reaches a certain size, it has too much to lose from making unsecured expansions.
But some civilizations might be content to simply make independent, non-interacting “backups” of themselves, and so have nothing to fear if their probes are captured. It still remains to be explained why the universe isn’t visibly filled with these simplistic “backup” civilizations.
Sociology, political science and international politics, economics (graduate level), psychology, psychiatry, medicine.
Undergraduate mathematics, Statistics, Machine Learning, Intro to Apache Spark, Intro to Cloud Computing with Amazon
Thanks—this is a great analysis. It sounds like you would be much more convinced if even a few people already agreed to tutor each other—we can try this as a first step.
That’s OK, you can get better. And you can use any medium which suits you. It could be as simple as assigning problems and reading, then giving feedback.
Peer-to-peer “knowledge exchanges”
This is an interesting counterexample, and I agree with Larry that using priors which depend on pi(x) is really no Bayesian solution at all. But if this example is really so problematic for Bayesian inference, can one give an explicit example of a function theta(x) for which no reasonable Bayesian prior is consistent? I would guess that only extremely pathological and unrealistic examples of theta(x) would cause trouble for Bayesians. What I notice about many of these “Bayesian non-consistency” examples is that they require consistency over very large function classes: hence they shouldn’t really scare a subjective Bayesian who knows that any function you might encounter in the real world would be much better behaved.
In terms of practicality, it’s certainly inconvenient to have to compute a non-parametric posterior just to do inference on a single real parameter phi. To me, the two practical aspects of actually specifying priors and actually computing the posterior remain the only real weaknesses of the subjective Bayesian approach (or the likelihood principle more generally).
PS: Perhaps it’s worth discussing this example as its own thread.
EDIT: Edited my response to be more instructive.
On some level it’s fine to make the kinds of qualitative arguments you are making. However, to assess whether a given hypothesis is really robust to parameters like the ubiquity of civilizations, colonization speed, and alien psychology, you have to start formulating models and actually quantify the size of the parameter space which would result in a particular prediction. A while ago I wrote a tutorial on how to do this:
http://lesswrong.com/lw/5q7/colonization_models_a_tutorial_on_computational/
which covers the basics, but to incorporate alien psychology you would have to formulate the relevant game-theoretic models as well.
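To be concrete, here is the kind of crude parameter sweep I have in mind (far cruder than the tutorial; all the ranges below are made up):

```python
# Sweep over assumed parameters and record what fraction of the parameter space
# predicts that a colonization front should already have reached us.
# All ranges below are made up for illustration.
import itertools

civ_densities = [1e-15, 1e-13, 1e-10]      # civilizations per cubic light-year (assumed)
colonization_speeds = [1e-4, 1e-3, 1e-2]   # fraction of light speed (assumed)
civ_ages = [1e7, 1e8, 1e9]                 # years since the first expansion-capable civ (assumed)

reaches_us = 0
total = 0
for density, speed, age in itertools.product(civ_densities, colonization_speeds, civ_ages):
    nearest_distance = (1.0 / density) ** (1.0 / 3.0)   # rough distance to the nearest civ, ly
    front_travelled = speed * age                       # how far its colonization front has spread, ly
    total += 1
    if front_travelled >= nearest_distance:             # crude "they should already be here" criterion
        reaches_us += 1

print(f"{reaches_us}/{total} parameter combinations predict visible colonization")
```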
The pitfall of the kinds of qualitative arguments you are making is that you risk confusing the fact that “I found a particular region of the parameter space where your theory doesn’t work” with the conclusion that “Your theory only works in a small region of the parameter space.” It is true that under certain conditions regarding ubiquity of civilizations, colonization speed, and alien diplomatic strategy, Catastrophe Engines end up being built on every star. However, you go on to claim that this outcome occurs in most of the parameter space, and that the Fermi Paradox is only observed in a small exceptional part of it. Given my experience with this kind of modeling, I predict that Catastrophe Engines actually are robust to all but the most implausible assumptions about the ubiquity of intelligent life, colonization speed, and alien psychology, but you obviously don’t need to take my word on it. On the other hand, you’d have to come up with some quantitative models to convince me of the validity of your criticisms. In any case, continuing to argue on a purely philosophical level won’t serve to resolve our disagreement.
The second civ would still avoid building them too close to each other. This is all clear if you do the analysis.
Thanks for the references.
I am interested in answering questions of “what to want.” Not only is it important for individual decision-making, but there are also many interesting ethical questions. If a person’s utility function can be changed through experience, is it ethical to steer it in a direction that would benefit you? Take the example of religion: suppose you could convince an individual to convert to a religion, and then further convince them to actively reject new information that would endanger their faith. Is this ethical? (My opinion is that it depends on your own motivations. If you actually believed in the religion, then you might be convinced that you are benefiting others by converting them. If you did not actually believe in the religion, then you are being manipulative.)
Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don’t believe these kinds of scenarios are extreme beyond belief—they could come up in transhuman contexts.) In this case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.
The second civilization would just go ahead and build them anyway, since doing so maximizes its own utility function. Of course, there is an additional question of whether and how the first civilization will try to stop this from happening, since the second civ’s Catastrophe Engines reduce the first civ’s utility. If the first civ ignores them, the second civ builds Catastrophe Engines the same way as before. If the first civ enforces a ban on Catastrophe Engines, then the second civ colonizes space using conventional methods. But most likely the first civ would eliminate the second civ (the “Berserker” scenario).
For the original proposal:
Explain:
A mechanism for explosive energy generation on a cosmic scale might also explain the Big Bang.
Invalidate:
Catastrophe Engines should still be detectable due to their extremely concentrated energy emission. A thorough infrared sky survey would rule them out along with more conventional hypotheses such as Dyson spheres.
If it becomes clear there is no way to exploit vacuum energy, this eliminates one of the main candidates for a new energy source.
A better understanding of the main constraints for engineering Matrioshka brains: if heat dissipation considerations already limit the size of a single brain, then there is no point in considering speculative energy sources.
There are some (including myself and presumably some others on this board) who see this practice as epistemologically dubious. First, how do you decide which aspects of the problem to incorporate into your model? Why should one only try to model E[Y|f(X)=1] and not the underlying function g(x) = E[Y|x]? If you actually had very strong prior information about g(x), say that “I know g(x) = h(x) with probability 1/2 or g(x) = j(x) with probability 1/2,” where h(x) and j(x) are known functions, then most statisticians would incorporate the underlying function g(x) in the model; and in that case, data for observations with f(X)=0 might be informative for whether g(x) = h(x) or g(x) = j(x). So if the prior is weak (as it is in my main post) you don’t model the function, and if the prior is strong, you model the function (and therefore make use of all the observations)? Where do you draw the line?
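To make the strong-prior case concrete, here is a toy simulation (h, j, and f are arbitrary choices of mine): h and j agree on the region where f(X)=1 and differ where f(X)=0, so the observations with f(X)=0 carry essentially all the information about which one is the true g.

```python
# Toy version of the "strong prior" case: g is either h or j with prior probability 1/2,
# and we compare the posterior using all observations vs. only those with f(X) = 1.
import numpy as np

rng = np.random.default_rng(0)

def h(x):  # hypothetical known function
    return np.where(x > 0.5, 0.5, 0.2)

def j(x):  # hypothetical known function: agrees with h where f(X) = 1, differs where f(X) = 0
    return np.where(x > 0.5, 0.5, 0.8)

def f(x):  # the selection indicator
    return (x > 0.5).astype(int)

# Simulate binary outcomes from the true function g = h
x = rng.uniform(size=2000)
y = rng.binomial(1, h(x))

def log_lik(gfun, x, y):
    p = gfun(x)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

for label, mask in [("all data", np.ones_like(x, dtype=bool)), ("f(X)=1 only", f(x) == 1)]:
    lh, lj = log_lik(h, x[mask], y[mask]), log_lik(j, x[mask], y[mask])
    post_h = 1.0 / (1.0 + np.exp(lj - lh))   # posterior P(g = h) under the 1/2-1/2 prior
    print(label, "-> P(g = h | data) =", round(float(post_h), 3))
# "all data" gives a posterior near 1; "f(X)=1 only" stays at 0.5,
# because the f(X)=0 observations are what distinguish h from j.
```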
I agree, most statisticians would not model g(x) in the cancer example. But is that because they have limited time and resources (and are possibly lazy), and because using an overcomplicated model would confuse their audience anyway? Or because they legitimately think that it’s an objective mistake to use a model involving g(x)?