(For thoroughness, noting that the other approach was also wondered about a little earlier. Surface action is an alternative to look at if projectile-launching would definitely be ineffective, but if the projectile approach would in fact be better then there’d be no reason not to focus on it instead.)
A fair point. On the subject of pulling vast quantities of energy from nowhere, does any one country currently possess the knowledge and materials to build a bomb that, detonated on the surface, could {split the Earth like a grape}/{smash the Earth like an egg}/{dramatic verb the Earth like a metaphorical noun}?
And yes, not something to try in practice with an inhabited location. Perhaps a computer model, at most… actually, there’s a thought regarding morbid fascination. I wonder what would be necessary to provide a sufficiently-realistic (uninhabited) physical (computer) simulation of a planet’s destruction when the user pulled meteors, momentum, explosives et cetera out of nowhere as they pleased. Even subtle things, like fiddling with orbits and watching the eventual collision and consequences… hm. Presumably/Hopefully someone has already thought of this at some point, and created such a thing.
Not directly related, but an easier question: Do we currently have the technology to launch projectiles out of Earth’s atmosphere into a path such that, in a year’s time or so, the planet smashes into them from the other direction and sustains significant damage?
(Ignoring questions of targeting specific points, just the question of whether it’s possible to arrange that without the projectiles falling into the sun or just following us eternally without being struck or getting caught in our gravity well too soon… hmm, if we could somehow put it into an opposite orbit then it could hit us very strongly, but in terms of energy… hmmm. Ah, and in the first place there’s the issue that even that probably wouldn’t hit with energy comparable to that of a meteor, though I am not an astrophysicist. In any case, definitely not something to do, but (as noted) morbidly fascinating if it turned out to be fairly easy to pull off. Just the mental image of all the ‘AUGH’ faces… again, not something one would actually want to do. )
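(A quick back-of-the-envelope sketch of the energy question, under the assumption that the projectile ends up in a retrograde orbit at roughly Earth’s own orbital speed; the numbers below are my own illustrative figures, not anything established in the discussion:)

```python
# Rough estimate of the impact energy per kilogram for a projectile
# placed in a retrograde Earth-like orbit (illustrative assumption).
EARTH_ORBITAL_SPEED = 29.8e3   # m/s, Earth's mean orbital speed
EARTH_ESCAPE_SPEED = 11.2e3    # m/s, picked up while falling into Earth's gravity well
TNT_SPECIFIC_ENERGY = 4.184e6  # J/kg

closing_speed = 2 * EARTH_ORBITAL_SPEED                         # head-on approach
impact_speed = (closing_speed**2 + EARTH_ESCAPE_SPEED**2) ** 0.5

specific_energy = 0.5 * impact_speed**2                         # J per kg of projectile
print(f"impact speed: {impact_speed / 1e3:.1f} km/s")
print(f"energy: {specific_energy / 1e9:.2f} GJ/kg "
      f"(~{specific_energy / TNT_SPECIFIC_ENERGY:.0f} kg of TNT per kg)")

# Typical asteroid impacts arrive at roughly 11-72 km/s, so on a
# per-kilogram basis this is comparable to a fast meteor; the difference
# from a large asteroid would lie entirely in the mass delivered.
```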
In practice, this seems to break down at a specific point: this can be outlined, for instance, with the hypothetical stipulation “...and possesses the technology or similar power to cross universe boundaries and appear visible before me in my room, and will do so in exactly ten seconds.”
As with the fallacy of a certain ontological argument, the imagination/definition of something does not make it exist, and even if a certain concept contains no apparent inherent logical impossibilities, that still does not mean that there could/would exist a universe in which it could come to pass.
‘All possible worlds’ does not mean ‘All imaginable worlds’. ‘All possible people’ does not mean ‘All imaginable people’. Past a certain threshold of specificity, one goes from {general types of people who exist almost everywhere, universally speaking} to {specific types of people who only exist in the imaginations of people like you who exist almost everywhere, universally speaking}.
(As a general principle, for instance/incidentally, causality still needs to apply.)
Edit:
Enjoyed the approach. (Absent(?) thought after reading: one can imagine someone, through a brain-scanner or similar, controlling a robot remotely. One can utter, through the robot, “I’m not actually here.”, where ‘here’ is the robot’s location from which one is doing the uttering, and ‘I’ (specifically ‘where I am’) refers to the location of one’s brain. The distinction between the claim ‘I’m not actually here’ and ‘I’m not actually where I am’ is notable. Ahh, the usefulness of technology. For belated communication, the part about intention is indeed significant, as with whether a diary is written in the present tense (time of writing) or in the past tense (‘by the time you read this[ I will have]’...).)
To ask the main question that the first link brings to mind: What prevents a person from paying both a life insurance company and a longevity insurance company (possibly the same company) relatively-small amounts of money each, in exchange for either a relatively-large payout from the life insurance if the person dies early or a relatively-large payout from the longevity insurance if the person dies late?
To extend, what prevents a hypothetically large number of people from on average creating this effect (even if each is disallowed from having both and must choose just one or the other), and so creating a guaranteed total loss overall on the part of an insurance company?
Thank you!
To answer the earlier question, an alteration which halved the probability of failure would indeed change an exactly-0% probability of success into a 50% probability of success.
If one is choosing between lower increases for higher values, unchanged increases for higher values, and greater increases for higher values, then the first has the advantage of not quickly giving numbers over 100%. I note though that the opposite effect (such as hexing a foe?) would require halving the probability of success instead of doubling the probability of failure.
The effect you describe, whereby a single calculation can give large changes for medium values and small changes for extreme values, is of interest to me: starting with (for instance) 5%, 50% and 95%, what exact procedure is taken to increase the log probability by log(2) and return modified percentages?
Edit: (A minor note that, from a gameplay standpoint, for things intended to have small probabilities one could just have very large failure-chance multipliers and so still have decreasing returns. Things decreed as effectively impossible would not be subject to dice rolling or similar in any case, and so need not be considered at length. In-game explanation for the function observed could be important; if it is desirable that progress begin slow, then speed up, then slow down again, rather than start fast and get progressively slower, then that is also reasonable.)
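(A minimal sketch of one procedure that produces the described behaviour, on the assumption that the quantity shifted by log(2) is the log-odds (logit) rather than the raw log-probability; this is my own reconstruction, not a description of the actual system in question:)

```python
def shift_log_odds(p, factor=2.0):
    """Add log(factor) to the log-odds of p -- equivalently, multiply
    the odds p/(1-p) by factor -- and convert back to a probability."""
    odds = p / (1.0 - p)
    return (odds * factor) / (1.0 + odds * factor)

for p in (0.05, 0.50, 0.95):
    print(f"{p:.0%} -> {shift_log_odds(p):.1%}")
# 5%  -> 9.5%   (small absolute change at the low extreme)
# 50% -> 66.7%  (largest change in the middle)
# 95% -> 97.4%  (small change at the high extreme; failure chance roughly halved)
```

This form can never push a probability past 100%, which matches the described diminishing returns at the high end.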
For what it’s worth, I’m reminded of systems which handle modifiers (multiplicatively) according to the chance of failure:
[quote]
For example, the first 20 INT increases magic accuracy from 80% to
(80% + (100% − 80%) * .01) = 80.2%
not to 81%. Each 20 INT (and 10 WIS) adds 1% of the remaining distance between your current magic accuracy and 100%. It becomes increasingly harder (technically impossible) to reach 100% in any of these derived stats through primary attributes alone, but it can be done with the use of certain items.
[/quote]
A clearer example might be that of a bonus which halves your chance of failure changing an 80% success likelihood to 90% success (20% failure to 10% failure), but another bonus of the same type changing that 90% success to only 95% success (10% failure to 5% failure). Notably, one could instead combine the bonuses first in the calculation, take a quarter of the 20% failure chance to get 5%, and reach the same end result.
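(A minimal sketch of that multiplicative treatment, using only the numbers already quoted above; purely illustrative:)

```python
def apply_failure_multiplier(success, multiplier):
    """Scale the failure chance (1 - success) by `multiplier`
    and return the new success chance."""
    return 1.0 - (1.0 - success) * multiplier

once = apply_failure_multiplier(0.80, 0.5)       # one halve-failure bonus: ~0.90
twice = apply_failure_multiplier(once, 0.5)      # a second bonus of the same type: ~0.95
combined = apply_failure_multiplier(0.80, 0.25)  # combining the bonuses first: also ~0.95
print(once, twice, combined)

# The quoted INT example is the same operation with a 0.99 multiplier:
# 80% magic accuracy with a 1%-of-remaining-distance bonus -> ~80.2%.
print(apply_failure_multiplier(0.80, 0.99))
```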
The Turing machine doing the simulating does not experience pain, but the human being being simulated does.
Similarly, the waterfall argument found in the linked paper seems as though it could as-easily be used to argue that none of the humans in the solar system have intelligence unless there’s an external observer to impose meaning on the neural patterns.
A lone mathematical equation is meaningless without a mind able to read it and understand what its squiggles can represent, but functioning neural patterns which respond to available stimuli causally(/through reliable cause-and-effect) are the same whether embodied in cell weights or in tape states. (So, unless one wishes to ignore one’s own subjective consciousness and declare oneself a zombie...)
For the actual-versus-potential question, I am doubtful regarding the answer, but for the moment I imagine a group of people in a closed system (say, an experiment room), suddenly (non-lethally) frozen in ice by a scientist overseeing the experiment. If the scientist were to later unfreeze the room, then perhaps certain things would definitely happen if the system remained closed. However, if it were never unfrozen, then they would never happen. Also, if they were frozen yet the scientist decided to interfere in the experiment and make the system no longer a closed system, then different things would happen. As with the timestream in normal life, ‘pain’ (etc.) is only said to take place at the moment that it is actually carried out. (And if one had all states laid out simultaneously, like a 4D person looking at all events in one glance from past to present, then ‘pain’ would only be relevant for the one point/section that one could point to in which it was being carried out, rather than in the entire thing.)
Now though, the question of the pain undergone by the models in the predicting scientist’s mind (perhaps using his/her/its own pain-feeling systems for maximum simulation accuracy) by contrast… hmm.
(Assuming that it stays on the line of ‘what is possible’, in any case a higher Y than otherwise, but finding it then according to the constant X: 1 − ((19/31) × (1/19)) = 30/31, yes...)
I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation, in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in the first situation the point selected can be favourable for Y! (This constitutes convincing on the underlying concept that I was doubtful of.) Thank you very much for the time taken to explain this to me!
Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimality and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a −0.35 utility loss for the liar in the first case and a −0.1 utility loss for the liar in the second.
Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken. )
I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that ‘(0.95, 0.4)’ is actually (0.95, 0.95)?
In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowing the choice of} (0.95, 0.95) over (0.6, 0.6), but in Figure 5, while X has something to gain by choosing/{allowing the choice of} (0.6, 0.4) over (0.5, 0.5), Y has nothing to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6).
Is there a rule(/process) that I have overlooked?
Going through the setup again, it seems as though in the first situation (0.95, 0.95) would be chosen while looking to X as though Y was charitably going with (0.95, 0.4) instead of insisting on the symmetrical (0.6, 0.6), and that in the second situation Y would insist on (0.4, 0.6), which appears to Y to be the symmetrical (0.6, 0.6), instead of going along with X’s desired (0.6, 0.4) or even the actually-symmetrical (0.5, 0.5) (since that would appear {non-Pareto-optimal}/{Pareto-suboptimal} to Y).
A very interesting perspective: Thank you!
‘I am still mystified by the second koan.’: The novice associates {clothing types which past cults have used} with cults, and fears that his group’s use of these clothing types suggests that the group may be cultish.
In practice (though the clothing may have an unrelated advantage), the clothing one wears has no effect on the validity of the logical arguments used in reasoning/debate.
The novice fears a perceived connection between the clothing and cultishness (where cultishness is taken to be a state of faith over rationality, or in any case irrationality). The master reveals the lack of effect of clothing on the subjects under discussion with the extreme example of the silly hat, pointing out the absurdity of wearing it affecting one’s ability to effectively use probability theory (or any practical use of rationality for that matter).
This is similar to the first koan, {in which}/{in that} what matters is whether the (mental/conceptual) tools actually /work/ and yield useful results.
The student, more-or-less enlightened by this, takes it to heart and serves as an example to others by always discussing important concepts in absurd clothing, to get across to his own students(, others whom he interacts with, et cetera) that the clothing someone wears has nothing to do with the validity/accuracy of their ideas.
(Or, at least, that’s my interpretation.)
Edit: A similar way of describing this may be to imagine that the novice is treating clothing-cult correlation as though it were causation, and the master points out with use of absurdity that there cannot be clothing->cult causation for the same reason that there cannot be silly_hat->comprehension causation. (What counts is the usefulness of the hammer and the validity of the theories used, rather than unrelated things which coincide with them.)
Depending on the cost, it at least seems to be worth knowing about. If one doesn’t have it then one can be assured on that point, whereas if one does have it then one at least has appropriate grounds on which to second-guess oneself.
(I have been horrified in the past by tales of {people who may or may not have inherited a dominant gene for definite early disease-related death} who all refused to be tested, thus dooming themselves to lives of fear and uncertainty. If they were going to have entirely healthy lives then they would have lived in fear and uncertainty instead of being able to enjoy them, and if they were going to die early then they would have lived in fear and uncertainty (and stressful, gradually-increasing denial/acceptance) rather than quickly getting used to the idea, resetting their baseline, getting their loose ends in order and living as appropriate for their expected remaining lifespan. Whether or not one does (or can do) anything about one’s state doesn’t change that having more information about oneself can (in most circumstances?) only be helpful.)
‘I haven’t seen a post on LW about the grue paradox, and this surprised me since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem.’:
If of relevance, note http://lesswrong.com/lw/q8/many_worlds_one_best_guess/ .
‘The second AI helped you more, but it constrained your destiny less.’: A very interesting sentence.
On other parts, I note that the commitment to a range of possible actions can be seen as larger-scale than commitment to a single action, even before the choice of which action to take is made.
A particular situation that comes to mind, though:
Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will make; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect what vote is made and fire a projectile through the head of X if X makes one vote rather than another (nothing happening otherwise).
Let it be given that in every universe that X votes that certain way, X is immediately killed as a result. It can also safely be assumed that in those universes Y is arrested for murder.
In a certain universe, X votes the other way, but the machine is later discovered. No direct interference with X has taken place, but Y who set up the machine (pointed at X’s head, X’s continued life unknowingly dependent on X’s vote) presumably is guilty of a felony of some sort (which one, though, I wonder?).
Regardless of motivation, to have committed to potentially carry out a certain thing against X is treated as similarly serious to that of in fact having it carried out (or attempted to be carried out).
(This, granted, may focus on a concept within the above article without addressing the entire issue of planning another entity’s life.)
Thought 1: If hypothetically one’s family was going to die in an accident or otherwise (for valid causal wish-unrelated reasons), the added mental/emotional effect on oneself would be something to avoid in the first place. Given that one is not infallible, one can never assert absolute knowledge of non-causality (direct or indirect), and that near-infinitesimal consideration could haunt one. Compare this possibility to the ease, normally, of taking other routes and thus avoiding that risk entirely.
...other thoughts are largely on the matter of integrity… respect and love felt for family members, thus not wishing to badmouth them or officially express hope for their death even given that neither they nor anyone else could hear it… hmm.
Pragmatically, one could cite a concern regarding taken behaviours influencing ease of certain thoughts: I do not particularly want to become someone who can more easily write a request that my family members die.
There are various things that I might wish for that I would not carry out if I had the power to directly (and secretly) do so, but generally if doing such a thing I would prefer to wish for something I actually wanted (/would carry out if I had the power to do so myself), on the off-chance that, if some day I made such a wish within another’s knowledge, the other might be inclined to help me reach it in some way.
Given the existence of compensation, there is yet the question of what compensation would be sufficient to make me do something that made me feel sullied. Incidentally, I note there are many things that would make others feel sullied that I would do with no discomfort at all.
...a general practice of acting in a consistent way… a perception of karma not as something which operates outside normal causality, but instead similar-to-luck just those parts of normal causality that one cannot be aware of… ah, I’ve reached the point of redundancy were I to continue typing.
Running through this to check that my wetware handles it consistently.
Paying −100 if asked:
When the coin is flipped, one’s probability branch splits into a 0.5 of oneself in the ‘simulation’ branch and 0.5 in the ‘real’ branch. For the 0.5 in the real branch, upon waking there is a subjective 50% probability of its being either of the two possible days, both of which will be woken on. So, 0.5 of the time waking in simulation, 0.25 waking in real 1, 0.25 waking in real 2.
0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change over the course of a single choice, and doesn’t take into account that Omega is waking you multiple times for the worse choice.
An equation for relating choice made to expected gain/loss at the end of the experiment doesn’t ask ‘What is my expected loss according to which day in reality I might be waking up in?‘, but rather only ‘What is my expected loss according to which branch of the coin toss I’m in?’ 0.5 x (260) + 0.5 x (-100-100) = 30.
Another way of putting it: 0.5 x (260) + 0.25 x (-100(-100)) + 0.25 x (-100(-100)) = 30 (Given that making one choice in a 0.25 branch guarantees the same choice made, separated by a memory-partition; either you’ve already made the choice and don’t remember it, or you’re going to make the choice and won’t remember this one, for a given choice that the expected gain/loss is being calculated for. The ‘-100’ is the immediate choice that you will remember (or won’t remember), the ‘(-100)’ is the partition-separated choice that you don’t remember (or will remember).)
--Trying to see what this looks like for an indefinite number of reality wakings: 0.5 x (260) + n x (1/n) x (1/2) x (-100 x n) = 130 - (50 x n), which is of the form that might be expected.
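(A minimal sketch verifying the per-experiment accounting above, with the 260/-100 payoffs taken from the comment; this only checks the arithmetic, not the underlying decision theory:)

```python
import random

def expected_value_per_experiment():
    # 'Simulation' branch (0.5): a single payout of 260.
    # 'Real' branch (0.5): woken twice, paying 100 each time.
    return 0.5 * 260 + 0.5 * (-100 - 100)   # = 30

def expected_value_per_waking():
    # Averaging per waking instead answers a different question: = 80.
    return 0.5 * 260 + 0.25 * (-100) + 0.25 * (-100)

def simulated_per_experiment(trials=1_000_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        if rng.random() < 0.5:
            total += 260          # simulation branch
        else:
            total += -100 - 100   # real branch: pay at both wakings
    return total / trials

print(expected_value_per_experiment())        # 30.0
print(expected_value_per_waking())            # 80.0
print(round(simulated_per_experiment(), 1))   # ~30, up to sampling noise
```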
(Edit: As with reddit, frustrating that line breaks behave differently in the commenting field and the posted comment.)