All this data says is that between 90% and 94% of people who are convinced not to jump did not go on to successfully commit suicide at a later date. It would be a big mistake to assume that whether or not you would come to regret your choice is 100% independent of whether or not you can be convinced not to jump, and that therefore the fraction of people who came to regret committing suicide is the same as the fraction who would have come to regret committing suicide if they had failed their attempt.
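As a toy illustration of how that dependence could skew things, here is a minimal sketch with entirely made-up numbers, assuming a population split into people who can be talked down and people who can't:

```python
# Hypothetical split of would-be jumpers (all numbers invented for illustration):
persuadable = 1000   # can be talked down; assume 92% of these would regret the attempt
determined  = 1000   # cannot be talked down; assume only 30% of these would regret it

regret_among_talked_down = 0.92  # what a survey of the talked-down group would show
regret_among_everyone = (0.92 * persuadable + 0.30 * determined) / (persuadable + determined)

print(f"Regret rate among those talked down: {regret_among_talked_down:.0%}")   # 92%
print(f"Regret rate among all would-be jumpers: {regret_among_everyone:.0%}")   # 61%
```

If regret and persuadability are correlated like this, the survey figure describes only the persuadable group and says little about the people who were never talked down.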
Survey = taken.
For the Newton question, I got the thousands, tens and ones place correct, but flubbed the hundreds place. 60% confidence. Not sure if I should feel bad about that.
I feel it would be.
Well, it could mean that you think the climate is going to get colder, or that the mean temperature will remain constant while specific regions grow unusually hot/cold, or that the planet will undergo a period of human-caused warming followed by ice sheets melting and then cooling, or any number of other theories. Most of them are fairly unlikely, of course, but P(any climate change at all) > P(global warming).
There are two components to it, really:
People perceive exposure to a bad medicine as being much harder to correct than exposure to a bad idea. It feels like you can always “just stop believing” if you decide something is false, even though this has been empirically demonstrated to be much more difficult than it feels like it should be.
Further, there’s an unspoken assumption (at least for ideas-in-general) that other people will automatically ignore the 99% of the ideaspace that contains uniformly awful or irrelevant suggestions, like recommending that you increase the tire pressure in your car to make it more likely to rain, and other obviously wrong ideas like that. Medicine doesn’t get this benefit of the doubt, as humans don’t naturally prune their search space when it comes to complex and technical fields like medicine. It’s outside our ancestral environment, so we’re not equipped to automatically discard “obviously” bad drug ideas just from reading the chemical makeup of the medicine in question. Only with extensive evidence will a layman even begin to entertain the idea that ingesting an unfamiliar drug would be beneficial to them.
You make breaks in the comment box with two returns.
Just one will not make a line break.
As to your actual question, you should probably check your state’s laws about wills. I don’t know if Louisiana allows minors to write a will for themselves, and you will definitely want one saying that your body is to be turned over to the cryonics agency of your choice (usually either the Cryonics Institute or Alcor) upon your death. You’ll also probably want to get a wrist bracelet or dog tags informing people to call your cryonicist in the event that you’re dead or incapacitated.
It all depends on why you decide to torrent/not torrent:
Are you more likely to torrent if the album is very expensive, or if it is very cheap? If you expect it to be of high quality, or of low quality? If the store you could buy the album at is far away, or very close? If you like the band that made it, or if you don’t like them? Longer albums or shorter? Would you torrent less if the punishment for doing so was increased? Would you torrent more if it was harder to get caught? What if you were much richer, or much poorer?
I’m confident that if you were to analyze when you torrent vs. when you buy, you’d notice trends that, with a bit of effort, could be translated into a fairly reasonable “Will I Torrent or Buy?” function that predicts whether you’ll torrent or not with much better accuracy than random.
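If you wanted to make that literal, a minimal sketch might look like fitting a simple classifier to your own past decisions; the features and data below are hypothetical placeholders, and scikit-learn is just one convenient way to do the fit:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [price_usd, expected_quality_1to5, store_distance_km, like_the_band]
past_albums = [
    [15.0, 4, 2.0, 1],
    [25.0, 2, 30.0, 0],
    [10.0, 5, 1.0, 1],
    [30.0, 3, 15.0, 0],
]
bought = [1, 0, 1, 0]  # 1 = bought the album, 0 = torrented it

will_i_buy = LogisticRegression().fit(past_albums, bought)

# Estimated probability of buying a $20 album of middling expected quality,
# sold 5 km away, by a band you like:
print(will_i_buy.predict_proba([[20.0, 3, 5.0, 1]])[0][1])
```

Even a crude model like this will usually beat a coin flip, which is all the point requires.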
Yep, definitely needs some clarification there.
Humans don’t distinguish between the utility for different microscopic states of the world. Nobody cares if air molecule 12445 is shifted 3 microns to the right, since that doesn’t have any noticeable effects on our experiences. As such, a state (at least for the purposes of that definition of utility) is a macroscopic state.
“~X” means, as in logic, “not X”. Since we’re interested in the negative utility of the floor being clear, in the above case X is “the airplane’s floor being clear” and ~X is “the airplane’s floor being opaque but otherwise identical to a human observer”.
In reality, you probably aren’t going to get a material that is exactly the same structurally as the clear floor, but that shouldn’t stop you from applying the idea in principle. After all, you could probably get reasonably close by spray painting the floor.
To steal from Hofstadter, we’re interested in the positive utility derived from whatever substrate level changes would result in an inversion of our mind’s symbol level understanding of the property or object in question.
A thing has negative utility equal to the positive utility that would be gained from that thing’s removal. Or, more formally, for any state X such that the utility of X is Y, the utility of the state ~X is -Y.
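As a minimal sketch of that convention (the state labels and numbers here are just placeholders, not anything from the original discussion):

```python
# Utility is assigned to macroscopic states; microscopic details are ignored.
utility = {
    "the airplane's floor is clear": -10.0,  # hypothetical value for an unnerving clear floor
}

def utility_of_negation(state: str) -> float:
    # U(~X) = -U(X): removing a thing is worth exactly as much as the disutility it caused.
    return -utility[state]

print(utility_of_negation("the airplane's floor is clear"))  # 10.0
```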
Don’t we all?
Changing “matrix” to “light cone” changes little, since I still don’t expect to ever interact with them. The light cone example is only different insofar as I expect more people in my light cone to (irrationally) care about people beyond it. That might cause me to make some token efforts to hide or excuse my apathy towards the 3^^^3 lives lost, but not to the same degree as even 1 life lost here inside my light cone.
If you accept that someone making a threat in the form of “I will do X unless you do Y” is evidence for “they will do X unless you do ~Y”, then by the principle of conservation of evidence, you have evidence that everyone who ISN’T making a threat in the form of “I will do X unless you do Y” will do X unless you do Y, for all values of X and Y that you accept this trickster hypothesis for. And that is absurd.
Maybe I’m missing the point here, but why do we care about any number of simulated “people” existing outside the matrix at all? Even assuming that such people exist, they’ll never affect me, nor affect anyone in the world I’m in. I’ll never speak to them, they’ll never speak to anyone I know and I’ll never have to deal with any consequences for their deaths. There’s no expectation that I’ll be punished or shunned for not caring about people from outside the matrix, nor is there any way that these people could ever break into our world and attempt to punish me for killing them. As far as I care, they’re not real people and their deaths or non-deaths do not factor into my utility function at all. Unless Pascal’s mugger claims he can use his powers from outside the matrix to create 3^^^3 people in our world (the only one I care about) and then kill them here, my judgement is based solely on the fact that U(me losing $5) < 0.
So, let’s assume that we’re asking about the more interesting case and say that Pascal’s mugger is instead threatening to use his magic extra-matrix powers to create 3^^^3 people here on Earth, one by one, and that they’ll each go on international television, denounce me for being a terrible person, ask me over and over why I didn’t save them and then explode into chunks of gore where everyone can see it before fading back out of the matrix (to avoid black hole concerns), and that all of this can be avoided with a single one-time payment of $5. What then?
I honestly don’t know. Even one person being created and killed that way definitely feels worse than imagining any number of people outside the matrix getting killed. I’d be tempted on an emotional level to say yes and give him the money, despite my more intellectual parts saying this is clearly a setup and that something that terrible isn’t actually going to happen. 3^^^3 people, while I obviously can’t really imagine that many, is only worse since it will keep happening over and over and over until after the stars burn out of the sky.
The only really convincing argument, aside from the argument from absurdity (“That’s stupid, he’s just a philosopher out trying to make a quick buck.”), is Polymeron’s argument here.
“When is pain worst?” is an important and deeply related question which is, fortunately for us, much easier to examine directly. It feels worse to have a papercut than to have an equally painful, but ultimately more damaging, cut elsewhere. It feels worse to have a chronic pain that will not go away than to feel a brief pain that goes away shortly after. It feels worse if I am unable to fix the injury that is causing me pain. It feels unfair, awful and even unbearable to ache for days during a bad bout of the flu. I know that the pain doesn’t serve me any useful purpose; it isn’t going to make me any less likely to catch the flu, which makes it all the worse. Likewise, if I’ve hurt my foot and it keeps on hurting even as I walk over to the cabinet to get out a bandage and some disinfectant, that is worse than a pain that hurts just as badly for a second and then stops once I’ve stopped doing the thing that injured me.
This seems to indicate that pain is worst when there is a conflict between my objective assessment of my injuries and what my “gut” tells me about my injuries through the proxy of pain. There was a post somewhere on this site, perhaps by Yvain (and I’d thank anyone who could find it and link it), about how there is not a single unitary self, but rather many separate selves. I suspect that the main reason why pain is bad is due to a conflict between these many parts of me. Pain is at its worst when the “rational, far-mode assessor of injury” me is at odds with the “hindbrain, near-mode tabulator of pain nerves” me. The former has no way to get the near-mode brain to stop sending pain signals even after my initial panic at being injured has been overridden and all useful steps towards recovery and preventing future injuries have been taken, while the latter can’t keep the far-mode brain from constantly ignoring these dire warnings about how my nerves are reporting bad things, how my skin is cut, how I’m probably bleeding, how it hasn’t stopped and how I need to lie still and wait to heal. The two argue in circles somewhere in the back of my mind, leaving me with a certain unease that I can’t put at rest either by lying still and resting or by ignoring my pain and going on with my day.
That is why pain is bad. Because, on some level or another, it causes different parts of me to come into conflict, neither being able to overcome the other (and neither should, for I shudder to think of what would happen if one did win out) and neither being able to let me rest.
One method I’ve seen no mention of is distraction from the essence of an argument with pointless pedantry. The classical form is something along the lines of “My opponent used X as an example of Y. As an expert in X, which my opponent is not, I can assure you that X is not an example of Y. My opponent clearly has no idea how Y works and everything he says about it is wrong.” This only holds true if X and Y are in the same domain of knowledge.
A good example: Eliezer said in the first paragraph that a geologist could tell a pebble from the beach from one from a driveway. As a geologist, I know that most geologists, myself included, honestly couldn’t tell the difference. Most pebbles found in concrete, driveways and so forth are taken from rivers and beaches, so a pebble that looks like a beach pebble wouldn’t be surprising to find in someone’s driveway. That doesn’t mean that Eliezer’s point is wrong, since he could have just as easily said “a pebble from a mountaintop” or “a pebble from under the ocean” and the actual content of this post wouldn’t have changed a bit.
In a more general sense, this is an example of assuming an excessively convenient world to fight the enemy’s arguments in, but I think this specific form bears pointing out, since it’s a bit less obvious than most objections of that sort.
The correct moral response to the king’s sadistic choice (in any of the 4 forms mentioned) is not to sacrifice yourself OR to let the other 10 die instead. The correct answer is that you, knowing the king was doing this, should have founded/joined/assisted an organization devoted to deposing the evil king and replacing him with someone who isn’t going to randomly kill his subjects.
So too with charity. The answer isn’t to sacrifice all of your comforts and wealth to save the lives of others, but to assist with, petition for and otherwise attempt to enact sanctions, reforms and revolutions to force the leaders of the world’s most impoverished nations to end the policies that are leading to their populations starving to death. There is already enough food to feed everyone in the world twice over; it is simply a matter of making sure that nobody is prevented from obtaining it by a cruel or uncaring outside institution.
True enough, but once we step outside of the thought experiment and take a look at the idea it is intended to represent, “button gets pressed” translates into “humanity gets convinced to accept the machine’s proposal”. Since the AI-analogue device has no motives or desires save to model the universe as perfectly as possible, P(a bit flips in the AI that leads to it convincing a human panel to do something bad) necessarily drops below P(a bit flips anywhere that leads to a human panel deciding to do something bad) and is discountable for the same reason we ignore hypotheses like “Maybe a cosmic ray flipped a bit to make it do that?” when figuring out the source of computer errors in general.
The answer to that depends on how the time machine inside works. If it’s based on a “reset unless a message from the future is received saying not to” sort of deal, then you’re fine. Otherwise, you die. And neither situation has an analogue in the related AI design.
Hello! quinesie here. I discovered LessWrong after being linked to HP&MoR, enjoying it and then following the links back to the LessWrong site itself. I’ve been reading for a while, but, as a rule, I don’t sign up with a site unless I have something worth contributing. After reading Eliezer’s Hidden Complexity of Wishes post, I think I have that:
In the post, Eliezer describes a device called an Outcome Pump, which resets the universe repeatedly until the desired outcome occurs. He then goes on to describe why this is a bad idea, since it can’t understand what it is that you really want, in a way that is analogous to an unFriendly AI being programmed to naively maximize something (like paper clips) that humans say they want maximized, even when they really want something much more complex that they have trouble articulating well enough to describe to a machine.
My idea, then, is to take the Outcome Pump and make a 2.0 version that uses the same mechanism as the original Outcome Pump, but with a slightly different trigger mechanism: the Outcome Pump resets the universe whenever a set period of time passes without an “Accept Outcome” button being pressed to prevent the reset. To convert back to AI theory, the analogous AI would be one which simulates the world around it, reports the projected outcome to a human and then waits for the results to be accepted or rejected. If accepted, it implements the solution. If rejected, it goes back to the drawing board and crunches numbers until it arrives at the next non-rejected solution.
This design could of course be improved upon by adding in parameters to automatically reject outcomes which are obviously unsuitable, or which contain events that, ceteris paribus, we would prefer to avoid, just as with the standard Outcome Pump and its analogue in unFriendly AI. The chief difference between the two is that the failure mode for version 2.0 isn’t a catastrophic “tile the universe with paper clips/launch mother out of burning building with explosion” but rather the far more benign “submit utterly inane proposals until given more specific instructions or turned off”.
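To make the proposed loop concrete, here is a minimal sketch in Python; the proposal generator, review step and filter are all hypothetical placeholders, not anything from Eliezer’s post:

```python
from typing import Callable, Iterator, Optional

def outcome_pump_v2(
    propose_outcomes: Callable[[], Iterator[str]],         # hypothetical planner: yields candidate outcomes
    human_accepts: Callable[[str], bool],                   # the "Accept Outcome" button
    auto_reject: Callable[[str], bool] = lambda o: False,   # optional filter for obviously unsuitable outcomes
) -> Optional[str]:
    """Propose outcomes one at a time; nothing is implemented without explicit approval."""
    for outcome in propose_outcomes():
        if auto_reject(outcome):
            continue  # pre-filtered, never shown to the human
        if human_accepts(outcome):
            return outcome  # only an approved outcome gets implemented
    return None  # ran out of proposals: the benign failure mode

# Toy usage with placeholder proposals:
proposals = lambda: iter(["launch mother out of the window", "carry mother down the stairs"])
print(outcome_pump_v2(proposals, human_accepts=lambda o: "stairs" in o))
```

The point of the sketch is only that the accept step sits between the search and the world, so a bad proposal costs review time rather than a catastrophe.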
This probably has some terrible flaw in it that I’m overlooking, of course, since I am not an expert in the field, but if there is one, it isn’t obvious enough for a layman to see. Or, just as likely, someone else came up with it first and published a paper describing exactly this. So I’m asking here.
The part where you read excerpts from HP Lovecraft and the sequences makes my cult sensors go off like a foghorn. All of the rest of it seems perfectly benign. It’s just the readings that make me think “CULT!” in the back of my mind. It seems like they’re being used as a replacement for a sacred text and being used to sermonize with. If it weren’t for that, this would seem much less cultish and more like a university graduation or a memorial or other secular-but-accepted-and-important ritual.