It wasn’t just Roko, although Roko’s post was what finally irritated me enough to make a post about it.
How about this guy a couple comments down?
Actually, I’m not sure he’s even serious, but I’ve certainly seen that argument advanced before. The parent post’s “1% chance” thing is, I’m pretty sure, a parody of the idea that you have to give anything at least a 1% chance, because it’s all so messy and how can you ever be sure?! That idea has certainly shown up on this site on several occasions recently, particularly in relation to the very extreme fringe scenarios you say help people think more clearly.
Torture scenarios have LONG been advanced in this community as more than a trolley problem with added poor-taste hyperbole. Even if you go back to the SL4 mailing list, it’s full of discussions where someone says something about AI, and someone else replies “what, so an AI is like god in this respect? What if it goes wrong? What if religious people make one? What if my mean neighbor gets uploaded? What if what if what if? WE’LL ALL BE TORTURED!”
If you ask me, the prevalence of torture scenarios on this site has very little to do with clarity and a great deal to do with a certain kind of autism-y obsession with things that might happen but probably won’t.
It’s the same mental machinery that makes people avoid sidewalk cracks or worry their parents have poisoned their food.
A lot of times it seems the “rationality” around here simply consists of an environment that enables certain neuroses and personality problems while suppressing more typical ones.
This is an excellent point. Somebody should make a chain letter sort of thing about something really GOOD happening, and then get it posted everywhere, and maybe with enough comments we can increase the probability of that by orders of magnitude!
And if you don’t repost it, you’ll be trapped in Ohio for 7000000000000000.2 years.
Or does it only work with bad things?
Certain patterns of input may be dangerous, but knowledge isn’t a pattern of input; it can be formatted in a myriad of ways, and it’s not generally that hard to find a safe one. There’s a picture of a french fry that crashes AOL Instant Messenger, but that doesn’t mean it’s the french fry that’s the problem. It’s just the way it’s encoded.
I find even monogamous relationships burdensomely complicated, and the pool of people I like enough to consider dating is extremely small. I have no moral objections to polyamory, but it makes me tired just thinking about it.
I don’t think you’re right… isn’t it broken down into Planck lengths or something?
Add a payoff and the answer becomes clear, and it also becomes clear that the answer depends entirely on how the payoff works.
Without a payoff, this is a semantics problem revolving around the ill-defined concept of expectation and will continue to circle it endlessly.
Reading my RSS feeds, cuz I’m bored.
I think you have it completely backwards. SIA isn’t based on egotism, but precisely the reverse. You’re more likely, as a generic observer, to exist in a world with more generic observers, because you AREN’T special, and in the sense of being just a twinkle of a generic possible person, you could be said to be equally all 99 people in a 99-person world.
You are more likely to be in a world with more people because it’s a world with more of YOU.
Here’s the problem. YOU’RE the egoist, in the sense that you’re only tallying the score of one random observer out of 99, as though the other 98 don’t matter. We have a possible world where one person is right or wrong, and a possible world where 99 people are right or wrong, but for some reason you only care about 1 of those 99 people.
EDIT: more talking
Under anthropic reasoning, if we flip a coin, and create 5 observers if it’s heads, or 95 observers if it’s tails, and if all you know is that you are an observer created after the coin flip, the way you guess which of the 100 possible observers you are is to pick randomly among them, giving you a 5% chance of being a heads observer and a 95% chance of being a tails observer.
Under nonanthropic reasoning, it’s a little more complicated. We have to stretch the probabilities of being the 5 heads-world observers so that they take up as much probability space as the 95 tails-world observers. Because, so the thinking goes, your likelihood of being in a possible world doesn’t depend on the number of observers in that world. Unless the number is zero, then it does. Please note that this special procedure is performed ONLY when dealing with situations involving possible worlds, and not when both worlds (or hotels, or whatever) actually exist. This means that nonanthropic reasoning depends on the many-worlds interpretation of quantum mechanics being false, or at least, if it’s true, coin flips go back to being covered by anthropic reasoning and we have to switch to situations that are consequent on some digit of pi or something.
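To make the arithmetic concrete, here’s a minimal sketch (my own illustration in Python, not anything from the original discussion) of how the two procedures assign probabilities for the 5-heads-observers vs. 95-tails-observers setup:

```python
# Rough sketch, assuming a fair coin that creates 5 observers on heads
# and 95 observers on tails, and that you only know you were created
# after the flip.

def anthropic(heads_obs=5, tails_obs=95):
    # Pick uniformly among all 100 possible observers: your chance of being
    # a heads-observer is just the heads-observers' share of the total.
    total = heads_obs + tails_obs
    return heads_obs / total, tails_obs / total

def nonanthropic(heads_obs=5, tails_obs=95):
    # Give each possible WORLD equal weight, regardless of how many
    # observers it contains, as long as it contains at least one.
    p_heads_world = 0.5 if heads_obs > 0 else 0.0
    p_tails_world = 0.5 if tails_obs > 0 else 0.0
    norm = p_heads_world + p_tails_world
    return p_heads_world / norm, p_tails_world / norm

print(anthropic())     # (0.05, 0.95): 5% heads, 95% tails
print(nonanthropic())  # (0.5, 0.5): the 5 heads observers "stretched" to half
```

The only difference between the two functions is whether the observer counts are allowed to matter, which is exactly the data the nonanthropic procedure throws out.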
This smells a little fishy to me. It seems like there’s a spanner in the works somewhere, ultimately based on a philosophical objection to the idea of a counterfactual observer, which results in a well-hidden but ultimately mistaken kludge in which certain data (the number of observers) is thrown out under special circumstances (the number isn’t zero and they only exist contingent on some immutable aspect of the universe which we do not know the nature of, such as a particular digit of pi).
I don’t think addiction IS a form of akrasia.
I said basically the same thing in the SL4 IRC room ages ago, and Yudkowsky disagreed and said he still believed in the thread of consciousness, and I asked why, and he said he was too busy to explain.
On the other hand, many issues really do seem to boil down to such a simple narrative, something best stated in quite stark terms. Individuals who are making an effort to be measured and rational often seem to reject out of hand the possibility that such simple, clearcut conclusions could possibly be valid, leading to the opposite bias—a sort of systemic “fallacy of moderation”. This can cause popular acquiescence to beliefs that are essentially wrong, such as the claim that “the Democrats do it too” when pointing out evils committed by the latest generation of Republicans.
A good way to begin an argument is by asking questions about the other person’s position, to get it nailed down.
When an airplane crashes, the wreckage is preserved in painstaking detail, often re-assembled in warehouses in exactly the configuration it was found at the crash site, in order to determine exactly what went wrong.
You would think that when a 47-story skyscraper spontaneously collapses, a wholly unprecedented event, this engineering failure would be investigated even MORE thoroughly. But instead, it’s simply melted down in blast furnaces, over the objections of the victims’ families and, among others, Fire Engineering magazine, which said something like “this destruction of evidence must stop immediately”.
I’ve attempted to simply reply to people’s questions and objections as they’re made, thus visiting the weaker parts of my position. The evidence for controlled demolition, the starting point of this argument, is far from unimpeachable. I certainly wouldn’t call it a “slam-dunk”, and there are many truthers who think it’s misinformation, as Yudkowsky jokingly proposes.
The best evidence of complicity, at least in my opinion, is the behavior of the administration following the attacks. Their efforts to hinder the investigation are a matter of public record, and quite inarguable.
http://video.google.com/videoplay?docid=111797990720729032&hl=en&emb=1#49m22s
I am sorry for linking to things rather than making my arguments in my own words, but I’m arguing with about a dozen people at this point and I’m spread pretty thin.
As the video explains, it later came to light that Flight 77 did not HAVE airphones.
Lesswrong: All torture, all the time.