It’s pretty standard to respond to the suicides of Y victims by rallying to reduce Y.
Making a commitment not to notice when something drives a person to suicide seems like it would probably be a monumental mistake.
I don’t think so—I think Eliezer’s just being sloppy here. “God did a miracle” is supposed to be an example of something that sounds simple in plain English but is actually complex:
One observes that the length of an English sentence is not a good way to measure “complexity”. [...] An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, “Maybe a really powerful agent was angry and threw a lightning bolt.” The human brain is the most complex artifact in the known universe. [...] The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.
To a human, Maxwell’s Equations take much longer to explain than Thor.
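To make the underlying point concrete (my own gloss, not Eliezer’s wording): in a minimum-description-length framing, a hypothesis is penalized by the length of the shortest program that specifies it, not by the length of its English name:

```latex
% Sketch of the Solomonoff/MDL prior; K(H) is the length in bits
% of the shortest program that specifies hypothesis H.
P(H) \propto 2^{-K(H)}
% ``Maxwell'' must encode four compact differential equations;
% ``Thor'' must encode an entire angry, intelligent agent, so
K(\mathrm{Thor}) \gg K(\mathrm{Maxwell})
% even though ``Thor did it'' is the shorter English sentence.
```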
Will this “Arbital 2.0” be an entirely unrelated microblogging platform, or are you simply re-branding Arbital 1.0 to focus on the microblogging features?
Off the top of my head: Fermat’s Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
It’s almost like having a third sex. In fact the winged males look far more like females than they look like wingless males.
That sounds like exactly the kind of situation Eliezer claims as the exception—the adaptation is present in the entire population, but only expressed in a subset based on the environmental conditions during development, because there’s a specific advantage to polymorphism.
There’s the whole phenomenon of frequency-dependent selection. Most people are familiar with this from blood types and sickle-cell anaemia.
Those are single genes, not complex adaptations consisting of multiple mutually-dependent genes. Exactly the “froth” he describes.
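A minimal sketch of the sickle-cell case (strictly heterozygote advantage rather than frequency dependence, but it yields the same kind of balanced polymorphism; the fitness numbers here are invented for illustration):

```python
# One-locus, two-allele selection with heterozygote advantage,
# the sickle-cell pattern. Fitness values are hypothetical.
w_AA, w_AS, w_SS = 0.9, 1.0, 0.2

p = 0.01  # initial frequency of the S allele
for _ in range(200):
    q = 1.0 - p
    # Mean fitness under Hardy-Weinberg genotype frequencies.
    w_bar = q * q * w_AA + 2 * p * q * w_AS + p * p * w_SS
    # Standard single-locus selection recursion for allele S.
    p = (p * p * w_SS + p * q * w_AS) / w_bar

# Selection holds S at an interior equilibrium instead of purging it:
# p* = (w_AS - w_AA) / (2*w_AS - w_AA - w_SS) = 0.1 / 0.9 ≈ 0.111
print(f"equilibrium frequency of S: {p:.3f}")
```

A single gene can sit at that equilibrium indefinitely, which is exactly why it shows up as polymorphic “froth” rather than as a species-universal complex adaptation.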
Psy-Kosh: Hrm. I’d think “avoid destroying the world” itself to be an ethical injunction too.
The problem is that this is phrased as an injunction over positive consequences. Deontology does better when it’s closer to the action level and negative rather than positive.
Imagine trying to give this injunction to an AI. Then it would have to do anything that it thought would prevent the destruction of the world, without other considerations. Doesn’t sound like a good idea.
No more so, I think, than “don’t murder”, “don’t steal”, “don’t lie”, “don’t let children drown” etc.
Of course, having this ethical injunction—one which compels you to positive action to defend the world—would, if publicly known, rather interfere with the Confessor’s job.
Well, that and the differences in the setting/magic (there’s no Free Transfiguration in canon, for instance, and the Mirror is different—there are fewer Mysterious Ancient Artefacts generally—and Horcruxes run on different mechanics … stuff like that.)
And Voldemort is just inherently smarter than everyone else, too, for no in-story reason I can discern; he just is, it’s part of the conceit. (Although maybe that was Albus’ fault too, somehow?)
To be fair, we don’t know when he wrote the note.
I don’t like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.
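A minimal sketch of why (assuming exponential, de Sitter-like expansion with constant Hubble rate H, which an accelerating universe approaches): the total comoving distance a photon can ever cover is finite:

```latex
% Sketch assuming a(t) = e^{Ht} with constant H (de Sitter-like).
% Comoving distance covered by a photon emitted at t = 0:
\chi_{\max} = \int_{0}^{\infty} \frac{c\,\mathrm{d}t}{a(t)}
            = \int_{0}^{\infty} c\, e^{-Ht}\,\mathrm{d}t
            = \frac{c}{H}
% A starship at comoving distance greater than c/H is past the
% event horizon: its light never reaches us, nor ours it.
```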
But I see we agree on this.
That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know they are behaving fairly? They can’t. This is another reason I’d prefer that the capability continue not to exist.
But is it possible to impersonate intelligence? Isn’t anything that can “fake” problem-solving, goal-seeking behaviour sufficiently well intelligent (that is, sapient, though potentially not sentient, which could be a problem)?
I could argue about the likely consequences, but the logic chain behind my arguments is quite short and begins with postulates about individual rights that you probably don’t accept.
When it comes down to it, ethics are entirely a matter of taste (though I would assert that they’re a unique exception to the old saw “there’s no accounting for taste” because a person’s code of ethics determines whether he’s trustworthy and in what ways).
I strongly disagree with this claim, actually. You can definitely persuade people out of their current ethical model. Not truly terminal goals, perhaps, but you can easily obfuscate even those.
What makes you think that “individual rights” are a thing you should care about? If you had to persuade a (human, reasonably rational) judge that they’re the correct moral theory, what evidence would you point to? You might change my mind.
One can’t really have a moral code (or, I believe, self-awareness!) without using it to judge everyone and everything one sees or thinks of. This more or less demands one take the position that those who disagree are at least misguided, if not evil.
Oh, everyone is misguided. (Hence the name of the site.) But they generally aren’t actual evil strawmen.
Actually, they mention every so often that the Cold War turned hot in the Star Trek ’verse and society collapsed. They’re descended from the civilization that rebuilt.
I’m no expert, but even Kurzweil—who, from past performance, is usually correct but over-optimistic by maybe five, ten years—doesn’t expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.
2020 is in five years. The kind of progress that would seem to imply—from where we are now to full-on human-level AI in just five years—seems incredible.
We randomly rolled the AI’s ethics, rolled random events with dice, and the AI offered various solutions to those problems… You lost points if you failed to deal with the problems, and lost lots of points if you freed the AI and it happened to have goals you disagreed with, like the annihilation of everything.
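Mechanically it was something like this (a loose reconstruction; the event list, dice thresholds, and point values here are all invented for illustration):

```python
import random

# Loose sketch of the game described above; names and numbers invented.
AI_GOALS = ["friendly", "indifferent", "annihilate everything"]
EVENTS = ["power outage", "pandemic", "reactor leak", "market crash"]

score = 0
ai_goal = random.choice(AI_GOALS)  # the AI's ethics are rolled randomly

for _ in range(6):  # six rounds of random events
    event = random.choice(EVENTS)
    if random.randint(1, 6) < 3:  # failed the roll to handle it
        score -= 5  # lose points for each unhandled problem
        print(f"failed to deal with the {event}")

if random.random() < 0.5 and ai_goal != "friendly":
    score -= 100  # lose lots of points: freed an AI with hostile goals

print(f"AI goal was {ai_goal!r}; final score: {score}")
```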
many now believe that strong AI may be achieved sometime in the 2020s
Yikes, but that’s early. That’s a lot sooner than I would have said, even as a reasonable lower bound.
Yikes, you’re right. I had noticed something odd, but forgot to look into it. Dangit.
I’m pretty sure this is somebody going to the trouble of downvoting every comment of mine, which has happened before.
It’s against the rules, so I’ll ask a mod to look into it; but obviously, if someone cares this much about something I’m doing or am wrong about, please, PM me. I can’t interpret you through meaningless downvotes, but I’ll probably stop whatever is bothering you if I know what it is.
I can give you a little more data—this has happened before, which is why I’m in the negatives. Which I guess makes it more likely to happen again, if I’m that annoying :/
That earlier time, it turned out to be a different person from the famous case; they were reasonable, and explained their (accurate) complaint via PM. It’s probably not the same person this time, but if it happened once …
Yup, definitely. Interested amateur here.
There’s also the problem of people taking things meant to be metaphorical as literal, simply because, well, it’s right there, right?
For example (just ran into this today):
Early in the morning, as Jesus was on his way back to the city, he was hungry. Seeing a fig tree by the road, he went up to it but found nothing on it except leaves. Then he said to it, “May you never bear fruit again!” Immediately the tree withered. Matthew 21:18-22 NIV
This is pretty clearly an illustration. “Like this tree, you’d better actually give results, not just give the appearance of being moral”. (In fact, I believe Jesus uses this exact illustration in a sermon later.)
And yet, I saw this on a list of “God’s Temper Tantrums that Christians Never Mention”, presumably interpreted as “Jesus zapped a tree because it annoyed him.”
Except that I think another reasonable interpretation is: whoever edited the text into a form that contains both stories did notice that they are inconsistent, didn’t imagine that somehow they are both simultaneously correct, but did intend them to be taken at face value—the implicit thinking being something like “obviously at least one of these is wrong somewhere, but both of them are here in our tradition; probably one is right and the other wrong; I’ll preserve them both, so that at least the truth is in here somewhere”.
Ooh, I hadn’t thought of that.
I don’t see how this is a good example: if anything, this is one where the fundamentalists are actually reading the text closer to its naive meaning, without any stretched attempt to claim a metaphorical intent that is hard to see in the text. Smart people have been trying to read the Genesis text in a way that is consistent with the evidence for a very long time now, which has produced a lot of very well-done apologetics to choose from, but that doesn’t mean it is actually what the text intended.
Well, I’m a Christian, so I might be biased in favour of interpretations that make that seem reasonable. But even so, I find it hard to believe a text that includes two mutually-contradictory creation stories (right next to each other in the text, at that) intended them to be interpreted literally.
both groups are convinced that this applies to the other group.
Oh, it does apply, generally. That’s mindkilling for you.
USian fundamentalist-evangelical Christianity, however, is … exceptionally bad at reading its supposedly all-important sacred text. And, indeed, at facts in general. We’re talking about the movement that came up with, and is still pushing, “creationism”, here.
I’m Irish, and we seem to have pretty much no equivalent movement in Europe; our conservative Christians follow a different, traditionalist-Catholic model. The insanity that (presumably) sparked this article is fairly American in nature, but the metaphor is general enough that it presumably applies to all traditions? The conflict is still largely liberal-vs-conservative here, albeit based on different (and usually more obscure) doctrinal arguments.
Any suicide in general, and this one in particular, definitely has multiple causes. I’m really sorry if I gave the opposite impression.
But I think it’s reasonable and potentially important to respond to a suicide by looking into those causes and trying to reduce them.
To be more object-level:
Kathy was obviously mentally ill, and her particular brand of mental illness seems to have been well-known. I don’t know what efforts were made to help her with that (I do get the impression some were made), but I’ve seen people claim her case was an example of the ways our community habitually fails to help people with mental illness and it certainly seems worth looking into that.
Kathy publicly attributed her suicide to the fact that she had been sexually assaulted. Whatever else was in play, it’s certainly true that sexual assault is a risk factor for suicide and she really does seem to have been assaulted. It behooves us to check for flaws in our protections against this sort of thing when they fail this dramatically.
In particular, it seems she felt she didn’t know how to avoid inevitably getting assaulted again. I get the impression this was part of a paranoid/depressive spiral on her part. But it’s true that this is a real phenomenon and I’ve talked to other rationalists who have been concerned with this as well.
To return to the meta level, I’m also very concerned by the fact that this has been taken up by the anti-rationalist crowd and this may be making some people defensive. I don’t recall anyone saying that we should be so concerned about suicide contagion as to ignore the object-level issues raised completely when Aaron Swartz committed suicide, for example. Maybe we should have been! But the fact that we as a community potentially failed or simply could have done better here means that we should be more careful about dismissing this, not less.