Are you claiming that a being must be a moral agent in order to be a moral patient?
I was on an Android tablet, which I use in a laptop-like fashion (landscape mode, with keyboard) but which usually gets the mobile version of sites that try to be mobile-friendly.
The section presumes that the audience already agrees about veganism. To an audience that isn’t on board with EA veganism, that line comes across as the “arson, murder, and jaywalking” trope.
Advanced rationality techniques, at least when applied to one’s self-conception and life choices, are basically therapy. “Failures of basic rationality” are often better described as “mental health issues”. Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I’ve seen it hypothesized that one reason we have so many mentally ill rationalists is that people with mental health issues must learn rationality in order to function, at least to a greater degree than most people need.
This reminds me of Romeo’s comment over here:
http://lesswrong.com/lw/oym/how_id_introduce_lesswrong_to_an_outsider/dryk
I’m curious if there’s much record of intentional communities that aren’t farming communes.
Oneida comes to mind. They had some farming (it was upstate New York in the 1850s, after all) but also a lot of manufacturing — most famously silverware. The community is long gone, but the silverware company is still around.
We should increase awareness of old fairy tales with a jinn who misinterprets wishes.
The most popular UFAI story I’m aware of is “The Sorcerer’s Apprentice”.
Sticking with European folktales that were made into classic Disney cartoons, maybe the analogy to be made is “AI isn’t Pinocchio. It’s Mickey’s enchanted brooms. It doesn’t want to be a Real Boy; it just wants to carry water. The danger isn’t that it will grow up to be a naughty boy if it doesn’t listen to its conscience. It’s that it cannot care about anything other than carrying water, including whether or not it’s flooding your home.”
Thing is, much of the popular audience doesn’t really know what code is. They’ve never written a bug and had a program do something unintended … because they’ve never written any code at all. They’ve certainly never written a virus or worm, or even a script that accidentally overwrites their files with zeroes. They may have issued a bad order to a computer (“Oops, I shouldn’t have sent that email!”) but they’ve never composed and run a non-obviously bad set of instructions.
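For illustration, here’s a hypothetical few-line “backup” script with exactly that kind of non-obviously bad instruction in it (the filenames are made up):

```python
# A "backup" script with a one-character bug: the mode should be "r".
# Opening a file in "w+" mode truncates it to zero bytes immediately,
# so this loop silently destroys every file it was meant to protect.
for name in ["notes.txt", "thesis.txt"]:
    with open(name, "w+") as src:   # truncates the original on open!
        data = src.read()           # reads back an empty string
    with open(name + ".bak", "w") as dst:
        dst.write(data)             # writes an empty "backup"
```

It runs without any error message; the damage is done before anything looks wrong.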
So, aside from folklore, better CS education may be part of the story here.
Why do you care if you’re running in a simulation? Well, if you’re a piece of malware, like a computer virus or Internet worm, you care whether you’re being analyzed by an anti-malware researcher. If you’re running in a researcher’s lab instead of actually infecting target systems, you’d want to shut down so the researcher can’t observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you’re running in a simulation when you’re not, you can be made to shut down in the wild.
Yep, that happened. It’s not even the first time.
The WannaCrypt worm contained logic that basically said, “Look for the following properties in the Internet. If you observe them, that means you’re not running in the real Internet; you’re running in a simulation.” But a researcher was able to make those properties true in the real Internet, thereby convincing the live malware infesting actual systems that it was in a simulation, and so it shut down.
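A minimal sketch of that kind of kill-switch logic, with a made-up domain standing in for the real one:

```python
import socket

# Hypothetical reconstruction of a WannaCry-style kill switch.
# The worm never registers this domain itself; in the wild it should
# never resolve, so a successful lookup suggests a fake "Internet"
# inside an analysis sandbox.
KILL_SWITCH_DOMAIN = "made-up-killswitch-example.invalid"

def looks_like_a_simulation():
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
    except socket.gaierror:
        return False  # domain doesn't resolve: plausibly the real Internet
    return True       # domain resolves: assume we're being analyzed

if looks_like_a_simulation():
    raise SystemExit  # shut down rather than reveal real behavior
# ...otherwise, the payload would run here.
```

Once a researcher registers the domain in the real DNS, the check returns True everywhere, and the live worm obligingly shuts itself down.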
Anti-analysis or anti-debugging features, which attempt to ask “Am I running in a simulation?”, are not a new thing in malware, or in other programs that attempt to extract value from humans — such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and where mistaken beliefs about whether you’re in a simulation can have devastating effects on your ability to function.
Harry Frankfurt’s “On Bullshit” introduced the distinction between lies and bullshit. The liar wants to deceive you about the world (to get you to believe false statements), whereas the bullshitter wants to deceive you about his intentions (to get you to take his statements as good-faith efforts, when they are merely meant to impress).
We may need to introduce a third member of this set. Along with lies told by liars, and bullshit spread by bullshitters, there is also spam emitted by spambots.
Like the bullshitter (but unlike the liar), the spambot doesn’t necessarily have any model of the truth of its sentences. Unlike the bullshitter, though, the spambot doesn’t particularly care what (or whether) you think of it; it optimizes its sentences purely to cause you to take a particular action.
Thank you.
Caution: This is not just a survey. It is also a solicitation to create a public online profile.
In the future, please consider separating surveys from solicitations, or disclosing up front that you are not just conducting a survey.
When I got to the part of this that started asking for personally identifying information to create a public online profile, it felt to me like something sneaky was going on: that my willingness to help with a survey was being misused as an entering-wedge to push me to do something I wouldn’t have chosen to do.
I considered, for a moment, putting bogus data in as a tit-for-tat defection in retribution for the dishonesty. I didn’t do so, because the problem isn’t with the survey aspect; it’s with not saying up front what you are up to. Posting this comment seemed more effective at discouraging that than sticking a shoe in your data.
Just a few groups that have either aimed at similar goals, or have been culturally influential in ways that keep showing up in these parts —
The Ethical Culture movement (Felix Adler).
Pragmatism / pragmaticism in philosophy (William James, Charles Sanders Peirce).
General Semantics (Alfred Korzybski).
The Discordian Movement (Kerry Thornley, Robert Anton Wilson).
The skeptic/debunker movement within science popularization (Carl Sagan, Martin Gardner, James Randi).
General Semantics is possibly the closest to the stated LW (and CFAR) goals of improving human rationality, since it aimed at improving human thought through adopting explicit techniques to increase awareness of cognitive processes such as abstraction. “The map is not the territory” is a g.s. catchphrase.
Maybe starting the Church of the Frost Giants and declaring cryonic suspension to be a religiously mandated funerary practice would work to that end.
I think actually reviving some ice mice might be a bigger step, though.
What does “successful” look like here? Number of patients in cryonic storage? Successfully revived tissues or experimental animals?
In many towns in the US, high school sports (especially football) are not just a recreational activity for students, but rather a major social event for the whole community.
This is an algorithm for producing filter bubbles, rather than for discovering or implementing community norms.
String substitution isn’t truth-preserving; there are some analogies and some disanalogies there.
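A toy illustration (the sentence is invented for the example):

```python
# "Mark Twain" and "Samuel Clemens" refer to the same man, yet blind
# string substitution does not preserve truth inside quotation marks:
s = '"Mark Twain" was a pen name.'            # true
print(s.replace("Mark Twain", "Samuel Clemens"))
# -> '"Samuel Clemens" was a pen name.'       # false: that was his real name
```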
One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.
Composing a comment and then deciding not to post it can be a good form of rubber-ducking.
I think this idea is worth pursuing.
We do really conspire! Conspiring is at best a handy social and economic coordination activity. At worst it is a big bunch of no fun, where people have to pretend to be conspiring while they’d really rather be working on personal projects, flirting, or playing video games; and everyone comes out feeling like they need to hide their freakish incompetence at pursuing the goals of the conspiracy.
We usually call it “having meetings” though.