I think this post cleanly and accurately elucidates a dynamic in conversations about consciousness. I hadn’t put my finger on this before reading this post, and I now think about it every time I hear or participate in a discussion about consciousness.
Eli Tyre
Short, as near as I can tell, true, and important. This expresses much of my feeling about the world.
Perhaps one of the more moving posts I’ve read recently, of direct relevance to many of us.
I appreciate the simplicity and brevity in expressing a regret that I resonate with strongly.
The general exercise of reviewing prior debate, now that (some of) the evidence has come in, seems very valuable, especially if one side of the debate is making high-level claims that their view has been vindicated.
That said, there were several points in this post where I thought the author’s read of the current evidence was off or mistaken. I think this overall doesn’t detract too much from the value of the post, especially because it prompted discussion in the comments.
I don’t remember the context in detail, so I might be mistaken about Scott’s specific claims. But I currently think this is a misleading characterization.
It’s conflating two distinct phenomena, namely non-mystical cult-leader-like charisma / reality distortion fields, on the one hand, and metaphysical psychic powers, on the other, under the label “spooky mind powers”, to imply that someone is reasoning in bad faith or at least inconsistently.
It’s totally consistent to claim that the first thing is happening, while also criticizing someone for believing that the second thing is happening. Indeed, this seems like a correct read of the situation to me, and therefore a natural way to interpret Scott’s claims.
I think about this post several times a year when evaluating plans.
(Or actually, I think about a nearby concept that Nate voiced in person to me, about doing things that you actually believe in, in your heart. But this is the public handle for that.)
I don’t understand how the second sentence follows from the first?
Disagreed, insofar as by “automatically converted” you mean “the shortform author has no recourse against this”.
No. That’s why I said the feature should be optional. You can make a general default setting for your shortform, plus there should be a toggle (hidden in the three-dots menu?) to turn this on and off on a post-by-post basis.
I agree. I’m reminded of Scott’s old post The Cowpox of Doubt, about how a skeptics’ movement focused on the most obvious pseudoscience is actually harmful to people’s rationality, because it reassures them that rationality failures are mostly obvious mistakes that dumb people make, instead of hard-to-notice mistakes that I make.
And then we get people believing all sorts of shoddy research – because after all, the world is divided between things like homeopathy that Have Never Been Supported By Any Evidence Ever, and things like conventional medicine that Have Studies In Real Journals And Are Pushed By Real Scientists.
Calling groups cults feels similar, in that it allows one to write them off as “obviously bad” without need for further analysis, and reassures one that their own groups (which aren’t cults, of course) are obviously unobjectionable.
Read ~all the sequences. Read all of SSC (don’t keep up with ACX).
Pessimistic about survival, but attempting to be aggressively open-minded about what will happen, instead of confirmation-biasing my views from 2015.
your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways
Or, more specifically, this is a non sequitur to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.
Well presumably because they’re not equating “moral patienthood” with “object of my personal caring”.
Something can be a moral patient, who you care about to the extent you’re compelled by moral claims, or whose rights you are deontologically prohibited from trampling on, without your caring about that being in particular.
You might make the claim that calling something a moral patient is the same as saying that you care (at least a little bit) about its wellbeing, but not everyone buys that claim.
An optional feature that I think LessWrong should have: shortform posts that get more than some amount of karma get automatically converted into personal blog posts, including all the comments.
It should have a note at the top “originally published in shortform”, with a link to the shortform comment. (All the copied comments should have a similar note).
What would be the advantage of that?
There’s some recent evidence that non-neural cells have memory-like functions. This doesn’t, on its own, entail that non-neural cells are maintaining personality-relevant or self-relevant information.
I got it eventually!
Shouldn’t we expect that ultimately the only thing selected for is mostly caring about long run power?
I was attempting to address that in my first footnote, though maybe it’s too important a consideration to be relegated to a footnote.
To say it differently, I think we’ll see selection on evolutionary fitness, which can take two forms:

- Selection on AIs’ values, for values that are more fit, given the environment.
- Selection on AIs’ rationality and time preference, for long-term strategic VNM rationality.
These are “substitutes” for each other. An agent can have adaptive values, adaptive strategic orientation, or some combination of both. But agents that fall below the Pareto frontier described by those two axes[1] will be outcompeted.
Early in the singularity, I expect to see more selection on values, and later in the singularity (and beyond), I expect to see more selection on strategic rationality, because I (non-confidently) expect the earliest systems to be myopic and incoherent in roughly similar ways to humans (though probably the distribution of AIs will vary more on those traits than humans).
The fewer generations there are before strong VNM agents with patient values / long time preferences, the less I expect small amounts of caring for humans in AI systems to be eroded.

[1] Actually, “axes” is a bit misleading, since the space of possible values is vast and high-dimensional. But we can project it onto the scalar of “how fit are these values (given some other assumptions)?”
[I can imagine this section being mildly psychologically info-hazardous to some people. I believe that for most people reading this is fine. I don’t notice myself psychologically affected by these ideas, and I know a number of other people who believe roughly the same things, and also seem psychologically totally healthy. But if you are the kind of person who gets existential anxiety from thought experiments, like from thinking about being a Boltzmann-brain, then you should consider skipping this section, I will phrase the later sections in a way that they don’t depend on this part.]
Thank you for the warning!
I wasn’t expecting to read an argument that the very fact that I’m reading this post is reason to think that I (for some notion of “I”) will die, within minutes!
That seems like a reasonable thing to have a content warning on.
To whom are you talking?
If your takeaway is only that you should have fatter tails on the outcomes of an aspiring rationality community, then I don’t object.
If “I got some friends together and we all decided to be really dedicatedly rational” is intended as a description of Ziz and co, I think it is at least missing many crucial elements, and generally not a very good characterization.