I no longer endorse many of my comments from more than a few years ago, and I very much no longer endorse the argumentative, aggressive tone in which many of my comments were written, including ones whose content I still endorse
Whaliezer Seacowsky, founder of the Marine Intelligence Research Institute, is giving a lecture on the dangers of AI (Ape Intelligence).
“Apes are becoming more intelligent at a faster rate than we are. At this pace, within a very short timeframe they will develop greater-than-whale intelligence. This will almost certainly have terrible consequences for all other life on the planet, including us.”
Codney Brooks, a skeptic of AI x-risk, scoffs: “Oh come now. Predictions of risk from AI are vastly overblown. *Captain-Ahab, or, The Human* is a science fiction novel! We have no reason to expect smarter-than-whale AI, if such a thing is even possible, to hate whalekind. And they are clearly nowhere near developing generalized capabilities that could rival ours—their attempts at imitating our language are pathetic, and the deepest an ape has ever dived is a two-digit number of meters! We could simply dive a kilometer under the surface and they’d have no way of affecting us. Not to mention that they’re largely confined to land!”
Whaliezer replies: “The AI doesn’t need to hate us in order to be dangerous to us. We are, after all, made of blubber that they can use for other purposes. Simple goals like obtaining calories, creating light, or transporting themselves from one bit of dry land to another across the ocean could cause us inconceivable harm, even if they never target us directly, simply as a side effect of pursuing those goals!”
One audience member turns to another. “Creating light? What, we’re afraid they’re going to evolve a phosphorescent organ and that’s going to be dangerous somehow? I don’t know, the danger of digital intelligences seems really overblown. I think we could gain a lot from cooperating with them to hunt fish. I say we keep giving them nootropics, and if this does end up becoming dangerous at some point in the future, we deal with the problem then.”
I’ve been going back and forth on the spring/summer/fall/winter framing versus a q1/q2/q3/q4 framing. I like your observation about the symbolism of the seasons! It wasn’t a deliberate choice, but it also wasn’t a coincidence, because nothing is ever a coincidence
“How important is it for singalongs to sound polished, vs for them to feel like an organic part of the community? Is it appropriate to pay professional musicians?”
> Organic part of the community: incredibly important. Polished: of negative value. Paying professionals: I would prefer not.

This is the part I care the most about. If I wanted to hear professional musicians I would go to a concert. At this community holiday, I want to hear, and participate in, communal singing. I don’t want to feel self-conscious about not being a very good singer. I want me and everyone else to get swept up in the moment and the song. I can recall two different Solstices I went to: one in NYC that had some technical issues, wasn’t super duper polished, and that everyone sang together in; and one in the Bay that was much more polished and fancy and professional, with well-trained musicians singing while mic’d up. I left the former with a powerful sense of community and a sense of having undergone an important emotional journey. I left the latter with a sense of embarrassment at myself for having attempted to participate in the music, as if I had caught myself singing along at the opera, and frustration at not having gotten the emotional catharsis I wanted. I found myself thinking “maybe Solstice isn’t for me anymore”.
I genuinely can’t remember if I’ve been to a Secular Solstice since then, but I have sung Brighter Than Today to myself and been overcome with emotion and cried.
Here are my thoughts on your opening questions:
“Is Solstice primarily a rationality holiday? An EA holiday? The broader secular community?”
Empirically and normatively, rationalist.
“How essential is the journey from light, into darkness, into light?”
Pretty darn important. As you ask at the end, I could see an occasional or one-time “journey from light, into darkness, and that’s it” story. It would make for a good “final episode” before the world ends. I’m reminded of the final episode of the sitcom Dinosaurs, where, due to out-of-control technological change, an ice age ensues and the main characters huddle together for warmth as they slowly freeze to death.
“Is it okay to have a Solstice where we don’t sing Brighter Than Today?”
No. Except maybe in the “from light to darkness” one, where we could sing a version with altered lyrics.
“How important are singalongs vs speeches?”
Singalongs are incredibly important. Speeches I could do without.
“How important is it for singalongs to sound polished, vs for them to feel like an organic part of the community? Is it appropriate to pay professional musicians?”
Organic part of the community: incredibly important. Polished: of negative value. Paying professionals: I would prefer not.
“How important is transhumanism or x-risk?”
X-risk: pretty important. Transhumanism: I think the importance of this varies with how much people think it’s a genuine light of a new day that could save us from x-risk.
“Is it good or bad to change lyrics over time?”
Gut instinct says bad, but I could see arguments for it being good in certain instances. Then again, I’m the kind of guy who still gets annoyed at Church Latin’s pronunciation of “v” and “c”, etc., in Adeste Fideles.
“How important is it to celebrate Solstice on literal astronomical Solstice? If you don’t, why are we calling it Solstice? Is it important for the name to be clear?”
Ideally it would always be on the literal Solstice but scheduling is important too. People ought to be able to actually attend.
“Is it okay to have one solstice someday with a ‘bad ending’, where instead of climbing back out of the darkness hopefully, we just… sit with it, and accept that maybe it might be what the future holds?”
Yes. See above
As someone who has very meager singing ability, I stumble over the transition from “today” to “although”
> In real life, I’d say: “Ok guys, let’s sit in this room, everyone turn off their recording devices, and let’s talk, with the agreement that what happens in this room stays in this room.”
The one time I did this with rationalists, the person (Adam Widmer) who organized the event and explicitly set forth the rule you just described then went on to remember what people had said and bring it up publicly later in order to shame them into changing their behavior to fit his (if you’ll excuse me speaking ill of the dead) spoiled little rich boy desires.
So my advice, based on my experience (and my life would have been noticeably better had someone told me this earlier), is: DON’T do this, and if anyone suggests doing this, stop trusting them and run away
Which is not to say that you are untrustworthy and trying to manipulate people into revealing sensitive information so you can use it to manipulate them; in order for me to confidently reach that conclusion, you’d have to actually attempt to organize such an event, not just casually suggest one on the internet
The in-person community seems much less skeptical of these things than the online community. Which isn’t to say there are no skeptics, but (especially among the higher status members) it’s kind of distressing to see how little skepticism there is about outright silly claims and models. At last year’s CFAR reunion, for instance, there was a talk uncritically presenting chakras as a real thing, and when someone in the audience proposed doing an experiment to test whether they are real or just a placebo effect, the presenter said (paraphrasing) “Hmm, no, let’s not do that. It makes me uncomfortable. I can’t tell why, but I don’t want to do it, so let’s not” and then they didn’t.
This is extremely concerning to me, and I think it should be to everyone else who cares about the epistemological standards of this community
https://slatestarcodex.com/2019/10/21/the-pnse-paper/
So, shouldn’t all the rats who’ve been so into meditation etc. for the past decade or so be kinda panicking at the apparent fact that enlightenment is just Dunning-Krugering yourself into not being able to notice your own incompetence?
My position is “chickens have non-zero moral value, and moral value is not linearly additive.” That is, any additional chicken suffering is bad, any additional chicken having a pleasant life is good, and the total moral value of all chickens, as the number of chickens approaches infinity, approaches something like 1/3rd of a human
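For concreteness, here is one saturating form that captures what I mean, where V(n) is the total moral value, in human-equivalents, of n chickens leading pleasant lives; the particular function and the scale constant k are purely illustrative, not a model I’m committed to:

$$V(n) = \tfrac{1}{3}\left(1 - e^{-n/k}\right), \qquad \lim_{n \to \infty} V(n) = \tfrac{1}{3}$$

Each additional chicken adds strictly positive value, but the total is bounded, so no number of chickens ever adds up to more than about a third of a human.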
For anyone who does think that both 1) chickens have non-zero moral value, and 2) moral value is linearly additive: are you willing to bite the bullet that there exists a number of chickens such that it would be better to cause that many chickens to continue to exist at the expense of wiping out all other sentient life forever? This seems so obviously false, and also so obviously the first thing to think of when considering 1 and 2, that I am confused that there exist folks who accept 1 and 2
Replace “you” with “the hypothetical you who is attempting to convince hypothetical me they exist”, then
> What is the mugging here?
I’m not sure what the other-galaxy-elephants mugging is, but my anti-Pascal’s-mugging defenses are set to defend me against muggings I do not entirely understand. In real life, I think that the mugging is “and therefore it is immoral of you to eat chickens.”
> Why are they “my elephants”?
You’re the one who made them up and/or is claiming they exist.
> When people consider it worse for a species to go from 1000 to 0 members, I think it’s mostly due to aesthetic value (people value the existence of a species, independent of the individuals), and because of option value
Yes, these are among the reasons why moral value is not linearly additive. I agree.
> People would probably also find it tragic for plants to go extinct (and do find languages going extinct tragic), despite these having no neurons at all.
Indeed, things other than neurons have value.
> I personally reject this for animals, though, for the same reasons that I reject it for humans.
Really? You consider a plague that kills 100,000 humans to be equally bad in a world with a population of 100,000 as in a world with a population of 7,000,000,000?
My reply to all of those is “I do not believe you. This sounds like an attempt at something akin to Pascal’s Mugging. I do not take your imaginary elephants into consideration for the same reason I do not apply moral weight to large numbers of fictional elephants in a novel.”
Several of these questions are poorly phrased. For instance, the supernatural and god questions, as phrased, imply that the god chance should be less than the chance of anything supernatural existing. However, I think (and would like to be able to express) that there is a very small (0) chance of ghosts or wizards, but only a small (1) chance of there being some sort of intelligent being which created the universe; for instance, the simulation hypothesis, which I would consider a subset of the god hypothesis.
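To spell out the issue (my own shorthand, just for illustration: S = “something supernatural exists”, G = “some intelligent being created the universe”, where I count the simulation hypothesis under G but not under S): the way the questions are phrased presupposes

$$P(G) \le P(S)$$

whereas my actual estimates come out the other way around, roughly $P(S) \approx 0 < P(G)$, and the answer format gives me no way to express that.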
Interesting list. Minor typo: in “This is where you get to study computing at it’s most theoretical,” the “it’s” should read “its”.
I have started a boardgame company whose first game is up on Kickstarter at the moment. I’m going to bring the no-art, largely hand-written copy that was made for playtesting.
http://www.kickstarter.com/projects/sixpencegames/the-6p-card-game-of-victorian-combat
> Working with an unnamed group of x-risk-cognizant people that LW hasn’t heard of, in a way unrelated to their setting up a non-profit.
Could you tell us about them?
> if the disutility of an air molecule slamming into your eye were 1 over Graham’s number, enough air pressure to kill you would have negligible disutility.
Yes, this seems like a good argument that we can’t add up disutility for things like “being bumped into by particle type X” linearly. In fact, it seems like having 1, or even (however many molecules of air I breathe in a day), air molecules bumping into me is a good thing, and so we can’t just talk about things like “the disutility of being bumped into by a given kind of particle”.
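As a sketch of the kind of non-additive alternative I have in mind (the quadratic form and the constant c are just an illustration), let disutility depend on the total air pressure p around me rather than on a per-molecule sum:

$$D(p) = c\,(p - p_{\text{atm}})^2$$

This gives zero disutility at normal atmospheric pressure and increasing disutility as you move toward either a vacuum or a lethal overpressure, and it can’t be decomposed into independent contributions from individual molecules.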
> If your utility function ceases to correspond to utility at extreme values, isn’t it more of an approximation of utility than actual utility?
Yeah, of course. Why, do you know of some way to accurately access someone’s actually-existing Utility Function in a way that doesn’t just produce an approximation of an idealization of how ape brains work? Because me, I’m sitting over here using an ape brain to model itself, and this particular ape doesn’t even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it’s totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.
> Sure, you don’t need a model that works at the extremes—but when a model does hold for extreme values, that’s generally a good sign for the accuracy of the model.
Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that’s a bad sign for your model.
> doesn’t that assign higher impact to five seconds of pain for a twenty-year old who will die at 40 than to a twenty-year old who will die at 120? Does that make sense?
Yeah, absolutely, I definitely agree with that.
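To make explicit the ratio I’m agreeing to (assuming the weighting is by share of remaining life, which is how I read the question): the first person has 20 years left and the second has 100, so

$$\frac{5\,\text{s}}{20\,\text{yr}} = 5 \times \frac{5\,\text{s}}{100\,\text{yr}}$$

i.e. the same five seconds of pain counts five times as much for the person with the shorter remaining life.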
re: 1) I don’t think we do have fine-grained control over the outcome of the training of LLMs and other ML systems, which is what really matters. See recent emergent self-preservation behavior.
re: 2) I’m saying that I think those arguments are distractions from the much more important one of x-risk. But sure, this metaphor doesn’t address economic impact aside from “I think we could gain a lot from cooperating with them to hunt fish”
re: 3) I’m not sure I see the relevance. The unnamed audience member saying “I say we keep giving them nootropics” is meant to represent AI researchers who aren’t actively involving themselves in the x-risk debate continuing to make progress on AI capabilities while the arguers talk to each other
re: 4) It sounds like you’re comparing something like a log graph of human capability to a linear graph of AI capability. That is, I don’t think that AI will take tens of thousands of years to develop the way human civilization did. My 50% confidence interval on when the Singularity will happen is 2026-2031, and my 95% confidence interval only extends to maybe 2100. I expect there to be more progress in AI development in 2025-2026 than in 1980-2020