Rob B’s Shortform Feed
This is a repository for miscellaneous short things I want to post. Other people are welcome to make top-level comments here if they want. (E.g., questions for me you’d rather discuss publicly than via PM; links you think will be interesting to people in this comment section but not to LW as a whole; etc.)
Shared with permission, a Google Doc exchange confirming Eliezer still finds the arguments for alignment optimism, slower takeoffs, etc. unconvincing:
Caveat: this was a private reply I saw and wanted to share (so people know EY’s basic epistemic state, and therefore probably the state of other MIRI leadership). This wasn’t an attempt to write an adequate public response to any of the public arguments put forward for alignment optimism or non-fast takeoff, etc., and isn’t meant to be a replacement for public, detailed, object-level discussion. (Though I don’t know when/if MIRI folks plan to produce a proper response, and if I expected such a response soonish I’d probably have just waited and posted that instead.)
FWIW, I think Yudkowsky is basically right here and would be happy to explain why if anyone wants to discuss. I’d likewise be interested in hearing contrary perspectives.
Which of the “Reasons to expect fast takeoff” from Paul’s post do you find convincing, and what is your argument against what Paul says there? Or do you have some other reasons for expecting a hard takeoff?
I’ve seen this post of yours, but as far as I know, you haven’t said much about hard vs soft takeoff in general.
It’s a combination of not finding Paul+Katja’s counterarguments convincing (AI Impacts has a slightly different version of the post, I think of this as the Paul+Katja post since I don’t know how much each of them did), having various other arguments that they didn’t consider, and thinking they may be making mistakes in how they frame things and what questions they ask. I originally planned to write a line-by-line rebuttal of the Paul+Katja posts, but instead I ended up writing a sequence of posts that collectively constitute my (indirect) response. If you want a more direct response, I can put it on my list of things to do, haha… sorry… I am a bit overwhelmed… OK here’s maybe some quick (mostly cached) thoughts:
1. What we care about is point of no return, NOT GDP doubling in a year or whatever.
2. PONR seems not particularly correlated with GDP acceleration time or speed, and thus maybe Paul and I are just talking past each other—he’s asking and answering the wrong questions.
3. Slow takeoff means shorter timelines, so if our timelines are independently pretty short, we should update against slow takeoff. My timelines are independently pretty short. (See my other sequence.) Paul runs this argument in the other direction I think; since takeoff will be slow, and we aren’t seeing the beginnings of it now, timelines must be long. (I don’t know how heavily he leans on this argument though, probably not much. Ajeya does this too, and does it too much I think.) Also, concretely, if crazy AI stuff happens in <10 years, probably the EMH has failed in this domain and probably we can get AI by just scaling up stuff and therefore probably takeoff will be fairly fast (at least, it seems that way extrapolating from GPT-1, GPT-2, and GPT-3. One year apart, significantly qualitatively and quantitatively better. If that’s what progress looks like when we are entering the “human range” then we will cross it quickly, it seems.)
4. Discontinuities totally do sometimes happen. I think we shouldn’t expect them by default, but they aren’t super low-prior either; thus, we should do gears-level modelling of AI rather than trying to build a reference class or analogy to other tech.
5. Most of Paul+Katja’s arguments seem to be about continuity vs. discontinuity, which I think is the wrong question to be asking. What we care about is how long it takes (in clock time, or perhaps clock-time-given-compute-and-researcher-budget-X, given current and near-future ideas/algorithms) for AI capabilities to go from “meh” to “dangerous.” THEN once we have an estimate of that, we can use that estimate to start thinking about whether this will happen in a distributed way across the whole world economy, or in a concentrated way in a single AI project, etc. (Analogy: We shouldn’t try to predict greenhouse gas emissions by extrapolating world temperature trends, since that gets the causation backwards.)
6. I think the arguments Paul+Katja make aren’t super convincing on their own terms. They are sufficient to convince me that the slow takeoff world they describe is possible and deserves serious consideration (more so than e.g. Age of Em or CAIS) but not overall convincing enough for me to say “Bostrom and Yudkowsky were probably wrong.” I could go through them one by one but I think I’ll stop here for now.
Thanks! My understanding of the Bostrom+Yudkowsky takeoff argument goes like this: at some point, some AI team will discover the final piece of deep math needed to create an AGI; they will then combine this final piece with all of the other existing insights and build an AGI, which will quickly gain in capability and take over the world. (You can search “a brain in a box in a basement” on this page or see here for some more quotes.)
In contrast, the scenario you imagine seems to be more like (I’m not very confident I am getting all of this right): there isn’t some piece of deep math needed in the final step. Instead, we already have the tools (mathematical, computational, data, etc.) needed to build an AGI, but nobody has decided to just go for it. When one project finally decides to go for an AGI, this EMH failure allows them to maintain enough of a lead to do crazy stuff (conquistadors, persuasion tools, etc.), and this leads to DSA. Or maybe the EMH failure isn’t even required, just enough of a clock time lead to be able to do the crazy stuff.
If the above is right, then it does seem quite different from Paul+Katja, but also different from Bostrom+Yudkowsky, since the reason why the outcome is unipolar is different. Whereas Bostrom+Yudkowsky say the reason one project is ahead is because there is some hard step at the end, you instead say it’s because of some combination of EMH failure and natural lag between projects.
Ah, this is helpful, thanks—I think we just have different interpretations of Bostrom+Yudkowsky. You’ve probably been around since before I was, and read more of their stuff, but I first got interested in this around 2013, pre-ordered Superintelligence and read it with keen interest, etc., and the scenario you describe as mine is what I always thought Bostrom+Yudkowsky believed was most likely. The scenario you describe as theirs—involving “deep math” and “one hard step at the end”—is something I thought they held up as an example of how things could be super fast, but not as what they actually believed was most likely.
From what I’ve read, Yudkowsky did seem to think there would be more insights and less “just make blob of compute bigger” about a decade or two ago, but he’s long since updated towards “dear lord, people really are just going to make big blobs of inscrutable matrices, the fools!” and I don’t think this counts as a point against his epistemics because predicting the future is hard and most everyone else around him did even worse, I’d bet.
Ok I see, thanks for explaining. I think what’s confusing to me is that Eliezer did stop talking about the deep math of intelligence sometime after 2011 and then started talking about big blobs of matrices as you say starting around 2016, but as far as I know he has never gone back to his older AI takeoff writings and been like “actually I don’t believe this stuff anymore; I think hard takeoff is actually more likely to be due to EMH failure and natural lag between projects”. (He has done similar things for his older writings that he no longer thinks are true, so I would have expected him to do the same for takeoff stuff if his beliefs had indeed changed.) So I’ve been under the impression that Eliezer actually believes his old writings are still correct, and that somehow his recent remarks and old writings are all consistent. He also hasn’t (as far as I know) written up a more complete sketch of how he thinks takeoff is likely to go given what we now know about ML. So when I see him saying things like what’s quoted in Rob’s OP, I feel like he is referring to the pre-2012 “deep math” takeoff argument. (I also don’t remember if Bostrom gave any sketch of how he expects hard takeoff to go in Superintelligence; I couldn’t find one after spending a bit of time.)
If you have any links/quotes related to the above, I would love to know!
(By the way, I was a lurker on LessWrong starting back in 2010-2011, but was only vaguely familiar with AI risk stuff back then. It was only around the publication of Superintelligence that I started following along more closely, and only much later in 2017 that I started putting in significant amounts of my time into AI safety and making it my overwhelming priority. I did write several timelines though, and recently did a pretty thorough reading of AI takeoff arguments for a modeling project, so that is mostly where my knowledge of the older arguments comes from.)
For all I know you are right about Yudkowsky’s pre-2011 view about deep math. However, (a) that wasn’t Bostrom’s view AFAICT, and (b) I think that’s just not what this OP quote is talking about. From the OP:
It’s Yudkowsky/Bostrom, not Yudkowsky. And it’s WFLLp1, not p2. Part 2 is the one where the AIs do a treacherous turn; part 1 is where actually everything is fine except that “you get what you measure” and our dumb obedient AIs are optimizing for the things we told them to optimize for rather than for what we want.
I am pretty confident that WFLLp1 is not the main thing we should be worrying about; WFLLp2 is closer, but even it involves this slow-takeoff view (in the strong sense, in which the economy is growing fast before the point of no return) which I’ve argued against. I do not think that the reason people shifted from “Yudkowsky/Bostrom” (which in this context seems to mean “single AI project builds AI in the wrong way, AI takes over world”) to WFLLp1 is that people rationally considered all the arguments and decided that WFLLp1 was on balance more likely. I think instead that probably some sort of optimism bias was involved, and more importantly a win-by-default dynamic (Yud + Bostrom stopped talking about their scenarios and arguing for them, whereas Paul wrote a bunch of detailed posts laying out his scenarios and arguments, and so in the absence of visible counterarguments Paul wins the debate by default). Part of my feeling about this is that it’s a failure on my part; when Paul+Katja wrote their big post on takeoff speeds I disagreed with it and considered writing a big point-by-point response, but never did, even after various people posted questions asking “has there been any serious response to Paul+Katja?”
Re (a): I looked at chapters 4 and 5 of Superintelligence again, and I can kind of see what you mean, but I’m also frustrated that Bostrom seems really non-committal in the book. He lists a whole bunch of possibilities but then doesn’t seem to actually come out and give his mainline visualization/”median future”. For example he looks at historical examples of technology races and compares how much lag there was, which seems a lot like the kind of thinking you are doing, but then he also says things like “For example, if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs.” which sounds like the deep math view. Another relevant quote:
Re (b): I don’t disagree with you here. (The only part that worries me is, I don’t have a good idea of what percentage of “AI safety people” shifted from one view to the other, whether there were also new people with different views coming in to the field, etc.) I realize the OP was mainly about failure scenarios, but it did also mention takeoffs (“takeoffs won’t be too fast”) and I was most curious about that part.
I also wish I knew what Bostrom’s median future was like, though I perhaps understand why he didn’t put it in his book—the incentives all push against it. Predicting the future is hard and people will hold it against you if you fail, whereas if you never try at all and instead say lots of vague prophecies, people will laud you as a visionary prophet.
Re (b) cool, I think we are on the same page then. Re takeoff being too fast—I think a lot of people these days think there will be plenty of big scary warning shots and fire alarms that motivate lots of people to care about AI risk and take it seriously. I think that suggests that a lot of people expect a fairly slow takeoff, slower than I think is warranted. Might happen, yes, but I don’t think Paul & Katja’s arguments are that convincing that takeoff will be this slow. It’s a big source of uncertainty for me though.
I’d personally like to find some cruxes between us some time, though I don’t yet know the best format to do that. I think I’ll wait to see your responses to Issa’s question first.
Likewise! I’m up for a video call if you like. Or we could have a big LW thread, or an email chain. I think my preference would be a video call. I like Walled Garden, we could do it there and invite other people maybe. IDK.
Did this ever happen?
I don’t think so? It’s possible that it did and I forgot.
A belief can be a negation in the sense of a contradiction, whilst not being a negation in the sense of a disproof. I don’t think EY disproved RH’s position. I don’t think he is confident he did himself, since his summary was called “what I believe if not why I believe it”. And I don’t think lack of time was the problem, since the debate was immense.
Interesting, yeah I wonder why he titled it that. Still though it seems like he is claiming here to have disproved RH’s position to some extent at least. I for one think RH’s position is pretty implausible, for reasons Yudkowsky probably mentioned (I don’t remember exactly what Yud said).
Why the “seems”? A master rationalist should be able to state things clearly, surely?
Rolf Degen, summarizing part of Barbara Finlay’s “The neuroscience of vision and pain”:
From the paper:
Finlay’s original article on the topic: “The pain of altruism”.
[Epistemic status: Thinking out loud]
If the evolutionary logic here is right, I’d naively also expect non-human animals to suffer more to the extent they’re (a) more social, and (b) better at communicating specific, achievable needs and desires.
There are reasons the logic might not generalize, though. Humans have fine-grained language that lets us express very complicated propositions about our internal states. That puts a lot of pressure on individual humans to have a totally ironclad, consistent “story” they can express to others. I’d expect there to be a lot more evolutionary pressure to actually experience suffering, since a human will be better at spotting holes in the narratives of a human who fakes it (compared to, e.g., a bonobo trying to detect whether another bonobo is really in that much pain).
It seems like there should be an arms race across many social species to give increasingly costly signals of distress, up until the costs outweigh the amount of help they can hope to get. But if you don’t have the language to actually express concrete propositions like “Bob took care of me the last time I got sick, six months ago, and he can attest that I had a hard time walking that time too”, then those costly signals might be mostly or entirely things like “shriek louder in response to percept X”, rather than things like “internally represent a hard-to-endure pain-state so I can more convincingly stick to a verbal narrative going forward about how hard-to-endure this was”.
[Epistemic status: Piecemeal wild speculation; not the kind of reasoning you should gamble the future on.]
Some things that make me think suffering (or ‘pain-style suffering’ specifically) might be surprisingly neurologically conditional and/or complex, and therefore more likely to be rare in non-human animals (and in subsystems of human brains, in AGI subsystems that aren’t highly optimized to function as high-fidelity models of humans, etc.):
1. Degen and Finlay’s social account of suffering above.
2. Which things we suffer from seems to depend heavily on mental narratives and mindset. See, e.g., Julia Galef’s Reflections on Pain, from the Burn Unit.
Pain management is one of the main things hypnosis appears to be useful for. Ability to cognitively regulate suffering is also one of the main claims of meditators, and seems related to existential psychotherapy’s claim that narratives are more important for well-being than material circumstances.
Even if suffering isn’t highly social (pace Degen and Finlay), its dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere: complex things are relatively unlikely a priori, are especially hard to evolve, and demand especially strong selection pressure if they’re to evolve and if they’re to be maintained.
(Note that suffering introspectively feels relatively basic, simple, and out of our control, even though it’s not. Note also that what things introspectively feel like is itself under selection pressure. If suffering felt complicated, derived, and dependent on our choices, then the whole suite of social thoughts and emotions related to deception and manipulation would be much more salient, both to sufferers and to people trying to evaluate others’ displays of suffering. This would muddle and complicate attempts by sufferers to consistently socially signal that their distress is important and real.)
3. When humans experience large sudden neurological changes and are able to remember and report on them, their later reports generally suggest positive states more often than negative ones. This seems true of near-death experiences and drug states, though the case of drugs is obviously filtered: the more pleasant and/or reinforcing drugs will generally be the ones that get used more.
Sometimes people report remembering that a state change was scary or disorienting. But they rarely report feeling agonizing pain, and they often either endorse having had the experience (with the benefit of hindsight), or report having enjoyed it at the time, or both.
This suggests that humans’ capacity for suffering (especially more ‘pain-like’ suffering, as opposed to fear or anxiety) may be fragile and complex. Many different ways of disrupting brain function seem to prevent suffering, suggesting suffering is the more difficult and conjunctive state for a brain to get itself into; you need more of the brain’s machinery to be in working order in order to pull it off.
4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don’t think I’ve ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.
This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it’s interesting that scary dreams are nonetheless common. This again seems to point toward ‘states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.’
Devoodooifying Psychology says “the best studies now suggest that the placebo effect is probably very weak and limited to controlling pain”.
How is the signal being kept “costly/honest” though? Is the pain itself the cost? That seems somewhat weird …
Facebook comment I wrote in February, in response to the question ‘Why might having beauty in the world matter?’:
I assume you’re asking about why it might be better for beautiful objects in the world to exist (even if no one experiences them), and not asking about why it might be better for experiences of beauty to exist.
[… S]ome reasons I think this:
1. If it cost me literally nothing, I feel like I’d rather there exist a planet that’s beautiful, ornate, and complex than one that’s dull and simple—even if the planet can never be seen or visited by anyone, and has no other impact on anyone’s life. This feels like a weak preference, but it helps get a foot in the door for beauty.
(The obvious counterargument here is that my brain might be bad at simulating the scenario where there’s literally zero chance I’ll ever interact with a thing; or I may be otherwise confused about my values.)
2. Another weak foot-in-the-door argument: People seem to value beauty, and some people claim to value it terminally. Since human value is complicated and messy and idiosyncratic (compare person-specific ASMR triggers or nostalgia triggers or culinary preferences) and terminal and instrumental values are easily altered and interchanged in our brain, our prior should be that at least some people really do have weird preferences like that at least some of the time.
(And if it’s just a few other people who value beauty, and not me, I should still value it for the sake of altruism and cooperativeness.)
3. If morality isn’t “special”—if it’s just one of many facets of human values, and isn’t a particularly natural-kind-ish facet—then it’s likelier that a full understanding of human value would lead us to treat aesthetic and moral preferences as more coextensive, interconnected, and fuzzy. If I can value someone else’s happiness inherently, without needing to experience or know about it myself, it then becomes harder to say why I can’t value non-conscious states inherently; and “beauty” is an obvious candidate. My preferences aren’t all about my own experiences, and they aren’t simple, so it’s not clear why aesthetic preferences should be an exception to this rule.
4. Similarly, if phenomenal consciousness is fuzzy or fake, then it becomes less likely that our preferences range only and exactly over subjective experiences (or their closest non-fake counterparts). Which removes the main reason to think unexperienced beauty doesn’t matter to people.
Combining the latter two points, and the literature on emotions like disgust and purity which have both moral and non-moral aspects, it seems plausible that the extrapolated versions of preferences like “I don’t like it when other sentient beings suffer” could turn out to have aesthetic aspects or interpretations like “I find it ugly for brain regions to have suffering-ish configurations”.
Even if consciousness is fully a real thing, it seems as though a sufficiently deep reductive understanding of consciousness should lead us to understand and evaluate consciousness similarly whether we’re thinking about it in intentional/psychologizing terms or just thinking about the physical structure of the corresponding brain state. We shouldn’t be more outraged by a world-state under one description than under an equivalent description, ideally.
But then it seems less obvious that the brain states we care about should exactly correspond to the ones that are conscious, with no other brain states mattering; and aesthetic emotions are one of the main ways we relate to things we’re treating as physical systems.
As a concrete example, maybe our ideal selves would find it inherently disgusting for a brain state that sort of almost looks conscious to go through the motions of being tortured, even when we aren’t the least bit confused or uncertain about whether it’s really conscious, just because our terminal values are associative and symbolic. I use this example because it’s an especially easy one to understand from a morality- and consciousness-centered perspective, but I expect our ideal preferences about physical states to end up being very weird and complicated, and not to end up being all that much like our moral intuitions today.
Addendum: As always, this kind of thing is ridiculously speculative and not the kind of thing to put one’s weight down on or try to “lock in” for civilization. But it can be useful to keep the range of options in view, so we have them in mind when we figure out how to test them later.
Somewhat more meta level: Heuristically speaking, it seems wrong and dangerous for the answer to “which expressed human preferences are valid?” to be anything other than “all of them”. There’s a common pattern in metaethics which looks like:
1. People seem to have preference X
2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.
3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.
In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.
Old discussion of this on LW: https://www.lesswrong.com/s/fqh9TLuoquxpducDb/p/synsRtBKDeAFuo7e3
Collecting all of the quantitative AI predictions I know of MIRI leadership making on Arbital (let me know if I missed any):
Aligning an AGI adds significant development time: Eliezer 95%
Almost all real-world domains are rich: Eliezer 80%
Complexity of value: Eliezer 97%, Nate 97%
Distant superintelligences can coerce the most probable environment of your AI: Eliezer 66%
Meta-rules for (narrow) value learning are still unsolved: Eliezer 95%
Natural language understanding of “right” will yield normativity: Eliezer 10%
Relevant powerful agents will be highly optimized: Eliezer 75%
Some computations are people: Eliezer 99%, Nate 99%
Sufficiently optimized agents appear coherent: Eliezer 85%
Some caveats:
Arbital predictions range from 1% to 99%.
I assume these are generally ~5 years old. Views may have shifted.
By default, I assume that the standard caveats for probabilities like these apply: I treat these as off-the-cuff ass numbers unless stated otherwise, products of ‘thinking about the problem on and off for years and then querying my gut about what it expects to actually see’, more so than of building Guesstimate models or trying too hard to make sure all the probabilities are perfectly coherent.
Inconsistencies are flags ‘something is wrong here’, but ass numbers are vague and unreliable enough that they’re to be expected to some degree. Similarly, ass numbers are often unstable hour-to-hour and day-to-day.
On my model, the point of ass numbers isn’t to demand perfection of your gut (e.g., of the sort that would be needed to avoid multiple-stage fallacies when trying to conditionalize a lot), but to:
Communicate with more precision than English-language words like ‘likely’ or ‘unlikely’ allow. Even very vague or uncertain numbers will, at least some of the time, be a better guide than natural-language terms that weren’t designed to cover the space of probabilities (and that can vary somewhat in meaning from person to person).
At least very vaguely and roughly bring your intuitions into contact with reality, and with each other, so you can more readily notice things like ‘I’m miscalibrated’, ‘reality went differently than I expected’, ‘these two probabilities don’t make sense together’, etc.
It may still be a terrible idea to spend too much time generating ass numbers, since “real numbers” are not the native format human brains compute probability with, and spending a lot of time working in a non-native format may skew your reasoning.
(Maybe there’s some individual variation here?)
But they’re at least a good tool to use sometimes, for the sake of crisper communication, calibration practice (so you can generate non-awful future probabilities when you need to), etc.
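The calibration-practice point can be made concrete: given a log of past ass numbers and their outcomes, a few lines of code will tell you whether your 80%s actually come true about 80% of the time. A minimal sketch, with an invented prediction log for illustration:

```python
# Minimal calibration check: bucket past probability estimates and compare
# each bucket's stated probability to the observed frequency of the event.
# The prediction log below is invented for illustration.
from collections import defaultdict

predictions = [  # (stated probability, did the event happen?)
    (0.95, True), (0.9, True), (0.8, True), (0.8, False),
    (0.7, True), (0.6, False), (0.3, False), (0.1, False),
]

buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[round(p, 1)].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: happened {freq:.0%} of the time (n={len(outcomes)})")

# Brier score: mean squared error of the probabilities (lower is better;
# always guessing 50% scores 0.25).
brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```

Even this crude bucketing surfaces the two failure modes mentioned above: ‘I’m miscalibrated’ shows up as buckets whose observed frequency drifts from the stated probability, and ‘reality went differently than I expected’ shows up as a high Brier score.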
Suppose most people think there’s a shrew in the basement, and Richard Feynman thinks there’s a beaver. If you’re pretty sure it’s not a shrew, two possible reactions include:
- ‘Ah, the truth is probably somewhere in between these competing perspectives. So maybe it’s an intermediate-sized rodent, like a squirrel.’
- ‘Ah, Feynman has an absurdly good epistemic track record, and early data does indicate that the animal’s probably bigger than a shrew. So I’ll go with his guess and say it’s probably a beaver.’
But a third possible response is:
- ‘Ah, if Feynman’s right, then a lot of people are massively underestimating the rodent’s size. Feynman is a person too, and might be making the same error (just to a lesser degree); so my modal guess will be that it’s something bigger than a beaver, like a capybara.’
In particular, you may want to go more extreme than Feynman if you think there’s something systematically causing people to underestimate a quantity (e.g., a cognitive bias—the person who speaks out first against a bias might still be affected by it, just to a lesser degree), or systematically causing people to make weaker claims than they really believe (e.g., maybe people don’t want to sound extreme or out-of-step with the mainstream view).
This is true! But I think it’s important to acknowledge that this depends a lot on details of Feynman’s reasoning process, and it doesn’t go in a consistent direction. If Feynman is aware of the bias, he may have already compensated for it in his own estimate, so compensating on his behalf would be double-counting the adjustment. And sometimes the net incentive is to overestimate, not to underestimate, because you’re trying to sway the opinion of averagers, or because being more contrarian gets attention, or because shrew-thinkers feel like an outgroup.
In the end, you can’t escape from detail. But if you were to put full power into making this heuristic work, the way to do it would be to look at past cases of Feynman-vs-world disagreement (broadening the “Feynman” and “world” categories until there’s enough training data), and try to get a distribution empirically.
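The empirical version of the heuristic can be sketched in a few lines. Every number below is made up for illustration: given hypothetical past cases where we know the crowd’s estimate, the contrarian’s estimate, and the eventual truth, we measure how far the truth typically overshot the contrarian, and apply that ratio to the new disagreement:

```python
# Hypothetical sketch of the "go more extreme than Feynman" heuristic:
# fit the typical overshoot from past contrarian-vs-crowd disagreements.
# All data here is invented for illustration.
import statistics

# (crowd estimate, contrarian estimate, true value) for past disagreements
past_cases = [
    (10, 40, 60),
    (5, 20, 25),
    (100, 300, 600),
    (2, 10, 12),
]

# How much did the truth overshoot the contrarian, as a ratio?
overshoot = [truth / contrarian for _, contrarian, truth in past_cases]
typical = statistics.median(overshoot)

# New disagreement: the crowd says shrew-sized, the contrarian says
# beaver-sized (~20 kg). Extrapolate past the contrarian.
contrarian_estimate = 20.0
adjusted = contrarian_estimate * typical
print(f"median overshoot ratio: {typical:.2f}")
print(f"adjusted estimate: {adjusted:.1f} kg")
```

The median of the overshoot ratios, rather than the mean, keeps one wildly wrong past case from dominating the adjustment; and if the fitted ratio comes out below 1, the data is telling you the contrarian tends to overcorrect and you should land between the crowd and the contrarian instead.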
Endorsed!
Have you seen this ever work for an advance prediction? It seems like you need to be in a better epistemic position than Feynman, which is pretty hard.
From Twitter:
I replied:
Chana Messinger, replying to Brandon Bradford:
It’s a conspiracy, man. You gotta pay for water that’s just sitting there in the ground. It’s the corporations tricking people into thinking they gotta drink water that came from their pipes because it’s been “sanitized”. Wake up sheeple, they’re sanitizing your brains!!!
And don’t get me started on showing up later for work! We all know I could be there in just 15 minutes if there were no traffic, but you see the corporations make more money the longer you drive your car, so they make sure everyone has to be at work at the same time so there’s a traffic jam. Then we all use more fuel, end up buying fast food and other junk because we lost so much time in traffic, and gotta put in extra effort at work because the boss is always mad about how we’re showing up late. And to top it all off, all those exhaust fumes are causing ozone holes and global warming which is just what they want because then they can use it to sell you more stuff like sunscreen or to control your life to fight “climate change”.
As for the microwave? Well, let’s just say it’s not a coincidence they blow up if you put tin foil in them. Me? I’m keeping my tin foil right where intended: on my head to block out the thought rays and subliminal messages sent out by their so-called “internet”.
From an April 2019 Facebook discussion:
Rob Bensinger: avacyn:
… I love how EA does veganism / animal welfare things. It’s really good.
(From the comment section on https://forum.effectivealtruism.org/posts/TyLxMrByKuCmzZx6b/reasons-to-eat-meat)
[… Note that in posting this I’m not intending] to advocate for a specific intervention; it’s more that it makes me happy to see thorough and outside-the-box reasoning from folks who are trying to help others, whether or not they have the same background views as me.
Jonathan Salter: Even if this line of reasoning might technically be correct in a narrow, first-order-effects type of way, my intuition tells me that that sort of behaviour would lessen EA’s credibility when pushing animal welfare messages, and that spreading general anti-speciesist norms and values is more important in the long run. Just my two cents though.
Rob Bensinger: My model of what EA should be shooting for is that it should establish its reputation as
‘that group of people that engages in wonkish analyses and debates of moral issues at great length, and then actually acts on the conclusions they reach’
‘that group of people that does lots of cost-benefit analyses and is willing to consider really counter-intuitive concerns rather than rejecting unusual ideas out of hand’
‘that group of people that seems to be super concerned about its actual impact and nailing down all the details, rather than being content with good PR or moral signaling’
I think that’s the niche EA would occupy if it were going to have the biggest positive impact in the future. And given how diverse EA is and how many disagreements there already are, the ship may have already sailed on us being able to coordinate and converge on moral interventions without any public discussion of things like wild animal suffering.
This is similar to a respect in which my views have changed about whether EAs and rationalists should become vegan en masse. In the past, I’ve given arguments like [in Inhuman Altruism and Revenge of the Meat People]:
A lot more EAs and rationalists should go vegan, because it really does seem like future generations will view 21st-century factory farming similarly to how we view 19th-century slavery today. It would be great to be “ahead of the curve” for once, and to clearly show that we’re not just ‘unusually good on some moral questions’ but actually morally exemplary in all the important ways that we can achieve.
I think this is really standing in for two different arguments:
First, a reputational argument saying ‘veganism is an unusually clear signal that we’re willing to take big, costly steps to do the right thing, and that we’re not just armchair theorists or insular contrarians; so we should put more value on paying that signal in order to convince other people that we’re really serious about this save-the-world, help-others, actually-act-based-on-the-abstract-arguments thing’.
Second, an ideal-advisor-style argument saying ‘meat-eating is probably worse than it seems, because the analytic arguments strongly support veganism but social pressure and social intuitions don’t back up those arguments, so we probably won’t emotionally feel their full moral force’.
One objection I got to the first argument is that it seems like the marginal effort and attention of a lot of EAs could save a lot more lives if it were going to things that can have global effects, rather than small-scale personal effects. The reputational argument weighs against this, but there’s a reputational argument going the other way (I believe due to Katja Grace [update: Katja Grace wrote an in-depth response at the time, but this particular argument seems to be due to Paul Christiano and Oliver Habryka]):
‘What makes EA’s brand special and distinctive, and puts us in an unusual position to have an outsized impact on the world, is that we’re the group that gets really finicky and wonkish about EV and puts its energy into the things that seem highest-EV for the world. Prioritizing personal dietary choices over other, better uses of our time, and especially doing so for reputational or signaling reasons, seems like it actively goes against that unique aspect of EA, which makes it very questionable as a PR venture in the first place.’
This still left me feeling, on a gut level, like the ‘history will view us as participants in an atrocity’ argument is a strong one—not as a reputational argument, just as an actual argument (by historical analogy) that there’s something morally wrong with participating in factory farming at all, even if we’re (in other aspects of our life) trying to actively oppose factory farming or even-larger atrocities.
Since then, a few things have made me feel like the latter argument’s force isn’t so strong. First, I’ve updated some on the object level about the probability that different species are conscious and that different species in particular circumstances have net-negative lives (though I still think there’s a high-enough-to-be-worth-massively-worrying-about probability that farmed chicken, beef, etc. are all causing immense amounts of suffering).
Second, I’ve realized that when I’ve done my ‘what would future generations think?’ ideal-advisor test in the past, I’ve actually been doing something weird. I’m taking 21st-century intuitions about which things matter most and are most emotionally salient, and projecting them forward to imagine a society where salience works the same way on the meta level, but the object-level social pressures/dynamics are a bit different. But it seems like that heuristic might have been the wrong one for past generations to use, if they wanted to make proper use of this ideal-advisor heuristic.
Jeremy Bentham’s exemplary forward-thinking moral views, for example, seem like a thing you’d achieve by going ‘imagine future generations that are just super reasonable and analytical about all these things, and view things as atrocities in proportion to what the strongest arguments say’, rather than by drawing analogies to things that present-day people find especially atrocious about the past.
(People who have read Bentham: did Bentham ever use intuition pumps like either of these? If so, did either line of thinking seem like it actually played a role in how he reached his conclusions, as opposed to being arguments for persuading others?)
Imagine instead a future society that’s most horrified, above all else, by failures of reasoning process like ‘foreseeably allocating attention and effort to something other than the thing that looks highest-EV to you’. Imagine a visceral gut-level reaction to systematic decision-making errors (that foreseeably have very negative EV) even more severe than modernity’s most negative gut-level reactions to world events. Those failures of reasoning process, after all, are much more directly in your control (and much more influencable by moral praise and condemnation) than action outcomes. That seems like a hypothetical that pushes in a pretty different direction in a lot of these cases. (And one that converges more with the obvious direct ‘just do the best thing’ argument, which doesn’t need any defense.)
I really like the FB crossposts here, and also really like this specific comment. Might be worth polishing it into a top-level post, either here or on the EA Forum sometime.
Thanks! :) I’m currently not planning to polish it; part of the appeal of cross-posting from Facebook for me is that I can keep it timeboxed by treating it as an artifact of something I already said. I guess someone else could cannibalize it into a prettier stand-alone post.
While your comment was clearly written in good faith, it seems to me like you’re missing some context. You recommend that EY recommend that the detractors read books. EY doesn’t just recommend people read books. He wrote the equivalent of like three books on the subjects relevant to this conversation in particular which he gives away for free. Also, most of the people in this conversation are already big into reading books.
It is my impression he also helped establish the Center for Applied Rationality, which has the explicit mission of training skills. (I’m not sure if he technically did but he was part of the community which did and he helped promote it in its early days.)
Eliezer was involved with CFAR in the early days, but has not been involved since at least 2016.
For being too indistinguishable from GPT-3.
From an April 2019 Facebook discussion:
Rob Bensinger:
Source: http://rationallyspeakingpodcast.org/show/rs-230-kelsey-piper-on-big-picture-journalism-covering-the-t.html
Based on https://theunitofcaring.tumblr.com/post/177842591031/what-are-your-favorite-virtues
Spencer Mulesky: Why is this good content? I’m not getting it.
Rob Bensinger: That seems hard to summarize!
The “trust your gut” portion maybe obscures the thing I think is important, because it seems more banal and specific. The important thing I think is being pointed at with “sovereignty” is more general than just “notice how you feel about things, and hone your intuitions through experience”, though that’s certainly a core thing people need to do.
One way of pointing at the more basic thing I have in mind is: by default, humans are pretty bad at being honest with themselves and others; are pretty bad at thinking clearly; are pretty bad at expressing and resolving disagreements, as opposed to conformity/mimicry or unproductive brawls; are pretty bad at taking risks and trying new things; are pretty bad at attending to argument structure, as opposed to status/authority/respectability.
We can build habits and group norms that make it a lot easier to avoid those problems, and to catch ourselves when we slip up. But this generally requires that people see past abstractions like “what I should do” and “what’s normal to do” and “what’s correct to do” and be able to observe and articulate what concrete things are going on in their head. A common thing that blocks this is that people feel like there’s something silly or illegitimate or un-objective about reporting what’s really going on inside their heads, so they feel a need to grasp at fake reasons that sound more normal/objective/impartial. Giving EAs/rationalists/etc. more social credit for something like “sovereignty”, and giving them language for articulating this ideal, is one way of trying to fight back against this epistemically and instrumentally bad set of norms and mental habits.
Spencer Mulesky: Thanks!
Rob Bensinger: It might also help to give some random examples (with interesting interconnections) where I’ve found this helpful.
‘I’m in a longstanding relationship that’s turned sour. But I feel like I can’t just leave (or make other changes to my life) because I’m not having enough fun / my life isn’t satisfying as many values as I’d like; I feel like I need to find something objectively Bad my partner has done, so that I can feel Justified and Legitimate in leaving.’ People often feel like they’re not “allowed” to take radical action to improve their lives, because of others’ seeming claims on their life.
A lot of the distinct issues raised on https://www.facebook.com/robbensinger/posts/10160749026995447, like Jessica Taylor’s worry about using moral debt as a social lever to push people around. In my experience, this is not so dissimilar from the relationship case above; people think about their obligations in fuzzy ways that make it hard to see what they actually want and easy to get trapped by others’ claims on their output.
People feel like they’re being looked down on or shamed or insufficiently socially rewarded/incentivized/respected for things about how they’re trying to do EA. Examples might include ‘starting risky projects’, ‘applying to EA jobs’, ‘applying to non-EA jobs’, ‘earning to give’, ‘not earning to give’, ‘producing ideas that aren’t perfectly vetted or write-ups that aren’t perfectly polished’. (See e.g. the comments on https://www.facebook.com/robbensinger/posts/10161249846505447; or for the latter point, https://www.lesswrong.com/posts/7YG9zknYE8di9PuFg/epistemic-tenure and a bunch of other recent writings share a theme of ‘highly productive intellectuals are feeling pressure to not say things publicly until they’re super super confident of them’).
People feel like they can’t (even in private conversation) acknowledge social status and esteem (https://www.facebook.com/robbensinger/posts/10160475363630447), respectability, or ‘Ra’-type things (https://srconstantin.wordpress.com/2016/10/20/ra), or how those phenomena affect everyone’s preferences or judgments.
Modest epistemology in Inadequate Equilibria (https://equilibriabook.com/toc/)
A lot of CFAR-style personal debugging I’ve done has depended on my ability to catch myself in the mental motion of punishing myself (or disregarding myself, etc.) on the timescale of ‘less than a second’. And then stopping to change that response or analyze why that’s happening, so I can drill down better on underlying dispositions I want to improve. Cf. https://wiki.lesswrong.com/wiki/Ugh_field and https://www.lesswrong.com/posts/5dhWhjfxn4tPfFQdi/physical-and-mental-behavior.
Rob Wiblin, August 2019:
Cf. Brienne Yudkowsky on shame and the discussion on https://www.facebook.com/robbensinger/posts/10160749026995447.
Twitter thread collecting examples of alignment research MIRI has said relatively positive things about.
Copied from some conversations on Twitter:
· · · · · · · · · · · · · · ·
Eric Rogstad: I think “illusionism” is a really misleading term. As far as I can tell, illusionists believe that consciousness is real, but has some diff properties than others believe.
It’s like if you called Einstein an “illusionist” w.r.t. space or time.
See my comments here:
https://www.lesswrong.com/posts/biKchmLrkatdBbiH8/book-review-rethinking-consciousness
Rob Bensinger: I mostly disagree. It’s possible to define a theory-neutral notion of ‘consciousness’, but I think it’s just true that ‘there’s no such thing as subjective awareness / qualia / etc.’, and I think this cuts real dang deep into the heart of what most people mean by consciousness.
Before the name illusionism caught on, I had to use the term ‘eliminativism’, but I had to do a lot of work to clarify that I’m not like old-school eliminativists who think consciousness is obviously or analytically fake. Glad to have a clearer term now.
I think people get caught up in knots about the hard problem of consciousness because they try to gesture at ‘the fact that they have subjective awareness’, without realizing they’re gesturing at something that contains massive introspective misrepresentations / illusions.
Seeing that we have to throw out a key part of Chalmers’ explanandum is an important insight for avoiding philosophical knots, even though it doesn’t much help us build a positive account. That account matters, but epistemic spring cleaning matters too.
Eric Rogstad:
@robbensinger I don’t see how this is so different from my relativity example.
The situation still seems to be that there is a real thing that we’re pointing at with “subjective awareness”, and also people have a lot of wrong beliefs about it.
I guess the question is whether consciousness is more like “space” or more like “phlogiston”.
@robbensinger If we defined a theory-neutral notion of ‘consciousness’ (to refer to whatever the thing is that causes us to talk about our experiences), would you still want to describe yourself as an ‘illusionist’ w.r.t that theory?
Keith Frankish: Yes! In fact that’s exactly what I do. (The claim isn’t that consciousness itself is illusory, only that qualia are)
Rob Bensinger: Yeah, I’m happy to define ‘consciousness’ in theory-neutral terms and say it exists in some sense. ‘Qualia are an illusion’ or ‘phenomenal consciousness is an illusion’ is more precise anyway.
I don’t care how we define ‘consciousness’. The claims I care about are:
1. Illusionism is asserting something substantive and (even among physicalists) controversial.
2. Illusionism is genuinely denying the existence of something widely seen as real and crucial (and self-evident!).
3. On the object level: phenomenal consciousness is in fact an introspective illusion.
‘Phenomenal consciousness isn’t real’, ‘phenomenal consciousness is real and reducible’, and ‘phenomenal consciousness is real and fundamental’ are three substantively different views.
Hrothgar: What’s your answer to the hard problem of consciousness?
Rob Bensinger: The hard problem makes sense, and seems to successfully do away with ‘consciousness is real and reducible’. But ‘consciousness is real and irreducible’ isn’t tenable: it either implies violations of physics as we know it (interactionism), or implies we can’t know we’re conscious (epiphenomenalism).
So we seem to be forced to accept that consciousness (of the sort cited in the hard problem) is somehow illusory. This is… very weird and hard to wrap one’s head around. But some version of this view (illusionism) seems incredibly hard to avoid.
(Note: This is a twitter-length statement of my view, so it leaves out a lot of details. E.g., I think panpsychist views must be interactionist or epiphenomenalist, in the sense that matters. But this isn’t trivial to establish.)
Hrothgar: What does “illusory” mean here? I think I’m interpreting as gesturing toward denying consciousness is happening, which is, like, the one thing that can’t even be doubted (since the experience of doubt requires a conscious experiencer in the first place)
Rob Bensinger: I think “the fact that I’m having an experience” seems undeniable. E.g., it seems to just be a fact that I’m experiencing this exact color of redness as I look at the chair next to me. There’s a long philosophical tradition of treating experience as ‘directly given’, the foundation on which all our other knowledge is built.
I find this super compelling and intuitive at a glance, even if I can’t explain how you’d actually build a brain/computer that has infallible ‘directly given’ knowledge about some of its inner workings.
But I think the arguments alluded to above ultimately force us to reject this picture, and endorse the crazy-sounding view ‘the character of my own experiences can be illusory, even though it seems obviously directly given’.
An attempt to clarify what this means: https://nothingismere.com/2017/02/23/phenomenal-consciousness-is-a-quasiperceptual-illusion-objections-and-replies/
I don’t want to endorse the obviously false claim ‘light isn’t bouncing off the chair, hitting my eyes, and getting processed as environmental information by my brain.’
My brain is tracking facts about the environment. And it can accurately model many, many things about itself!
But I think my brain’s native self-modeling gets two things wrong: (1) it models my subjective experience as a sort of concrete, ‘manifest’ inner world; (2) it represents this world as having properties that are too specific or arbitrary to logically follow from ‘mere physics’.
I think there is a genuine perception-like (not ‘hunch-like’) introspective illusion that makes those things appear to be true (to people who are decent introspectors and have thought through the implications) -- even though they’re not true. Like a metacognitive optical illusion.
And yes, this sounds totally incoherent from the traditional Descartes-inspired philosophical vantage point.
Optical illusions are fine; calling consciousness itself an illusion invites the question ‘what is conscious of this illusion?’.
I nonetheless think this weird view is right.
I want to say: There’s of course something going on here; and the things that seem present in my visual field must correspond to real things insofar as they have the potential to affect my actions. But my visual field as-it-appears-to-me isn’t a real movie screen playing for an inner Me.
And what’s more, the movie screen isn’t translatable into neural firings that encode all the ‘given’-seeming stuff. (!)
The movie screen is a lie the brain tells itself—tells itself at the sensory, raw-feel level, not just at the belief/hunch level. (Illusion, rather than delusion.)
And (somehow! this isn’t intuitive to me either!) since there’s no homunculus outside the brain to notice all this, there’s no ‘check’ on the brain forcing it to not trick itself in how it represents the most basic features of ‘experience’ to itself.
The way the brain models itself is entirely a product of the functioning of that very brain, with no law of physics or CS to guarantee the truth of anything! No matter how counter-intuitive that seems to the brain itself. (And yes, it’s still counter-intuitive to me. I wouldn’t endorse this view if I didn’t think the alternatives were even worse!)
Core argument:
1. A Bayesian view of cognition: ‘the exact redness of red’ has to cause brain changes, or our brains can’t know about it.
2. We know enough about physics to know these causes aren’t coming from outside of physics.
3. The hard problem: ‘the exact redness of red’ isn’t reducible.
Thus, ‘the exact redness of red’ must somehow not be real. Secondarily, we can circle back and consider things that help make sense of this conclusion and help show it isn’t nonsense:
4. Thinking in detail about what cognition goes on in p-zombies’ heads that makes them think there’s a hard problem.
5. Questioning the claim that (e.g.) my visual field is ‘directly given’ to me in an infallible way; questioning how you could design a computer that genuinely has infallible access to its internal states.
Hrothgar: But even if I grant that experience is illusion, the fact of ‘experiencing illusion’ is itself then undeniable. I don’t consider it a philosophical tradition, just a description of reality 🤷
Whether this reconciles with physics etc seems like a downstream problem
Reading what you wrote again, I think it’s likely I’m misunderstanding you.
What you’re saying seems crazy or nonsensical to me, and/but I’m super appreciative that you wrote this all out, and I do intend to spend more time with your words (now or later) to see if i can catch more of your drift
(I don’t claim to have it all figured out)
Rob Bensinger: Good, if it sounds crazy/nonsensical then I suspect that (a) I’ve communicated well, and (b) we share key background context: ‘why does consciousness seem obviously real?’, ‘why does the hard problem seem so hard?’, etc.
If my claims seemed obviously true, I’d be worried.
Hrothgar: I haven’t read your blog post yet, but I suppose my main objection right now is something like, “Thinking is itself sensorial in nature, & that nature precedes its content. Effectively it seems like you’re using thinking to try to refute thinking, & we get into Gödel problems”
Rob Bensinger: I agree that thinking has an (apparent) phenomenal character, like e.g. seeing.
I don’t think that per se raises a special problem. A calculator could introspect on its acts of calculating and wrongly perceive them as ‘fluffy’ or ‘flibulous’, while still getting 2+2=4 right.
Hrothgar: Why would fluffy or flibulous be wrong? I don’t see what correctness has to do with it (fluffiness is neither wrong nor right) -- where is there a logical basis to evaluate “correctness” of that which isn’t a proposition?
Rob Bensinger: If we take ‘fluffy’ literally, then the computations can’t be fluffy because they aren’t physical. It’s possible to think that some property holds of your thoughts, when it simply doesn’t.
Edit: What it implies is violations of physicalism. You can accept that physics is a map that predicts observations, without accepting that it is the map, to which all other maps must be reduced.
The epiphenomenalist worry is that, if qualia are not denied entirely, they have no causal role to play, since physical causation already accounts for everything that needs to be accounted for.
But physics is a set of theories and descriptions... a map. Usually, the ability of one map to explain something is not exclusive of another map’s ability to do so. We can explain the death of Mr Smith as the result of a bullet entering his heart, or as the result of a finger squeezing a trigger, or as a result of the insurance policy recently taken out on his life, and so on.
So why can’t we resolve the epiphenomenal worry by saying that physical causation and mental causation are just different, non-rivalrous maps? “I screamed because my pain fibres fired” alongside, not versus, “I screamed because I felt a sharp pain”. It is not the case that there is physical stuff that is doing all the causation and mental stuff that is doing none of it: rather, there is a physical view of what is going on, and a mentalistic view.
Physicalists are reluctant to go down this route, because physicalism is based on the idea that there is something special about the physical map, which means it is not just another map. This special quality means that a physical explanation excludes others, unlike a typical map. But what is it?
It’s rooted in reductionism, the idea that every other map (that is, every theory of the special sciences) can or should reduce to the physical map.
But the reducibility of consciousness is the center of the Hard Problem. If consciousness really is irreducible, and not just unreduced, then that is evidence against the reduction of everything to the physical, and, in turn, evidence against the special, exclusive nature of the physical map.
So, without the reducibility of consciousness, the epiphenomenal worry can be resolved by the two-view manoeuvre. (And without denying the very existence of qualia).
If the physics map doesn’t imply the mind map (because of the zombie argument, the Mary’s room argument, etc.), then how do you come to know about the mind map? The causal process by which you come to know the physics map is easy to understand:
What is the version of this story for the mind map, once we assume that the mind map has contents that have no causal effect on the physical world? (E.g., your mind map had absolutely no effect on the words you typed into the LW page.)
At some point you didn’t have a concept for “qualia”; how did you learn it, if your qualia have no causal effects?
At some point you heard about the zombie argument and concluded “ah yes, my mental map must be logically independent of my physical map”; how did you do that without your mental map having any effects?
I can imagine an interactionist video game, where my brain has more processing power than the game and therefore can’t be fully represented in the game itself. It would then make sense that I can talk about properties that don’t exist within the game’s engine: I myself exist outside the game universe, and I can use that fact to causally change the game’s outcomes in ways that a less computationally powerful agent could not.
Equally, I can imagine an epiphenomenal video game, where I’m strapped into a headset but forbidden from using the controls. I passively watch the events occurring in the game; but no event in the game ever reflects or takes note of the fact that I exist or have any ‘unphysical’ properties, and if there is an AI steering my avatar or camera’s behavior, the AI knows zilch about me. (You could imagine a programmer deliberately designing the game to have NPCs talk about entities outside the game world; but then the programmer’s game-transcending cognitive capacities are not epiphenomenal relative to the game.)
The thing that doesn’t make sense is to import intuitions from the interactionist game to the epiphenomenal game, while insisting it’s all still epiphenomenal.
Direct evidence. That’s the starting point of the whole thing. People think that they have qualia because it seems to them that they do.
Edit: In fact, it’s the other way round: we are always using the mind map, but we remove the subjectivity, the “warm fuzzies”, from it to arrive at the physics map. How do we know that physics is the whole story, when we start with our experience, and make a subset of it?
I’m not assuming that. I’m arguing against epiphenomenalism.
So I am saying that the mental is causal, but I am not saying that it is a kind of physical causality, as per reductive physicalism. Reductive physicalism is false because consciousness is irreducible, as you agree. Since mental causation isn’t a kind of physical causation, I don’t have to give a physical account of it.
And I am further not saying that the physical and mental are two separate ontological domains, two separate territories. I am talking about maps, not territories.
Without ontological dualism, there are no issues of overdetermination or interaction.
It’s apparently not true that 90% of startups fail. From Ben Kuhn:
According to an answer on Quora, “the real percentage of venture-backed startups that fail—as defined by companies that provide a 1X return or less to investors—has not risen above 60% since 2001” (2017 source). Other Quora answers I saw claimed numbers as high as 98% (on different definitions, like ten-year “failure” (survival?) rate), but didn’t cite their sources.
I don’t have a cite handy, as this is from memories of 2014, but when I looked into it I recall the 7-year failure rate (excluding the obvious dumb stuff like restaurants) was something like 70%. Importantly, though, that 70% number included acquisitions, so the actual failure rate was something like 60-ish.
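The definitional sensitivity above can be made concrete with a toy calculation. This is a minimal sketch with entirely hypothetical portfolio numbers (not data from any of the sources cited here), just to show how the same set of outcomes yields different “failure rates” depending on whether failure means “returned 1X or less to investors” or “not independently alive at ten years” (which lumps in acquisitions):

```python
# Hypothetical outcomes for five startups: (return multiple to investors,
# still alive as an independent company at 10 years). Illustrative only.
startups = [
    (0.0, False),   # shut down, total loss
    (0.5, False),   # shut down, partial recovery
    (1.0, False),   # wound down at break-even
    (2.0, False),   # acquired at 2X: gone as a company, but a win for investors
    (10.0, True),   # independent success
]

# Definition A: failure = returning 1X or less to investors.
rate_a = sum(mult <= 1.0 for mult, _ in startups) / len(startups)

# Definition B: failure = not independently alive at 10 years (counts acquisitions).
rate_b = sum(not alive for _, alive in startups) / len(startups)

print(f"failure rate, returns definition: {rate_a:.0%}")   # 60%
print(f"failure rate, survival definition: {rate_b:.0%}")  # 80%
```

The same portfolio looks 20 points worse under the survival definition, which is roughly the pattern in the numbers quoted above: counting acquisitions as failures inflates the rate.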
A blurb for the book “The Feeling of Value”:
My reply: Sounds descriptively false? I prefer lots of things that aren’t a matter of valenced experience.
You don’t need to have “direct experience” of all moral properties in order to be a moral realist, any more than you need direct experience of porcupines in order to be a porcupine realist. You can just acknowledge that moral knowledge is inferred indirectly, the same as our knowledge of most things. Seems like a classic case of philosophers reaching weird conclusions because they’re desperate for certainty (rather than embracing Bayesian/probabilistic ways of thinking about stuff).
Likewise, you don’t need direct experience or certainty in order to reconcile “is” and “ought”. Just accept that “ought” facts look like hypothetical imperatives that we happen to care about a lot, or look like rules-of-a-game that we happen to deeply endorse everyone always playing. No deep riddles are created by the fact that “what is a legal move in chess?” is not reducible to conjunctions of claims about our universe’s boundary conditions and laws of physics; we just treat chess-claims like math/logic claims and carry on with our lives. Treating our moral claims (insofar as they’re coherent and consistent) in the same way raises no special difficulties.
This also feels to me like an example of the common philosopher-error “notice a super important fact about morality, and rush (in your excitement) to conclude that this must therefore be the important fact about morality, the one big thing morality is About”.
Human morality is immensely complicated, because human brains are a complicated, incoherent mishmash of innumerable interacting preferences and experiences. Valenced experience indeed seems to be a super important piece of that puzzle, but we can acknowledge that without pretending that it Solves Everything or Exhausts The Phenomenon.
I suspect moral philosophy would have made a lot more progress by now if philosophers spent more of their time adding to the pool of claims about morality (so we can build a full understanding of what phenomenon we need to explain / account for in the first place), and less time trying to reduce all of those claims to a single simple principle.
In principle, I love theorizing and philosophizing about this stuff. In practice, seems like people have a strong tendency to fall in love with the first Grand Theory of Everything they discover in this domain, causing progress to stagnate (and unrealistic views to proliferate) relative to if we had more modest goals. Less “try to reduce all of morality to virtue cultivation”, more “try to marginally improve our understanding of what virtue cultivation consists in”.
Ben Weinstein-Raun wrote on social media:
I think panpsychism is outrageously false, and profoundly misguided as an approach to the hard problem. But I think I can describe it in a way that makes its appeal more obvious:
When I introspect, it seems like there’s a lot of complexity to my experiences. But it doesn’t seem like a complex fact that I’m conscious at all—the distinction between conscious and unconscious seems very basic. It feels like there’s ‘what it’s like to be an algorithm from the inside’, and then there’s the causal behavior of the algorithm, and that’s all there is to it.
And thought experiments show that ‘how an algorithm feels from inside’ can’t be exhaustively reduced to any functional/causal/‘external’ behavior. (See the knowledge argument, the zombie argument, etc.)
So maybe I should just think of ‘algorithms have an inside’ as a basic feature of the universe, an extension of ‘everything has an inside (so to speak) and the only reason I feel “special” is that I happen to be this one part of the universe’.
Panprotopsychism might make this an easier pill to swallow. We can say that electrons aren’t “conscious” in the fashion of a human; but they maybe have an “inside” (in the sense of “how an algorithm feels from inside”) in the most rudimentary and abstract way possible, and the more complex “inside” of things like humans is built up out of all these smaller, more bare-bones “insides”.
We can imagine that “how an electron feels from inside” is like an empty room. There’s an “inside”, but there’s no content to it, just empty structure. This structure can then produce some amazing things, if you arrange an enormous number of parts in just the right way; but the important thing is to start from the kind of universe that has an “inside” at all, as opposed to the zombie universe.
The problem with this view, as usual, is that we’ve assumed that the “inside” can’t causally affect the “outside”; and yet I just wrote a bunch of paragraphs about the “inside”, presumably based on some knowledge I have about that inside.
So either my paragraphs must be straightforwardly false; or I must be confused about what I’m discussing (ascribing attributes to a nonfunctional, nonphysical thing that are really just run-of-the-mill descriptions of physical facts); or my statements must be miraculously true, even though there is no explanation for why I would have any knowledge of the things I’m discussing.
Without some further argument to make sense of the miracle, I think we have to reject the miracle, even if we still feel confused about what’s actually going on with phenomenal consciousness.
What do you think of Brian Tomasik’s flavor of panpsychism, which he says is compatible with (and, indeed, follows from) type-A materialism? As he puts it,
I haven’t read Brian Tomasik’s thoughts on this, so let me know if you think I’m misunderstanding him / should read more.
The hard problem of consciousness at least gives us a prima facie reason to consider panpsychism. (Though I think this ultimately falls apart when we consider ‘we couldn’t know about the hard problem of consciousness if non-interactionist panpsychism were true; and interactionist panpsychism would mean new, detectable physics’.)
If we deny the hard problem, then I don’t see any reason to give panpsychism any consideration in the first place. We could distinguish two panpsychist views here: ‘trivial’ (doesn’t have any practical implications, just amounts to defining ‘consciousness’ so broadly as to include anything and everything); and ‘nontrivial’ (has practical implications, or at least the potential for such; e.g., perhaps the revelation that panpsychism is true should cause us to treat electrons as moral patients, with their own rights and/or their own welfare).
But I see no reason whatsoever to think that electrons are moral patients, or that electrons have any other nontrivial mental property. The mere fact that we don’t fully understand how human brains work is not a reason to ask whether there’s some new undiscovered feature of particles ~10^31 times smaller than a human brain that explains the comically larger macro-process—any more than limitations in our understanding of stomachs would be a reason to ask whether individual electrons have some hidden digestive properties.
(Brian Tomasik’s view superficially sounds a lot like what Ben Weinstein-Raun is criticizing in his second paragraph, so I thought I’d add here the comment I wrote in response to Ben’s post:
I’m not sure if I should quote Ben’s reply to me, since his post is not public, but he pretty much said that his original post was not addressing type-A physicalist panpsychism, although he finds this view unuseful for other reasons.
)
Thanks for sharing. :) Yeah, it seems like most people have in mind type-F monism when they refer to panpsychism, since that’s the kind of panpsychism that’s growing in popularity in philosophy in recent years. I agree with Rob’s reasons for rejecting that view.
There’s another theory that isn’t even on Chalmers’s list: dual aspect neutral monism.
This holds that the physical sciences are one possible map of territory which is not itself, intrinsically, physical (or, for that matter, mental). Consciousness is another map, or aspect.
This approach has the advantage of dualism, in that there is no longer a need to explain the mental in terms of the physical, to reduce it to the physical, because the physical is no longer regarded as fundamental (nor is the mental, hence the “neutral”). Although an ontological identity between the physical and mental is accepted, the epistemic irreducibility of the mental to the physical is also accepted. Physicalism, in the sense that the physical sciences have a unique and privileged explanatory role, is therefore rejected.
Nonetheless, the fact that the physical sciences “work” in many ways, that the physical map can be accurate, is retained. Moreover, since Dual Aspect theory is not fully fledged dualism, it is able to sidestep most or all of the standard objections to dualism.
To take one example, since a conscious mental state and a physical brain state are ultimately the same thing, the expected relationships hold between them. For instance, mental states cannot vary without some change in the physical state (supervenience follows directly from identity, without any special apparatus); furthermore, since mental states are ultimately identical to physical brain states, they share the causal powers of brain states (again without the need to posit special explanatory apparatus such as “psychophysical laws”), and in that way epiphenomenalism is avoided.
The more familiar kinds of dualism are substance and property dualism. Both take a physical ontology “as is” and add something extra, and both have problems with explaining how the additional substances or properties interact with physical substances and properties, and both of course have problems with ontological parsimony (Occam’s Razor).
In contrast to a substance or property, an aspect is a relational kind of thing. In Dual Aspect theory, a conscious state is interpreted as being based on the kind of relationship an entity has with itself, and the kind of interaction it has with itself. The physical is reinterpreted as a kind of interaction with and relation to the external. It is not clear whether this theory adds anything fundamentally new, ontologically, since most people will accept the existence of some kind of inner/outer distinction, although the distinction may be made to do more work in Dual Aspect theory. Reinterpreting the physical is a genuine third alternative to accepting (only) the physical, denying the physical, and supplementing the physical.
[Epistemic status: Thinking out loud, just for fun, without having done any scholarship on the topic at all.]
It seems like a lot of horror games/movies are converging on things like ‘old people’, ‘diseased-looking people’, ‘psychologically ill people’, ‘women’, ‘children’, ‘dolls’, etc. as particularly scary.
Why would that be, from an evolutionary perspective? If horror is about fear, and fear is about protecting the fearful from threats, why would weird / uncanny / out-of-evolutionary-distribution threats have a bigger impact than e.g. ‘lots of human warriors coming to attack you’ or ‘a big predator-looking thing stalking you’, which are closer to the biggest things you’d want to worry about in our environment of evolutionary adaptedness? Why are shambling, decrepit things more of a horror staple than big bulky things with claws?
(I mean, both are popular, so maybe this isn’t a real phenomenon. I at least subjectively feel as though those uncanny things are scarier than super-lions or super-snakes.)
Maybe we should distinguish between two clusters of fear-ish emotions:
Terror. This is closer to the fight-or-flight response of ‘act quick because you’re in imminent danger’. It’s a panicky ‘go go go go go!!’ type of feeling, like when a jumpscare happens or when you’re running from a monster in a game.
Dread. This is more like feeling freaked out or creeped out, and it can occur alongside terror, or it can occur separately. It seems to be triggered less by ‘imminent danger’ than by ambiguous warning signs of danger.
So, a first question is why uncanny, mysterious, ‘unnatural’ phenomena often cause the most dread, even though they thereby become less similar to phenomena that actually posed the largest dangers to us ancestrally. (E.g., big hulking people with giant spears or snakes/dragons or werewolves correlate more with ‘things dangerous to our ancestors’ than decrepit zombies. Sure, creepiness maybe requires that the threat be ‘ambiguous’, but then why isn’t an ambiguous shadow of a maybe-snake or maybe-hulking-monster creepier than an obviously-frail/sickly monster?)
Plausibly part of the answer is that more mysterious, inexplicable phenomena are harder to control, and dread is the brain’s way of saying something like ‘this situation looks hard to control in a way that makes me want to avoid situations like this’.
Terror-inspiring things like jumpscares have relatively simple triggers corresponding to a relatively simple response—usually fleeing. Dread-inspiring things like ‘the local wildlife has gotten eerily quiet’ have subtler and more context-sensitive triggers corresponding to responses like ‘don’t necessarily rush into any hasty action, but do pay extra close attention to your environment, and if you can do something to get away from the stimuli that are giving you these unpleasant uneasy feelings, maybe prioritize doing that’.
A second question is why horror films and games seem to be increasingly converging on the creepy/uncanny/mysterious cluster of things, rather than on the overtly physically threatening cluster—assuming this is a real trend. Some hypotheses about the second question:
A: Horror games and movies are increasingly optimizing for dread instead of terror these days, maybe because it’s novel—pure terror feels overdone and out-of-fashion. Or because dread just lends itself to a more fun multi-hour viewing/playing experience, because it’s more of a ‘slow burn’. Or something else.
B: Horror games aren’t optimizing for dread to the exclusion of terror; rather, they’ve discovered that dread is a better way to maximize terror.
Why would B be true?
One just-so story you could tell is that humans have multiple responses to possible dangers, ranging from ‘do some Machiavellian scheming to undermine a political rival’ to ‘avoid eating that weird-smelling food’ to ‘be cautious near that precipice’ to ‘attack’ to ‘flee’. Different emotions correspond to different priors on ‘what reaction is likeliest to be warranted here?’, and different movie genres optimize for different sets of emotions. And optimizing for a particular emotion usually involves steering clear of things that prime a person to experience a different emotion—people want a ‘purer’ experience.
So one possibility is: big muscular agents, lion-like agents, etc. are likelier to be dangerous (in reality) than a decrepit corpse or a creepy child or a mysterious frail woman; but the correct response to hulking masculine agents is much more mixed between ‘fight / confront’ and ‘run away / avoid’, whereas the correct response to situations that evoke disgust, anxiety, uncertainty, and dread is a lot more skewed toward ‘run away / avoid’. And an excess of jumpscare-ish, heart-pounding terror does tend to incline people more toward running away than toward fighting back, so it might be that both terror and dread are better optimized in tandem, while ‘fight back’ partly competes with terror.
On this view, ‘ratchet up the intensity of danger’ matters less for fear intensity than ‘eliminate likely responses to the danger other than being extra-alert or fleeing’.
… Maybe because movie/game-makers these days just find it really easy to max out our danger-intensity detectors regardless? Pretty much everything in horror movies is pretty deadly relative to the kind of thing you’d regularly encounter in the ancestral environment, and group sizes in horror contexts tend to be smaller than ancestral group sizes.
People who want to enjoy the emotions corresponding purely to the ‘fight’ response might be likelier to watch things like action movies. And indeed, action movies don’t make much use of jumpscares or terror (though they do like tension and adrenaline-pumping intensity).
Or perhaps there’s something more general going on, like:
Hypothesis C: Dread increases ‘general arousal / sensitivity to environmental stimuli’, and then terror can piggy-back off of that and get bigger scares.
Perhaps emotions like ‘disgust’ and ‘uncertainty’ also have this property, hence why horror movies often combine dread, disgust, and uncertainty with conventional terror. In contrast, hypothesis B seems to suggest that we should expect disgust and terror to mostly show up in disjoint sets of movies/games, because the correct response to ‘disease-ish things’ and the correct response to ‘physical attackers’ is very different.
The wiki glossary for the sequences / Rationality: A-Z ( https://wiki.lesswrong.com/wiki/RAZ_Glossary ) is updated now with the glossary entries from the print edition of vol. 1-2.
New entries from Map and Territory:
New entries from How to Actually Change Your Mind:
A bunch of other entries from the M&T and HACYM glossaries were already on the wiki; most of these have been improved a bit or made more concise.
This reminds me of something I’ve been meaning to ask:
Last I checked, the contents of the Less Wrong Wiki were licensed under the GNU Free Documentation License, which is… rather inconvenient. Is it at all possible to re-license it (ideally as CC BY-NC-SA, to match R:AZ itself)?
(My interest in this comes from the fact that the Glossary is mirrored on ReadTheSequences.com, and I’d prefer not to have to deal with two different licenses, as I currently have to.)
I can reach out to Trike Apps about this, but can we actually do this? Seems plausible that we would have to ask for permission from all editors involved in a page before we can change the license.
I have no idea; I cannot claim to really understand the GFDL well enough to know… but if doable, this seems worthwhile, as there’s a lot of material on the wiki which I and others could do various useful/interesting things with, if it were released under a convenient license.
Are there any other OK-quality rationalist glossaries out there? https://wiki.lesswrong.com/wiki/Jargon is the only one I know of. I vaguely recall there being one on http://www.bayrationality.com/ at some point, but I might be misremembering.
https://namespace.obormot.net/Jargon/Jargon
Fantastic!
It’s optimized on a *very* different axis, but there’s the Rationality Cardinality card database.
That counts! :) Part of why I’m asking is in case we want to build a proper LW glossary, and Rationality Cardinality could at least provide ideas for terms we might be missing.
Jeffrey Ladish asked on Twitter:
I replied:
FOOM is a synonym for intelligence explosion, based on the analogy where an AGI recursively self-improving to superintelligence is like a nuclear pile going critical.
Sometimes people also talk about already-pretty-smart-and-impactful AI “going FOOM”, which I take to mean that they’re shooting off to even higher capability levels.
Jeffrey replied:
To which Nate Soares responded:
I suspect there’s a school of thought for which “singularity” was massively overoptimistic—is this what you mean by Kurzweilian magical thinking? That it’s a transition in a very short period of time from scarcity-based capitalism to post-scarcity utopia. Rather than a simple destruction of most of humanity, and of the freedom and value of those remaining.
No, that part of Kurzweil’s view is 100% fine. In fact, I believe I expect a sharper transition than Kurzweil expects. My objection to Kurzweil’s thinking isn’t ‘realistic mature futurists are supposed to be pessimistic across the board’, it’s specific unsupported flaws in his arguments:
Rejection of Eliezer’s five theses (which were written in response to Kurzweil): intelligence explosion, orthogonality, convergent instrumental goals, complexity of value, fragility of value.
Mystical, quasi-Hegelian thinking about surface trends like ‘economic growth’. See the ‘Actual Ray Kurzweil’ quote in https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works.
Otherwise weird and un-Bayesian-sounding attitudes toward forecasting. Seems to think he has a crystal ball that lets him exactly time tech developments, even where he has no model of a causal path by which he could be entangled with evidence about that future development...?
From Facebook:
Mark Norris Lance: [...] There is a long history of differential evaluation of actions taken by grassroots groups and similar actions taken by elites or those in power. This is evident when we discuss violence. If a low-power group places someone under their control it is kidnapping. If they assess their crimes or punish them for it, it is mob justice or vigilanteism. [...]
John Maxwell: Does the low power group in question have a democratic process for appointing judges who then issue arrest warrants?
That’s a key issue for me… “Mob rule” is bad because the process mobs use to make their judgements are bad. Doubly so if the mob attacks anyone who points that out.
A common crime that modern mobs accuse people of is defending bad people. But if people can be convicted of defending bad people, that corrupts the entire justice process, because the only way we can figure out if someone really is bad is by hearing what can be said in their defense.
From https://twitter.com/JonHaidt/status/1166318786959609856:
Yeah, I’m an EA: an Estimated-as-Effective-in-Expectation (in Excess of Endeavors with Equivalent Ends I’ve Evaluated) Agent with an Audaciously Altruistic Agenda.
This is being cute, but I do think parsing ‘effective altruist’ this way makes a bit more sense than tacking on the word ‘aspiring’ and saying ‘aspiring EA’. (Unless you actually are a non-EA who’s aspiring to become one.)
I’m not an ‘aspiring effective altruist’. It’s not that I’m hoping to effectively optimize altruistic goals someday. It’s that I’m already trying to do that, but I’m uncertain about whether I’m succeeding. It’s an ongoing bet, not an aspiration to do something in the future.
‘Aspiring rationalist’ is better, but it feels at least a little bit artificial or faux-modest to me—I’m not aspiring to be a rationalist, I’m aspiring to be rational. I feel like rationalism is weight-training, and rationality is the goal.
If people are unhealthy, we might use ‘health-ism’ to refer to a community or a practice for improving health.
If everyone is already healthy, it seems fine to say they’re healthy but weird to say ‘they’re healthists’. Why is it an ism? Isn’t it just a fact about their physiology?
How would you feel about the creation of a Sequence of Shortform Feeds? (Including this one?) (Not a mod.)
Sure
I can’t speak for Rob but I’d be fine with my own shortform feed being included.
In the context of a conversation with Balaji Srinivasan about my AI views snapshot, I asked Nate Soares what sorts of alignment results would impress him, and he said: