I’m Rob Long. I work on AI consciousness and related issues.
fixed the “Samberg” typo—thanks!
Thanks for your thoughts! I think I’m having a bit of trouble unpacking this. Can you help me unpack this sentence:
“But I our success rides on overcoming these arguments and designing AI where more is better.”
What is “more”? And what are “these arguments”? And how does this sentence relate to the question of whether explaining the data makes us place more or less weight on similar-to-introspection hypotheses?
You might be interested in this post by Harri Besceli, which argues that “the best and worst experiences you had last week probably happened when you were dreaming”.
Eric Schwitzgebel has also written that philosophical hedonists, if consistent, would care more about the quality of dream experiences: https://schwitzsplinters.blogspot.com/2012/04/how-much-should-you-care-about-how-you.html
Even if we were able to get good readings from insula & cingulate cortex & amygdala et alia, do you have thoughts on how and whether we could “ground” these readings? Would we calibrate on someone’s cringe signal, then their gross signal, then their funny signal—matching various readings to various stimuli and subjective reports?
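To make my calibration question concrete, here is a toy sketch of the kind of thing I have in mind (purely illustrative: the “neural” features below are synthetic stand-ins, not real recordings, and the report labels are just the examples from above):

```python
# Toy sketch of "calibration": can features extracted from the regions of
# interest predict the reported category on held-out trials?
# All data below is synthetic stand-in data, not real neural recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = ["cringe", "gross", "funny"]        # hypothetical self-report categories

n_trials, n_features = 300, 50               # arbitrary sizes
y = rng.integers(0, len(labels), size=n_trials)
X = rng.normal(size=(n_trials, n_features))
X += np.eye(len(labels))[y] @ rng.normal(size=(len(labels), n_features))  # inject a label-dependent signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Held-out accuracy here only checks that the readings track the reports; whether that counts as “grounding” is exactly what I’m unsure about.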
Hi Steven, thanks!
On terminology, I agree.
Wait But Why, which of course is not an authoritative neuroscience source, uses “scale” to mean “how many neurons can be simultaneously recorded”. But then it says fMRI and EEG have “high scale” but “low spatial resolution”—somewhat confusing, since low spatial resolution means that fMRI and EEG don’t record any individual neurons. So my gloss is that what WBW is actually talking about is probably better called “coverage”. And then it’s best just to talk about the “number of simultaneously recorded [individual] neurons” without giving that a shorthand—and only to talk about that when we really are recording individual neurons. That’s what Stevenson and Kording (2011) do in “How advances in neural recording affect data analysis”.
-
Good call on Kernel, I’ll edit to reflect that.
-
Yep—invasive techniques are necessary—but not sufficient, as the case of ECoG shows.
I scheduled a conversation with Evan based on this post and it was very helpful. If you’re on the fence, do it! For me, it was helpful as a general career / EA strategy discussion, in addition to being useful for thinking about specifically Long-Term Future Fund concerns.
And I can corroborate that Evan is indeed not that intimidating.
“I’m tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z.”
Within the category of “recent accessible introduction to rationality”, would you recommend this Pinker book, or Julia Galef’s “Scout Mindset”? Any thoughts on the pros and cons of each, or on who would benefit more from each?
Thanks for collecting these things! I have been looking into these arguments recently myself, and here are some more relevant things:
EA forum post “A New X-Risk Factor: Brain-Computer Interfaces” (August 2020) argues for BCI as a risk factor for totalitarian lock-in.
In a comment on that post, Kaj Sotala excerpts a section of Sotala and Yampolskiy (2015), “Responses to catastrophic AGI risk: a survey”. This excerpt contains links to many other relevant discussions:
“De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a ‘pure’ AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
“The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of ‘cyborg values’ distinct from ordinary human values [290].
“Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
“Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human.”
The sources in question from the above are:
de Garis H 2005 The Artilect War: Cosmists vs Terrans (Palm Springs, CA: ETC Publications)
Kurzweil R 2001 Response to Stephen Hawking Kurzweil Accelerating Intelligence (5 September)
Sotala K and Valpola H 2012 Coalescing minds Int. J. Machine Consciousness 4 293–312
Warwick K 2003 Cyborg morals, cyborg values, cyborg ethics Ethics Inf. Technol. 5 131–7
Bostrom N 2004 The future of human evolution Death and Anti-Death vol 2: Two Hundred Years After Kant, Fifty Years After Turing ed C Tandy pp 339–71
Moravec H P 1992 Pigs in cyberspace www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/CyberPigs.html
Here’s a relevant comment on that post from Carl Shulman, who notes that FHI has periodically looked into BCI in unpublished work: “I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term)”
Thank you for writing about this. It’s a tremendously interesting issue.
I feel qualitatively more conscious, which I mean in the “hard problem of consciousness” sense of the word. “Usually people say that high-dose psychedelic states are indescribably more real and vivid than normal everyday life.” Zen practitioners are often uninterested in LSD because it’s possible to reach states that are indescribably more real and vivid than (regular) real life without ever leaving real life. (Zen is based around being totally present for real life. A Zen master meditates eyes open.) It is not unusual for proficient meditators to describe mystical experiences as at least 100× more conscious than regular everyday experience.
I’m very curious about the issue of what it means to say that one creature is “more conscious” than another—or, that one person is more conscious while meditating than while surfing Reddit. Especially if this is meant in the sense of “more phenomenally conscious”. (I take it that you do mean “more phenomenally conscious”, and that’s what you are saying by invoking the hard problem. But let me know if that’s not right). Can you say more about what you mean? Some background:
Pautz (2019) has been influential on my thinking about this kind of talk of ‘more conscious’ or ‘levels of consciousness’ or ‘degrees of consciousness’. Pautz distinguishes between many consciousness-related things that certainly do come in degrees.
On the one hand, we have certain features of the particular character of phenomenally conscious experiences:
Intensity level (193)
A whisper is less intense than a heavy metal concert; faint pink is less intense than bright red. And of course, certain pleasures and pains are more intense than others.
Complexity level
The whiff of mint is a ‘simpler’ experience than the visual experience of a bustling London street.
Determinacy level
A tomato in the center of vision is represented more determinately than a tomato in the periphery.
Access level
If you think that phenomenally conscious experiences can be more or less ‘accessed’, then there might be some experiences that are barely accessed at all, versus others that are fully accessed—e.g. something right in front of you that you are paying full attention to.
And then there is a ‘global’ feature of a creature’s phenomenal consciousness:
Richness of experiential repertoire: the ‘number’ of distinct experiences (types and tokens) the creature has the capacity to have (194). Adult humans probably have a greater richness of experiential repertoire than a worm (if indeed worms are phenomenally conscious).
In light of this, my questions for you:
Along which of these dimensions are you ‘more’ conscious when meditating? Would love to hear more. (I’m guessing: intensity, complexity, and access?)
Do you think there is some further way in which you are ‘more conscious’, that is not cashed out in these terms? (Pautz does not, and he uses this to criticize Integrated Information Theory)
Finally: this post has inspired me to be more ambitious about exploring the broader regions of consciousness space for myself. (“Our normal waking consciousness, rational consciousness as we call it, is but one special type of consciousness, whilst all about it, parted from it by the filmiest of screens, there lie potential forms of consciousness entirely different.” -William James). And for that, I am grateful.
Tons of handy stuff here, thanks!
I love the sound of Cold Turkey. I use Freedom for my computer, and I use it less than I otherwise would because of this anxious feeling, almost certainly exaggerated but still with a basis in reality, that whenever I start a full block it is a Really Big Deal and I might accidentally screw myself over—for example, if I suddenly remember I have to do something else. (Say, I’m looking for houses and it turns out I actually need to go look something up.) But with Cold Turkey, I’d just block stuff a lot more freely, without the anxiety—I’d know that if I really need something, I can unlock it. All while having the calm that comes from Twitter not being immediately accessible.
I also find the Freedom interface really terrible and that trivial inconvenience can keep me from starting blocks.
How often would you say you spend time-you-don’t-endorse after unlocking something with the N random characters? Is it pretty effective at keeping you in line?
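For anyone who hasn’t seen the feature: the mechanic, as I understand it, is roughly the following (a minimal sketch of the idea, not Cold Turkey’s actual implementation; the character count is an arbitrary choice):

```python
# Rough sketch of a "type N random characters to unlock" friction gate.
# This illustrates the general idea only, not how Cold Turkey actually works.
import random
import string

def friction_gate(n_chars: int = 40) -> bool:
    """Return True only if the user retypes a random challenge string exactly."""
    challenge = "".join(random.choices(string.ascii_letters + string.digits, k=n_chars))
    print("To unlock, type this exactly:")
    print(challenge)
    return input("> ") == challenge

if __name__ == "__main__":
    print("Unlocked." if friction_gate() else "No match; block stays on.")
```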
I enjoyed reading this and skimming through your other shortforms. I’m intrigued by this idea of using the short form as something like a journal (albeit a somewhat public facing one).
Any tips, if I might want to start doing this? How helpful have you found it? Any failure modes?
Jonathan Simon is working on such a project: “What is it like to be AlphaGo?”
[disclaimer: not an expert, possibly still confused about the Baldwin effect]
A bit of feedback on this explanation: as written, it didn’t make clear to me what makes the Baldwin effect special. “Evolution selects for genome-level hardcoding of extremely important learned lessons.” As a reader I was like: what makes this a special case? If it’s a useful lesson, then of course evolution would tend to select for knowing it innately—that does seem handy for an organism.
As I understand it, what is interesting about the Baldwin effect is that such hard coding is selected for more among creatures that can learn, and indeed because of learning. The learnability of the solution makes it even more important to be endowed with the solution. So individual learning, in this way, drives selection pressures. Dennett’s explanation emphasizes this—curious what you make of his?
https://ase.tufts.edu/cogstud/dennett/papers/baldwincranefin.htm
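In case a concrete toy helps make the “learning drives selection” point vivid: below is a rough sketch in the spirit of Hinton and Nowlan’s (1987) simulation, which (as I understand it) Dennett’s paper leans on. This is my own simplification with arbitrary parameters, not anything from your explanation or from Dennett.

```python
# Toy Baldwin-effect simulation, loosely following Hinton & Nowlan (1987).
# Target behaviour = all 20 loci set to '1'. A '0' allele is innately wrong and
# cannot be fixed; a '?' allele can be filled in by trial-and-error learning.
# Finding the full solution earlier in life gives a bigger fitness bonus, so
# selection increasingly favours genomes that hard-code more of the answer.
import math
import random

L, POP, TRIALS, GENS = 20, 1000, 1000, 30    # arbitrary parameters

def new_genome() -> str:
    return "".join(random.choices("10?", weights=[1, 1, 2], k=L))

def fitness(g: str) -> float:
    if "0" in g:
        return 1.0                           # learning can't repair a hard-wired error
    p = 0.5 ** g.count("?")                  # chance of guessing all '?' loci right in one trial
    # sample the first successful learning trial (geometric distribution)
    t = 0 if p >= 1.0 else int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    return 1.0 if t >= TRIALS else 1.0 + 19.0 * (TRIALS - t) / TRIALS

pop = [new_genome() for _ in range(POP)]
for gen in range(GENS):
    fits = [fitness(g) for g in pop]
    children = []
    for _ in range(POP):
        a, b = random.choices(pop, weights=fits, k=2)   # fitness-proportional parents
        cut = random.randrange(1, L)                    # one-point crossover
        children.append(a[:cut] + b[cut:])
    pop = children
    innate = sum(g.count("1") for g in pop) / (POP * L)
    print(f"gen {gen:2d}: fraction of innately correct alleles = {innate:.2f}")
```

If it behaves like Hinton and Nowlan’s version, the ‘0’ alleles get purged and the ‘?’ alleles shrink over generations, even though only learners ever score above baseline; that is the sense in which learning drives the hard-coding.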
I’m very intrigued by “prosthetic human voice meant for animal use”! Not knowing much about animal communication or speech in general, I don’t even know what this means. Could you say a bit more about what that would be?
Welcome, David! What sort of math are you looking to level up on? And do you know what AI safety/related topics you might explore?
Thanks for this! Those interested in the claim (which Korsgaard takes to be a deficiency of utilitarianism) that for utilitarians “people and animals don’t really matter at all; they are just the place where the valuable things happen” might want to look at Richard Yetter Chappell’s [1] paper “Value Receptacles” (pdf). It’s an exploration of what this claim could even mean, and a defense of utilitarianism in light of it.
[1] Not incidentally, a long-time effective altruist. Whose blog is great.
Interesting—what sort of thing do you use this for? What sort of thing have you done after rolling a 2?
I imagine it must be things that are in some sense ‘optional’ since (quite literally) odds are you will not end up doing it.
I’m curious about this passage:
This seems like it’s alluding to a more detailed, strongly-held, and (if correct) damning assessment of (at least early-years) effective altruism. I’d like to understand that position more. Have you written about this elsewhere?