How will you make money in the future to pay back the loan?
Because I’ll have a job again? I have actually had paid employment before, and I don’t anticipate that the need to earn money will vanish from my life. The question is whether I’ll move a few steps up the economic food chain, either because I find a niche where I can do my thing, or because I can’t stand poverty any more and decide to do something that pays well. If I move up, repayment will be faster, if I don’t, it will be slower, but either way it will happen.
Why aren’t you doing that now, even on a part-time basis?
This is the culmination of a period in which I stopped trying to find compromise work and just went for broke. I’ve crossed all sorts of boundaries in the past few weeks, as the end neared and I forced more from myself. That will have been worth it, no matter what happens now.
Is there one academic physicist who will endorse your specific research agenda as worthwhile?
Well, let’s start with the opposite: go here for an ex-academic telling me that one part of my research agenda is not worthwhile. Against that, you might want to look at the reception of my “questions” on the Stack Exchange sites—most of those questions are actually statements of ideas, and they generally (though not always) get a positive reception.
Now if you’re talking about the big agenda I outlined in my first post in this series, there are clear resemblances between elements of it and work that is already out there. You don’t have to look far to find physicists who are interested in physical ontology or in quantum brain theories—though I think most of them are on the wrong track, and the feeling might be mutual. But yes, I can think of various people whose work is similar enough to what I propose that they might endorse that part of the picture.
Likewise for an academic philosopher?
David Chalmers is best known for talking about dualism, but he’s flagged a new monism as an option worth exploring. We have an exchange of views on his blog here.
Likewise for anyone other than yourself?
Let’s see if anyone else has surfaced, by the time we’re done here.
Why won’t physicists doing ordinary physics (who are more numerous, have higher ability, and have a better track record of productivity) solve your problems in the course of making better predictive models?
Well, let’s see what subproblems I have which might be solved by a physicist. There’s the search for the right quantum ontology, and then there’s the search for quantum effects in the brain. Although most physicists take a positivist attitude and dismiss concerns about what’s really there, there are plenty of people working on quantum ontology. In my opinion, there are new technical developments in QFT, mentioned at the end of my first post, which make all the work on quantum ontology to date a sort of “prehistory”, conducted in ignorance of very important new facts about alternative representations of QFT that are now being worked out by specialists. Just being aware of these new developments gives me a slight technical edge, though of course the word will spread.
As for the quantum brain stuff, there’s lots of room for independent new inquiries there. I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.
How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?
OK, I no longer know what you’re referring to by “particular piece of work” and “larger interests”. Do you mean, how would the discovery of a consciousness-friendly quantum ontology be relevant to Friendly AI?
Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.
If I ever go to work at Google it won’t be to live off the proceeds afterwards, it will be because it’s relevant to artificial intelligence. Of course you’re right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.
ETA: The indent turned every question into question “1”, so I removed the numbers.
Chalmers’ short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.
Of course you’re right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.
To be frank, the Outside View says that most people who have achieved little over many years of work will achieve little in the next few months. Many of them have trouble with time horizons, lack of willpower, or other problems that sabotage their efforts systematically, or prefer to indulge other desires rather than work hard. These things would hinder both scientific research and paid work. Refusing to self-finance with a lucrative job, combined with the absence of any impressive work history (that you have made clear in the post I have seen) is a bad signal about your productivity, your reasons for asking us for money, and your ability to eventually pay it back.
the attempt to do the most important thing right now
No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI? Which came first, your desire to think about quantum consciousness theories or an interest in safe AI? It seems like a huge stretch.
I’m sorry to be so blunt, but if you’re going to be asking for money on Less Wrong you should be able to answer such questions.
Chalmers’ short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.
There is no existing recommendation for my specific research program because I haven’t gone looking for one. I thought I would just work on it myself, finish a portion of it myself, and present that to the world, along with the outline of the rest of the program.
Refusing to self-finance with a lucrative job … is a bad signal
“Lucrative” is a weakness in your critique. I’m having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science. People who really want to do something tend to have trouble doing something else in its place.
Of course you’re correct that if someone wants to achieve big things, but has failed to do so thus far, there are reasons. One of my favorite lines from Bruce Sterling’s Schismatrix talks about the Superbrights, who were the product of an experiment in genetically engineering IQs above 200, as favoring “wild schemes, huge lunacies that in the end boiled down to nothing”. I spent my 20s trying to create a global transhumanist utopia in completely ineffective ways (which is why I can now write about the perils of utopianism with some conviction), and my 30s catching up on a lot of facts about the world. I am surely a case study in something, some type of failed potential, though I don’t know what exactly. I would even think about trying to extract the lessons from my own experience, and that of similar people like Celia Green and Marni Sheppeard, so that others don’t repeat my mistakes.
But throughout those years I also thought a great deal about the ontology of quantum mechanics and the ontology of consciousness. I could certainly have written philosophical monographs on those subjects. I could do so now, except that I now believe that the explanation of quantum mechanics will be found through the study of our most advanced theories, and not in reasoning about simple models, so the quick path to enlightenment turns out to lead up the slope of genuine particle physics. Anyway, perhaps the main reason I’m trying to do this now is that I have something to contribute and I don’t see anyone else doing it.
No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI?
See the first post in this series. You need to know how consciousness works if you are going to correctly act on values that refer to conscious beings. If you were creating a transhuman AI, and couldn’t even see that colors actually exist, you would clearly be a menace on account of having no clue about the reality of consciousness. Your theoretical a priori, your intellectual constructs, would have eclipsed any sensitivity to the phenomenological and ontological facts.
The issue is broader than just the viability of whatever ideas SIAI has about outsourcing the discovery of the true ontology of consciousness to an AI. We live in a culture possessing powerful computational devices that are interfacing with, and substituting for, human beings, in a plethora of ways. Human subjectivity has always understood itself through metaphor, and a lot of the metaphors are now coming from I.T. There is a synergy between the advance of computer power and the advance of this “computerized” subjectivity, that has trained itself to see itself as a computer. Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That’s an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.
I don’t want to declare the impact of computers on human self-understanding as unconditionally negative, not at all; but it has led to a whole new type of false consciousness, a new way of “eclipsing the lifeworld”, and the only way to overcome that problem rationally and knowingly (it could instead be overcome by violent emotional luddism), is to transcend these incomplete visions of reality—find a deeper one which makes their incompleteness manifest.
Which came first, your desire to think about quantum consciousness theories or an interest in safe AI?
Quantum consciousness. I was thinking about that a few years before Eliezer showed up in the mid-1990s. There have been many twists and turns since then.
Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That’s an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.
Imagine you have signed up to have your brain scanned by the most advanced equipment available in 2045. You sit in a tube and close your eyes while the machine recreates all the details of your brain: its connectivity, electromagnetic fields and electrochemical gradients, and transient firing patterns.
The technician says, “Okay, you’ve been uploaded, the simulation is running.”
“Excellent,” you respond. “Now I can interact with the simulation and prove that it doesn’t have qualia.”
“Hold on, there,” says the technician. “You can’t interact with it yet. The nerve impulses from your sensory organs and non-cranial nerves are still being recorded and used as input for the simulation, so that we can make sure it’s a stable duplicate. Observe the screen.”
You turn and look at the screen, where you see an image of yourself, seen from a camera floating above you, turned to face the screen. The screen is hanging from a bright green wall.
“That’s me,” you say. “Where’s the simulation?” As you watch, you verify this, because the image of you on the screen says those words along with you.
“That is the simulation, on the monitor,” reasserts the technician.
You are somewhat taken aback, but not entirely surprised. It is a high-fidelity copy of you and it’s being fed the same inputs as you. You reach up to scratch your ear and notice that the you on the monitor mirrors this action perfectly. He even has the same bemused expression on his face as he examines the monitor, in his simulated world, which he doesn’t realize is just showing him an image of himself, in a dizzying hall-of-mirrors effect.
“The wall is green,” you say. The copy says this along with you in perfect unison. “No, really. It’s not just how the algorithm feels from the inside. Of course you would say it’s green,” you say, pointing at the screen, just as the simulation points at the screen and says what you are saying in the same tone, “but you’re saying it for entirely different reasons than I am.”
The technician tsks. “Your brain state matches that of the simulation to within an inordinately precise tolerance. He’s not thinking anything different from what you’re thinking. Whatever internal mental representation leads you to insist that the wall is green is identical to the internal mental representation that leads him to say the same.”
“It can’t be,” you insist. “It doesn’t have qualia, it’s just saying that because its brain is wired up the same way as mine. No matter what he says, his experience of color is distinct from mine.”
“Actually, we de-synchronized the inputs thirty seconds ago. You’re the simulation, that’s the real you on the monitor.”
Yes, I’ve read this in science fiction too. Do you want me to write my own science fiction in opposition to yours, about the monadic physics of the soul and the sinister society of the zombie uploads? It would be a stirring tale about the rise of a culture brainwashed to believe that simulations of mind are the same as the real thing, and the race against time to prevent it from implementing its deluded utopia by force.
Telling a vivid little story about how things would be if so-and-so were true, is not actually an argument in favor of the proposition. The relevant LW buzzword is fictional evidence.
Yes, I think your writing a “counter-fiction” would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it. I admit this is a fact about my own state of knowledge, and I would like it if you could at least show me an example of a fictional universe where you were proven right, as I have shown an account of a fictional universe where you are proven wrong.
I don’t intend for the story to serve as any kind of evidence, but I did intend for it to serve as an argument. If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a “mind” outside of the mechanistic brain? If it turns out that humans and their simulations both behave and think in exactly the same fashion?
Again, it’s not fictional evidence, it’s me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.
it’s not fictional evidence, it’s me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.
If it’s a question of why I believe what I do, the starting point is described here, in the section on being wrong about one’s phenomenology. Telling me that colors aren’t there at any level is telling me that my color phenomenology doesn’t exist. That’s like telling you, not just that you’re not having a conversation on lesswrong, but that you are not even hallucinating the occurrence of such a conversation. There are hard limits to the sort of doubt one can credibly engage in about what is happening to oneself at the level of appearance, and the abolition of color lies way beyond those limits, out in the land of “what if 2+2 actually equals 5?”
The next step is the insistence that such colors are not contained in physical ontology, and so a standard materialism is really a dualism, which will associate colors (and other ingredients of experience) with some material entity or property, but which cannot legitimately identify them with it. I think that ultimately this is straightforward—the mathematical ontologies of standard physics are completely explicit, it’s obvious what they’re made of, and you just won’t get color out of something like a big logical conjunction of positional properties—but the arguments are intricate because every conceivable attempt to avoid that conclusion is deployed. So if you want arguments for this step, I’m sure you can find them in Chalmers and other philosophers.
Then there’s my personal alternative to dualism. The existence of an alternative, as a palpable possibility if not a demonstrated reality, certainly helps me in my stubbornness. Otherwise I would just be left insisting that phenomenal ontology is definitely different to physical ontology, and historically that usually leads to advocacy of dualism, though starting in the late 19th century you had people talking about a monistic alternative—“panpsychism”, later Russell’s “neutral monism”. There’s surely an important issue buried here, something about the capacity of people to see that something is true, though it runs against their other beliefs, in the absence of an alternative set of beliefs that would explain the problematic truth. It’s definitely easier to insist on the reality of an inconvenient phenomenon when you have a candidate explanation; but one would think that, ideally, this shouldn’t be necessary.
your writing a “counter-fiction” would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it.
It shouldn’t require a story to convey the idea. Or rather, a story would not be the best vehicle, because it’s actually a mathematical idea. You would know that when we look at the world, we see individuated particles, but at the level of quantum wavefunctions we have not just a wavefunction per particle, but entangled wavefunctions for several particles at once, which can’t be factorized into a “product” of single-particle wavefunctions (instead, such entangled wavefunctions are sums, superpositions, of distinct product wavefunctions). One aspect of the dispute about quantum reality is whether the supreme entangled wavefunction of the universe is the reality, whether it’s just the particles, or whether it’s some combination. But we could also speculate that the reality is something in between—that reality consists of lots of single particles, and then occasional complex entities which we would currently call entangled sets of particles. You could write down an exact specification of such an ontology; it would be a bit like what they call an objective-collapse ontology, except that the “collapses” or “quantum jumps” are potentially between entangled multiparticle states, not just localized single-particle states.
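To put the standard distinction in symbols (this is just textbook notation, not anything specific to my proposal): a factorizable two-particle state versus an entangled one looks like

\[
|\Psi_{\text{product}}\rangle = |\psi\rangle_1 \otimes |\phi\rangle_2
\qquad \text{versus} \qquad
|\Psi_{\text{entangled}}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_1 |{\downarrow}\rangle_2 - |{\downarrow}\rangle_1 |{\uparrow}\rangle_2\bigr),
\]

and the entangled state cannot be written as $|\psi\rangle_1 \otimes |\phi\rangle_2$ for any choice of single-particle states; it exists only as a superposition of distinct product states.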
My concept is that the self is a single humongous “multi-particle state” somewhere in the brain, and the “lifeworld” (mentioned at the start of this post) is wholly contained within that state. This way we avoid the dualistic association between conscious state and computational state, in favor of an exact identity between conscious state and physical state. The isomorphism does not involve, on the physical side, a coarse-grained state machine, so here it can be an identity. When I’m not just defending the reality of color (etc) and the impossibility of identifying it with functional states, this is the model that I’m elaborating.
So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor’s dynamics in an entity which ontologically consists of a zillion of the simple tensor factors (a classical computer). In other words, whether the state machine is realized within a single tensor factor or a distributed causal network of them, is what determines whether it is conscious or not.
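Schematically, and purely as illustrative notation of my own (not an established formalism), the contrast in the counter-fiction would be

\[
|\Psi_{\text{quantum brain}}\rangle = |\Phi_{\text{self}}\rangle \otimes |\chi_1\rangle \otimes |\chi_2\rangle \otimes \cdots
\qquad \text{versus} \qquad
|\Psi_{\text{classical computer}}\rangle \approx \bigotimes_{i=1}^{N} |\chi_i\rangle ,
\]

where $|\Phi_{\text{self}}\rangle$ is one large, irreducibly entangled tensor factor and the $|\chi_i\rangle$ are a vast number of effectively independent simple factors. The same state machine realized in the left-hand structure would be conscious; realized in the right-hand structure, it would not.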
If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a “mind” outside of the mechanistic brain?
I was asked this some time ago—if I found myself to be an upload, as in your scenario, how would that affect my beliefs? To the extent that I believed what was going on, I would have to start considering Chalmers-style dualism.
So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor’s dynamics in an entity which ontologically consists of a zillion of the simple tensor factors
I’m more interested in the part in the fiction where the heroes realize that the people they’ve lived with their whole lives in their revealed-to-be-dystopian future, who’ve had an upload brain prosthesis after some traumatic injury or disease, are actually p-zombies. How do they find this out, exactly? And how do they deal with there being all these people, who might be their friends, parents or children, who are on all social and cognitive accounts exactly like them, who they are now convinced lack a subjective experience?
Let’s see what we need to assume for such a fictional scenario. First, we have (1) functionally successful brain emulation exists, at a level where the emulation includes memory and personality. Then I see a choice between (2a) the world is still run by human beings, and (2b) the world has powerful AI. Finally, we have a choice between (3a) there has been no discovery of a need for quantum neuroscience yet, and (3b) a quantum neuroscience exists, but a quantum implementation of the personal state machine is not thought necessary to preserve consciousness.
In my opinion, (1) is in tension with (3a) and even with (2a). Given that we are assuming some form of quantum-mind theory to be correct, it seems unlikely that you could have functionally adequate uploads of whole human beings, without this having already been discovered. And having the hardware and the models and the brain data needed to run a whole human sim, should imply that you are well past the threshold of being able to create AI that is nonhuman but with human intellectual potential.
So by my standards, the best chance to make the story work is the combination of (1) with (3b), and possibly with (2b) also. The (2b) scenario might be set after a “semi-friendly” singularity, in which an Iain M. Banks Culture-like existence for humanity has been created, and the science and technology for brain prostheses have been developed by AIs. Since the existence of a world-ruling friendly super-AI (a “Sysop”) raises so many other issues, it might be better to think in terms of an “Aristoi”-like world where there’s a benevolent human ruling class who have used powerful narrow AI to produce brain emulation technology and other boons to humanity, and who keep a very tight control on its spread. The model here might be Vinge’s Peace Authority, a dictatorship under which the masses have a medieval existence and the rulers have the advanced technology, which they monopolize for the sake of human survival.
However it works, I think we have to suppose a technocratic elite who somehow know enough to produce working brain prostheses, but not enough to realize the full truth about consciousness. They should be heavily reliant on AI to do the R&D for them, but they’ve also managed to keep the genie of transhuman AI trapped in its box so far. I still have trouble seeing this as a stable situation—e.g. a society that lasts for several generations, long enough for a significant subpopulation to consist of “ems”. It might help if we are only dealing with a small population, either because most of humanity is dead or most of them are long-term excluded from the society of uploads.
And even after all this world-building effort, I still have trouble just accepting the scenario. Whole brain emulation good enough to provide a functionally viable copy of the original person implies enormously destabilizing computational and neuroscientific advances. It’s also not something that is achieved in a single leap; to get there, you would have to traverse a whole “uncanny valley” of bad and failed emulations.
Long before you faced the issue of whether a given implementation of a perfect emulation produced consciousness, you would have to deal with subcultures who believed that highly imperfect emulations are good enough. Consider all the forms of wishful thinking that afflict parents regarding their children, and people who are dying regarding their prospects of a cure, and so on; and imagine how those tendencies would interact with a world in which a dozen forms of brain-simulation snake-oil are on the market.
Look at the sort of artificial systems which are already regarded by some people as close to human. We already have people marrying video game characters, and aiming for immortality via “lifebox”. To the extent that society wants the new possibilities that copies and backups are supposed to provide, it will not wait around while technicians try to chase down the remaining bugs in the emulation process. And what if some of your sims, or the users of brain prostheses, decide that what the technicians call bugs are actually features?
So this issue—autonomous personlike entities in society, which may or may not have subjective experience—is going to be upon us before we have ems to worry about. A child with a toy or an imaginary friend may speak very earnestly about what its companion is thinking or feeling. Strongly religious people may also have an intense imaginative involvement, a personal relationship, with God, angels, spirits. These animistic, anthropomorphizing tendencies are immediately at work whenever there is another step forward in the simulation of humanity.
At the same time, contemporary humans now spend so much time interacting via computer that they have begun to internalize many of the concepts and properties of software and computer networks. It therefore becomes increasingly easy to create a nonhuman intelligent agent which passes for an Internet-using human. A similar consideration will apply to neurological prostheses: before we have cortical prostheses based on a backup of the old natural brain, we will have cortical prostheses which are meant to be augmentations, and so the criterion of whether even a purely restorative cortical prosthesis is adequate will increasingly be based on the cultural habits and practices of people who have been using cortical prostheses for augmentation.
There’s quite a bit of least-convenient-possible-world intent in the thought experiment. Yes, I’m assuming that things are run by humans, and that transhuman or nonhuman AIs are either successfully not pursued or not achievable with anything close to the effort required for ems, and are therefore still in the future. I’m assuming that the ems are made using advanced brain scanning and extensive brute-force reverse engineering with narrow AIs, with people in charge who don’t actually understand the brain well enough to build one from scratch themselves. And I’m assuming that strong social taboos of the least convenient possible world prevent running ems in anything but biological or extremely lifelike artificial bodies, at the same subjective speed as biological humans, with no rampant copying that would destabilize society and lead to a whole new set of thought-experimental problems.
The wishful-thinking, not-really-there ems are a good point, but again, the least convenient possible world would probably be really fixated on the idea that the brain is the self, so its culture just rejects stuff like lifeboxes as cute art projects and goes straight for whole brain emulation, with any helpful attempts to fudge broken output with a pattern-matching chatbot AI being actively looked out for, quickly discovered, and leading to swift disgrace for the shortcut-using researchers. It is possible that things would still lead to some kind of wishful-thinking outcome, but getting to the stage where the brain emulation produces actions recognizable as resulting from the personality and memories of the person it was taken from, without any cheating obvious to the researcher (such as a lifebox pattern matcher), sounds like it should be pretty far along toward being the real thing, given that the in-brain encoding of the personality and memories would be pretty much a complete black box.
There’s still a whole load of unknown unknowns about ways things could go wrong at this point, but it looks a lot more like the “NOW what do we do?” situation I was after in the grandparent post than the admittedly likely-in-our-world scenario of people treating lifebox chatbots as their dead friends.
I’m having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science.
Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.
If I ever go to work at Google it won’t be to live off the proceeds afterwards, it will be because it’s relevant to artificial intelligence.
Supposing you can work at Google, why not? Beggaring yourself looks unlikely to maximize total productivity over the next few years, which seems like the timescale that counts.
I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.
Topological quantum computers are more robust than old-fashioned quantum computers, but they aren’t that much more robust, and they have their own host of issues. People likely aren’t looking into this because a) it seems only marginally more reasonable than the more common version of microtubules doing quantum computation, and b) it isn’t clear how one would go about testing any such idea.
One possible counter-fiction would have an ending similar to the bad ending of Three Worlds Collide.
Jeff Hawkins.
Oh. Lubos Motl. Considering he seems to spend his time doing things like telling Scott Aaronson that Scott doesn’t understand quantum mechanics, I don’t think Motl’s opinion should actually have much weight in this sort of context.
In the article I linked, Lubos is expressing a common opinion about the merit of such formulas. It’s a deeper issue than just Lubos being opinionated.
If I had bothered to write a paper, back when I was thinking about anyons in microtubules, we would know a lot more by now about the merits of that idea and how to test it. There would have been a response and a dialogue. But I let it go and no-one else took it up on the theoretical level.
Do you have any suggested method for testing the microtubule claim?
From my perspective, first you should just try to find a biophysically plausible model which contains anyons. Then you can test it, e.g. by examining optical and electrical properties of the microtubule.
People measure such properties already, so we may already have relevant data, even evidence. There is a guy in Japan (Anirban Bandyopadhyay) who claims to have measured all sorts of striking electrical properties of the microtubule. When he talks about theory, I just shake my head, but that doesn’t tell us whether his data is good or bad.
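For what it’s worth, the minimal formal content of “a model which contains anyons” is just the standard exchange-statistics condition (a textbook fact, not anything specific to the microtubule): exchanging two identical excitations multiplies the wavefunction by a phase other than the bosonic or fermionic values,

\[
\Psi(\mathbf{r}_2, \mathbf{r}_1) = e^{i\theta}\, \Psi(\mathbf{r}_1, \mathbf{r}_2), \qquad \theta \neq 0, \pi ,
\]

which is only possible for quasiparticles confined to an effectively two-dimensional system; and for actual topological quantum computation one would want the non-abelian generalization, where exchange acts as a unitary matrix on a degenerate state space rather than as a simple phase. That is the property a biophysically plausible microtubule model would have to exhibit before the question of testing even arises.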