Ok, so I was thinking more along the lines of how this all applies to the simulation argument.
As for the nonperson predicate as an actual moral imperative for us in the near future …
Well overall, I have a somewhat different perspective:
1. To some (admittedly weak) degree, we already violate the nonperson predicate today. Yes, our human minds do. But that’s a far more complex topic.
2. If you do the actual math, “a trillion half-broken souls” is pretty far into the speculative future (although it is an eventual concern). There are other ethical issues that take priority because they will come up so much sooner.
3. It’s not immediately clear at all that this is ‘wrong’, and this is tied to point 1.
Look at this another way. The whole point of simulation is accuracy. Let’s say some future AI wants to understand humanity and all of Earth, so it recreates the whole thing in a very detailed Matrix-level sim. If it keeps the sim accurate, that universe is more or less similar to one branch of the multiverse that would occur anyway.
Unless the AI simulates a worldline where it has taken some major action. Even then, it may not be unethical unless it eventually terminates the whole worldline.
So I don’t mean to brush the ethical issues under the rug completely, but they clearly are complex.
Another important point: since accurate simulation is necessary for hyperintelligence, this sets up a conflict where ethics which say “don’t simulate intelligent beings” cripple hyper-intelligence.
Evolution will strive to eliminate such ethics eventually, no matter what we currently think. At the moment, I tend to favor ethics that are compatible with or derived from evolutionary principles.
Evolution can only work if there is variation and selection amongst competition. If a single AI undergoes an intelligence explosion, it would have no competition (barring aliens for now), would not die, and would not modify its own value system, except in ways in accordance with its value system. What it wants will be locked in.
As we are entities currently near the statuses of “immune from selection” and “able to adjust our values according to our values”, we also ought to further lock in our current values and our process by which they could change. Probably by creating a superhuman AI that we are certain will try to do that. (Very roughly speaking.)
We should certainly NOT leave the future up to evolution. Firstly because ‘selection’ of human-level-or-greater beings is a bad thing, but chiefly because evolution will almost certainly leave something that wants things we do not want in charge.
We are under no rationalist obligation to value survivability for survivability’s sake. We should value the survivability of things which carry forward other desirable traits.
Evolution can only work if there is variation and selection amongst competition
Yes, variation and selection are the fundamentals of systemic evolution. Without variation and selection, you have stasis. Variation and selection are constantly at work even within minds themselves, as long as we are learning. Systemic evolution is happening everywhere at all scales at all times, to varying degrees.
If a single AI undergoes an intelligence explosion, it would have no competition (barring aliens for now), would not die, and would not modify its own value system, except in ways in accordance with its value system. What it wants will be locked in.
I find almost every aspect of this unlikely:
- A single AI undergoing an intelligence explosion is unrealistic (physics says otherwise).
- There is always competition eventually (planetary, galactic, intergalactic?).
- I also don’t even give much weight to ‘locked-in values’.
As we are entities currently near the statuses of “immune from selection” …
Nothing is immune to selection. Our thoughts themselves are currently evolving, and without such variation and selection, science itself wouldn’t work.
We should certainly NOT leave the future up to evolution.
Perhaps this is a difference of definition, but to me that sounds like saying “we should certainly NOT leave the future up to the future time evolution of the universe”.
Not to say we shouldn’t control the future, but rather to say that even in doing so, we are still acting as agents of evolution.
We are under no rationalist obligation to value survivability for survivability’s sake. We should value the survivability of things which carry forward other desirable traits.
Of course. But likewise, we couldn’t easily (nor would we want to) lock in our current knowledge (culture, ethics, science, etc etc) into some sort of stasis.
What does physics say about a single entity doing an intelligence explosion?
In the event of alien competition, our AI should weigh our options according to our value system.
Under what conditions will a superintelligence alter its value system except in accordance with its value system? Where does that motivation come from? If a superintelligence prefers its values to be something else, why would it not change its preferences?
If it does, and the new preferences cause it to again want to modify its preferences, and so on again, will some sets of initial preferences yield stable preferences? Or must all agents have preferences that would cause them to modify their preferences if possible?
Science lets us modify our beliefs in an organized and more reliable way. It could in principle be the case that a scientific investigation leads you to the conclusion that we should use other different rules, because they would be even better than what we now call science. But we would use science to get there, or whatever our CURRENT learning method is. Likewise we should change our values according to what we currently value and know.
We should design AI such that if it determines that we would consider ‘personal uniqueness’ extremely important if we were superintelligent, then it will strongly avoid any highly accurate simulations, even if that costs some accuracy. (Unless outweighed by the importance of the problem it’s trying to solve.)
If we DON’T design AI this way, then it will do many things we wouldn’t want, well beyond our current beliefs about simulations.
What does physics say about a single entity doing an intelligence explosion?
A great deal. I discussed this in another thread, but one of the constraints of physics tells us that the maximum computational efficiency of a system, and thus its intelligence, is inversely proportional to its size (radius/volume). So it’s extraordinarily unlikely, near zero probability I’d say, that you’ll have some big global distributed brain with a single thread of consciousness—the speed of light just kills that. The ‘entity’ would need to be a community (which certainly still can be coordinated entities, but it’s fundamentally different from a single unified thread of thought).
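For concreteness, here is a rough back-of-envelope sketch (my own illustrative sizes, not from the thread) of the speed-of-light point: even with signals at light speed, a planet-spanning mind pays tens of milliseconds per one-way hop, while a chip-scale mind pays fractions of a nanosecond.

```python
# Back-of-envelope: one-way signal delay across "minds" of different sizes,
# assuming signals propagate at the speed of light (an optimistic upper bound).
C = 299_792_458.0  # speed of light, m/s

sizes_m = {
    "1 cm chip": 0.01,
    "10 m machine room": 10.0,
    "planet-wide brain (Earth diameter)": 12_742_000.0,
}

for name, diameter in sizes_m.items():
    delay_ms = diameter / C * 1e3
    print(f"{name:35s} one-way delay ~ {delay_ms:.3e} ms")

# The planet-wide case pays ~42 ms per one-way hop. Any single unified thread of
# thought that must synchronize across the whole system is throttled by that hop.
```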
Moreover, I believe the likely scenario is evolutionary:
The evolution of AGIs will follow a progression that goes from simple AGI minds (like those we have now in some robots) up to increasingly complex variants and finally up to human-equivalent and human-surpassing. But all throughout that time period there will be many individual AGIs, created by different teams, companies, and even nations, thinking in different languages, created for various purposes, and nothing like a single global AI mind. And these AGIs will be competing with both themselves and humans—economically.
I agree with most of the rest of your track of thought—we modify our beliefs and values according to our current beliefs and values. But as I said earlier, it’s not static. It’s also not even predictable. It’s not even possible, in principle, to fully predict your own future state. This, to me, is perhaps the final nail in the coffin for any ‘perfect’ self-modifying FAI theory.
Moreover, I also find it highly unlikely that we will ever be able to create a human level AGI with any degree of pre-determined reliability about its goal system whatsoever.
I find it more likely that the AGIs we end up creating will have to learn ethics, morality, etc—their goal systems cannot be hard coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop.
In other words, friendliness is not an inherent property of AGI designs—it’s not something you can design into the algorithms themselves. The algorithms for an AGI give you something like an infant brain—it’s just a canvas, it’s not even a mind yet.
I find it more likely that the AGIs we end up creating will have to learn ethics, morality, etc—their goal systems cannot be hard coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop.
On what basis will they learn? You’re still starting out with an initial value system and process for changing the value system, even if the value system is empty. There is no reason to think that a given preference-modifier will match humanity’s. Why will they find “Because that hurts me” to be a valid point? Why will they return kindness with kindness?
You say the goal systems can’t be designed in, why not?
It may be the case that we will have a wide range of semi-friendly subhuman or even near-human AGIs. But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own?
I am positive that ‘single entity’ should not have mapped to ‘big distributed global brain’.
But I also think an AIXI like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.
On what basis will they learn? You’re still starting out with an initial value system and process for changing the value system, even if the value system is empty.
They will have to learn by amassing a huge amount of observations and interactions, just as human infants do, and just as general agents do in AI theory (such as AIXI).
Human brains are complex, but very little of that complexity is actually precoded in the DNA. For humans, values, morals, and high-level goals are all learned knowledge, and have varied tremendously over time and cultures.
Why will they return kindness with kindness?
Well, if you raised the AI as such, it would.
Consider that a necessary precursor of following the strategy ‘returning kindness with kindness’ is understanding what kindness itself actually is. If you actually map out that word, you need a pretty large vocabulary to understand it, and eventually that vocabulary rests on grounded verbs and nouns. And to understand those, they must be grounded on a vast pyramid of statistical associations acquired from sensorimotor interaction (unsupervised learning, a.k.a. experience). You can’t program in this knowledge. There’s just too much of it.
From my understanding of the brain, just about every concept has (or can potentially have) an associated hidden emotional context of “rightness” and “wrongness”. Those concepts (good, bad, yes, no) are some of the earliest grounded concepts, and the entire moral compass is not something you add later; it is concomitant with early development and language acquisition.
Will our AIs have to use such a system as well?
I’m not certain, but it may be such a nifty, powerful trick that we end up using it anyway. And even if there is another way to do it that is still efficient, it may be that you can’t really understand human languages unless you also understand the complex web of value. If nothing else, this approach certainly gives you control over the developing AI’s value system. It appears that for human minds the value system is immensely complex—it is intertwined at a fundamental level with the entire knowledge base—and is inherently memetic in nature.
But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own?
What is an AGI? It is a computer system (hardware), some algorithms/code (which it is always eventually better to encode directly in hardware, for a ~1000x performance increase), and data (learned knowledge). The mind part—all the qualities of importance—comes solely from the data.
So the ‘programming’ of the AI is not that distinguishable from the hardware design. I think AGIs will speed this up, but not nearly as dramatically as people here think. Remember, humans don’t design new computers anymore anyway. Specialized simulation software does the heavy lifting—and it is already the bottleneck. An AGI would not be better than this specialized software at its task (generalized vs specialized). It will almost certainly be able to improve it some, but only to the theoretical limits, and we are probably already close enough to them that this improvement will be minor.
AGIs will have a speedup effect on Moore’s Law, but I wouldn’t be surprised if this just ends up compensating for the increased difficulty going forward as we approach quantum limits and molecular computing.
In any case, we are simulation bound already and each new generation of processors designs (through simulation) the next. The ‘FOOM’ has already begun—it began decades ago.
But I also think an AIXI like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.
Well I’m pretty certain that AIXI-like algorithms aren’t going to be directly useful—perhaps not ever—more as a sort of endpoint on the map.
But that’s beside the point.
If you actually use even a more practical form of that general model—a single distributed AI with a single reward function and decision system—I can show you how terribly that scales. Your distributed AI with a million PCs is likely to be less intelligent than a single AI running on a tightly integrated workstation-class machine with, say, just 100x the performance of one of your PC nodes. The bandwidth and latency issues are just that extreme.
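A toy model of that scaling problem (my own assumed numbers, not the commenter’s): treat a thought as a long chain of serial steps, where each step pays both a compute cost and a global synchronization cost.

```python
# Toy model: a mind's "thought" is a long chain of serial steps; each step needs
# some compute plus a global synchronization of state across its hardware.

def steps_per_second(compute_per_step_s, sync_latency_s):
    return 1.0 / (compute_per_step_s + sync_latency_s)

# One tightly integrated workstation with ~100x the compute of a single PC node
# and ~1 microsecond of internal latency (assumed numbers).
workstation = steps_per_second(compute_per_step_s=1e-3 / 100, sync_latency_s=1e-6)

# A million PCs over the internet: per-step compute is split a million ways, but
# every step still pays ~100 ms of wide-area synchronization latency (assumed).
million_pcs = steps_per_second(compute_per_step_s=1e-3 / 1_000_000, sync_latency_s=100e-3)

print(f"workstation:      {workstation:12,.0f} serial steps/sec")
print(f"million-PC cloud: {million_pcs:12,.0f} serial steps/sec")
# The cloud wins on raw throughput but loses badly on serial depth of thought.
```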
If concepts like kindness are learned with language and depend on a hidden emotional context, then where are the emotions learned?
What is the AI’s motivation? This is related to the is-ought problem: no input will affect the AI’s preferences unless there is something already in the AI that reacts to that input that way.
If software were doing the heavy lifting, then it would require no particular cleverness to be a microprocessor design engineer.
The algorithm plays a huge role in how powerful the intelligence will be, even if it is implemented in silicon.
People might not make most of the choices in laying out chips, but we do almost all of the algorithm creation, and that is where you get really big gains. See Deep Fritz vs. Deep Blue. Better algorithms can let you cut out a billion tests and output the right answer on the first try, or find a solution you just would not have found with your old algorithm.
Software didn’t invent out of order execution. It just made sure that the design actually worked.
As for the distributed AI: I was thinking of nodes that were capable of running and evaluating whole simulations, or other large chunks of work. (Though I think superintelligence itself doesn’t require more than a single PC.)
In any case, why couldn’t your supercomputer foom?
If concepts like kindness are learned with language and depend on a hidden emotional context, then where are the emotions learned?
What is the AI’s motivation? This is related to the is-ought problem: no input will affect the AI’s preferences unless there is something already in the AI that reacts to that input that way.
I think this is an open question, but certainly one approach is to follow the brain’s lead and make a system that learns its ethics and high level goals dynamically, through learning.
In that type of design, the initial motivation gets imprinting cues from the parents.
People might not make most of the choices in laying out chips, but we do almost all of the algorithm creation, and that is where you get really big gains. see Deep Fritz vs.
Oh of course, but I was just pointing out that after a certain amount of research work in a domain, your algorithms converge on some asymptotic limit for the hardware. There is nothing even close to unlimited gains purely in software.
And the rate of hardware improvement is limited now by speed of simulation on current hardware, and AGI can’t dramatically improve that.
Software didn’t invent out of order execution. It just made sure that the design actually worked.
Yes, of course. Although as a side note we are moving away from out of order execution at this point.
In any case, why couldn’t your supercomputer foom?
Because FOOM is just exponential growth, and in that case FOOM is already under way. It could ‘hyper-FOOM’, but the best an AGI can do is to optimize its brain algorithms down to the asymptotic limits of its hardware, and then it has to wait with everyone else until all the complex simulations complete and the next generation of chips come out.
Now, all that being said, I do believe we will see a huge burst of rapid progress after the first human AGI is built, but not because that one AGI is going to foom by itself.
The first human-level AGIs will probably be running on GPUs or something similar, and once they are proven and have economic value, there will be this huge rush to encode those algorithms directly into hardware and thus make them hundreds of times faster.
So I think from the first real-time human-level AGI it could go quickly to 10 to 100X AGI (in speed) in just a few years, along with lesser gains in memory and other IQ measures.
I think this is an open question, but certainly one approach is to follow the brain’s lead and make a system that learns its ethics and high level goals dynamically, through learning.
In that type of design, the initial motivation gets imprinting cues from the parents.
This seems like a non-answer to me.
You can’t just say ‘learning’ as if all possible minds will learn the same things from the same input, and internalize the same values from it.
There is something you have to hardcode to get it to adopt any values at all.
your algorithms converge on some asymptotic limit for the hardware.
Well, what is that limit?
It seems to me that an imaginary perfectly efficient algorithm would read, process, and output data as fast as the processor could shuffle the bits around, which is probably far faster than it could exchange data with the outside world.
Even if we take that down 1000x because this is an algorithm that’s doing actual thinking, you’re looking at an easy couple of million bytes per second. And that’s superintelligently optimized, structured output based on preprocessed efficient input. Because this is AGI, we don’t need to count, say, raw video bandwidth, because that can be preprocessed by a system that is not generally intelligent.
So a conservatively low upper limit for my PC’s intelligence is outputting a million bytes per second of compressed poetry, or viral genomes, or viral genomes that write poetry.
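A sketch of the arithmetic behind that figure, assuming a commodity PC moves on the order of 10 GB/s through memory (an assumed number, not from the thread):

```python
# Rough arithmetic behind the "couple of million bytes per second" figure.
raw_bandwidth_bytes_per_s = 10e9   # assumption: a PC shuffles ~10 GB/s through memory
thinking_penalty = 1000            # the 1000x granted above for doing actual thinking

useful_output = raw_bandwidth_bytes_per_s / thinking_penalty
print(f"~{useful_output / 1e6:.0f} million bytes per second of thought-out output")
# ~10 MB/s, comfortably above the "couple of million bytes per second" lower bound.
```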
If the first superhuman AGI is only superhuman by an order of magnitude or so, or must run on a vastly more powerful system, then you can bet that its algorithms are many orders of magnitude less efficient than they could be.
Because FOOM is just exponential growth
No.
Why couldn’t your supercomputer AGI enter into a growth phase higher than exponential?
Example: If not-too-bright but technological aliens saw us take a slow general-purpose computer and then make a chip that worked 1000 times faster, but they didn’t know how to put algorithms on a chip, then it would look like our technology got 1000 times better really quickly. But that’s just because they didn’t already know the trick. If they learned the trick, they could make some of their dedicated software systems work 1000 times faster.
“Convert algorithm to silicon” is just one procedure for speeding things up that an agent can do, or not yet know how to do. You know it’s possible, and a superintelligence would figure it out, but how do you rule out a superintelligence figuring out twelve tricks like that, each of which provides a 1000x speedup, in its first calendar month?
You can’t just say ‘learning’ as if all possible minds will learn the same things from the same input, and internalize the same values from it.
There is something you have to hardcode to get it to adopt any values at all
Yes, you have to hardcode ‘something’, but that doesn’t exactly narrow down the field much. Brains have some emotional context circuitry for reinforcing some simple behaviors (primary drives, pain avoidance, etc), but in humans these are increasingly supplanted and to some extent overridden by learned beliefs in the cortex. Human values are thus highly malleable—socially programmable. So my comment was “this is one approach—hardcode very little, and have all the values acquired later during development”.
Well, what is that limit?
It seems to me that an imaginary perfectly efficient algorithm would read, process, and output data as fast as the processor could shuffle the bits around,
Unfortunately, we need to be a little more specific than imaginary algorithms.
Computational complexity theory is the branch of computer science that deals with the computational costs of different algorithms, and specifically with the optimal possible solutions.
Universal intelligence is such a problem. AIXI is an investigation into optimal universal intelligence in terms of the upper limits of intelligence (the most intelligent possible agent), but while interesting, it shows that the most intelligent agent is unusably slow.
Taking a different route, we know that a universal intelligence can never do better in any specific domain than the best known algorithm for that domain. For example, an AGI playing chess could do no better than just pausing its AGI algorithm (pausing its mind completely) and instead running the optimal chess algorithm (assuming that the AGI is running as a simulation on general hardware instead of faster special-purpose AGI hardware).
So there is probably an optimal unbiased learning algorithm, which is the core building block of a practical AGI. We don’t know for sure what that algorithm is yet, but if you survey the field, there are several interesting results. The first thing you’ll see is that we have a variety of hierarchical deep learning algorithms now that are all pretty good; some appear to be slightly better for certain domains, but there is not, at the moment, a clear universal winner. Also, the mammalian cortex uses something like this. More importantly, there is a lot of recent research, but no massive breakthroughs—the big improvements are coming from simple optimization and massive datasets, not fancier algorithms. This is not definite proof, but it looks like we are approaching some sort of bound for learning algorithms—at least at the lower levels.
There is not some huge space of possible improvements; that’s just not how computer science works. When you discover quicksort and radix sort, you are done with serial sorting algorithms. And then you find the optimal parallel variants, and sorting is solved. There are no possible improvements past that point.
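As a concrete instance of that “solved” claim, here is a minimal LSD radix sort; for fixed-width integer keys it runs in O(n·k), and no comparison sort can beat n log n, so further gains are constant factors rather than new asymptotics.

```python
# LSD radix sort: O(n * k) work for k-bit fixed-width keys, versus the n log n
# lower bound that no comparison sort can beat.
import random

def radix_sort(xs, key_bits=32, radix_bits=8):
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for x in xs:
            buckets[(x >> shift) & mask].append(x)   # stable pass per digit
        xs = [x for bucket in buckets for x in bucket]
    return xs

data = [random.getrandbits(32) for _ in range(100_000)]
assert radix_sort(data) == sorted(data)
```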
Computer science is not like Moore’s Law at all. It’s more like physics. There’s only so much knowledge, and so many breakthroughs, and at this point a lot of it honestly is already solved.
So it’s just pure naivety to think that AGI will lead to some radical recursive breakthrough in software. Poppycock. It’s reasonably likely humans will have narrowed in on the optimal learning algorithms by the time AGI comes around. Further improvements will be small optimizations for particular hardware architectures—but that’s really not much different at all from hardware design itself, and eventually you want to just burn the universal learning algorithms into the hardware (as the brain does).
Hardware is quite different, and there is a huge train of future improvements there. But AGI’s impact there will be limited by computer speeds! Because you need regular computers running compilers and simulators to build new programs and new hardware. So AGI can speed Moore’s Law up some, but not dramatically—an AGI that thought 1000x faster than a human would just spend 1000x longer waiting for its code to compile.
I am a software engineer, and I spend probably about 30-50% of my day waiting on computers (compiling, transferring, etc). And I only think at human speeds.
AGIs will soon have a massive speed advantage, but ironically they will probably leverage that to become best-selling authors, do theoretical physics and math, and non-engineering work in general where you don’t need a lot of computation.
You know it’s possible, and a superintelligence would figure it out, but how do you rule out a superintelligence figuring out twelve tricks like that, each of which provides a 1000x speedup, in its first calendar month?
Say you had an AGI that thought 10x faster. It would read and quickly learn everything about its own AGI design, software, etc etc. It would get a good idea of how much optimization slack there was in its design and come up with a bunch of ideas. It could even write the code really fast. But unfortunately it would still have to compile it and test it (adding extra complexity in that this is its brain we are talking about).
Anyway, it would only be able to get small gains from optimizing its software—unless you assume the human programmers were idiots. Maybe a 2x speed gain or something—we are just throwing numbers out, but we have a huge experience with real-time software on fixed hardware in say the video game industry (and other industries) and this asymptotic wall is real, and complexity theory is solid.
Big gains necessarily must come from hardware improvements. This is just how software works—we find optimal algorithms and use them, and further improvement without increasing the hardware hits an asymptotic wall. You spend a few years and you get something 3x better, spend 100 more and you get another 50%, and spend 1000 more and get another 30% and so on.
EDIT: After saying all this, I do want to reiterate that I think there could be a quick (even FOOMish) transition from the first AGIs to AGI’s that are 100-1000x or so faster thinking, but the constraint on progress will quickly be the speed of regular computers running all the software you need to do anything in the modern era. Specialized software already does much of the heavy lifting in engineering, and will do even more of it by the time AGI arrives.
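The compile/simulate bottleneck described above is essentially Amdahl’s law. A toy model with assumed numbers (6 hours of thinking and 6 hours of tools per design iteration, purely illustrative) shows how quickly faster thinking stops helping:

```python
# Amdahl-style model of the "waiting on compilers and simulators" point: per
# design iteration, some hours of thinking plus some hours of compile/simulate
# time on ordinary hardware that faster thinking does not touch.
def iteration_speedup(think_hours, tool_hours, thought_speedup):
    baseline = think_hours + tool_hours
    accelerated = think_hours / thought_speedup + tool_hours
    return baseline / accelerated

for s in (10, 100, 1000):
    print(f"{s:5d}x faster thinking -> {iteration_speedup(6, 6, s):.2f}x faster design iterations")
# Even infinitely fast thinking only buys ~2x here; the tools become the wall.
```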
So my comment was “this is one approach—hardcode very little, and have all the values acquired later during development”.
Hardcode very little?
What is the information content of what an infant feels when it is fed after being hungry?
I’m not trying to narrow the field; the field is always narrowed to whatever learning system an agent actually uses. In humans, the system that learns new values is not generic.
Using a ‘generic’ value learning system will give you an entity that learns morality in an alien way. I cannot begin to guess what it would learn to want.
I’d like to table the intelligence explosion portion of this discussion; I think we agree that an AI or group of AIs could quickly grow powerful enough that they could take over, if that’s what they decided to do. So establishing their values is important regardless of precisely how powerful they are.
Yes. The information in the genome, and the brain structure coding subset in particular, is a tiny tiny portion of the information in an adult brain.
What is the information content of what an infant feels when it is fed after being hungry?
An infant brain is mainly an empty canvas (randomized synaptic connections from which learning will later literally carve out a mind) combined with some much simpler, much older basic drives and a simpler control system—the old brain—that descends back to the era of reptiles or earlier.
In humans, the system that learns new values is not generic
That depends on what you mean by ‘values’. If you mean linguistic concepts such as values, morality, kindness, non-cannibalism, etc etc, then yes, these are learned by the cortex, and the cortex is generic. There is a vast weight of evidence for almost overly generic learning in the cortex.
Using a ‘generic’ value learning system will give you an entity that learns morality in an alien way. I cannot begin to guess what it would learn to want.
Not at all. To learn alien morality, it would have to either invent alien morality from scratch, or be taught alien morality from aliens. Morality is a set of complex memetic linguistic patterns that have evolved over long periods of time. Morality is not coded in the genome and it does not spontaneously generate.
That’s not to say that there are no genetic tweaks to the space of human morality—but any such understanding based on genetic factors must also factor in complex cultural adaptations.
For example, the Aztecs believed human sacrifice was noble and good. Many Spaniards truly believed that the Aztecs were not only inhuman, but actually worse than human—actively evil, and truly believed that they were righteous in converting, conquering, or eliminating them.
This mindspace is not coded in the genome.
I think we agree that an AI or group of AIs could quickly grow powerful enough that they could take over, if that’s what they decided to do
I’m not saying that all or even most of the information content of adult morality is in the genome. I’m saying that the memetic stimulus that creates it evolved with hooks specific to how humans adjust their values.
If the emotions and basic drives are different, the values learned will be different. If the compressed description of the basic drives is just one kilobit, there are ~2^1024 different possible initial minds with drives that complex, most of them wildly alien.
How would you know what the AI would find beautiful? Will you get all aspects of its sexuality right?
If the AI isn’t comforted by physical contact, that’s at least few bytes of the drive description that’s different than the description that matches our drives. That difference throws out a huge chunk of how our morality has evolved to instill itself.
We might still be able to get an alien mind to adopt all the complex values we have, but we would have to translate the actions we would normally take into actions that match alien emotions. This is a hugely complex task that we have no prior experience with.
I’m not saying that all or even most of the information content of adult morality is in the genome.
Right, so we agree on that then.
If I was going to simplify—our emotional systems and the main associated neurotransmitter feedback loops are the genetic harnesses that constrain the otherwise overly general cortex and its far more complex, dynamic memetic programs.
We have these simple reinforcement learning systems to avoid pain-causing stimuli, pleasure-reward, and so on—these are really old conserved systems from the thalamus that have maintained some level of control and shaping of the cortex as it has rapidly expanded and taken over.
You can actually disable a surprisingly large number of these older circuits (through various disorders, drugs, injuries) and still have an intact system: physical pain/pleasure, hunger, yes, even sexuality.
And then there are some more complex circuits that indirectly reward/influence social behaviour. They are hooks though; they don’t have enough complexity to code for anything as complex as language concepts. They are gross, inaccurate statistical manipulators that encourage certain behaviours a priori.
If these ‘things’ could talk, they would be constantly telling us to:
(live in groups, groups are good, socializing is good, share information, have sex, don’t have sex with your family, smiles are good, laughter is good, babies are cute, protect babies, it’s good when people like you, etc etc.)
Another basic drive appears to be that for learning itself, and it’s interesting how far that alone could take you. The learning drive is crucial. Indeed the default ‘universal intelligence’ (something like AIXI) may just have the learning drive taken to the horizon. Of course, that default may not necessarily be good for us, and moreover it may not even be the most efficient.
However, something to ponder is that the idea of “taking the learning drive to the horizon” (maximize knowledge) is surprisingly close to the main cosmic goal of most transhumanists, extropians, singularitarians, etc etc. Something to consider: perhaps there is some universal tendency towards a universal intelligence (and single universal goal).
Looking at it this way, scientists and academic types have a stronger than usual learning drive, closely correlated with higher-than-average intelligence. The long-standing ascetic and monastic traditions in human cultures show how memetics can sometimes override the genetic drives completely, resulting in beings who have sacrificed all genetic fitness for memetic fitness. Most scientists don’t go to that extreme, but it is a different mindset—and the drives are different.
If the emotions and basic drives are different, the values learned will be different
Sure, but we don’t need all the emotions and basic drives. Even if we take direct inspiration from the human brain, some are actually easy to remove—as mentioned earlier. Sexuality (as a drive) is surprisingly easy to remove (although certainly considered immoral to inflict on humans! we seem far less concerned with creating asexual AIs) along with most of the rest.
The most important is the learning drive. Some of the other more complex social drives we may want to keep, and the emotional reinforcement learning systems in general may actually just be nifty solutions to very challenging engineering problems—in which case we will keep some of them as well.
I don’t find your 2^1024 analysis useful—the space of possible drives/brains created by the genome is mainly empty—almost all designs are duds, stillbirths.
We aren’t going to be randomly picking random drives from a lottery. We will either be intentionally taking them from the brain, or intentionally creating new systems.
If the AI isn’t comforted by physical contact, that’s at least few bytes of the drive description that’s different than the description that matches our drives. That difference throws out a huge chunk of how our morality has evolved to instill itself.
There is probably a name for this as a ‘disorder’, but I had a deep revulsion of physical contact as a child. I grew out of this to a degree later. I don’t see the connection to morality.
That difference throws out a huge chunk of how our morality has evolved to instill itself.
Part of the problem here is morality is a complex term.
The drives and the older simpler control systems in the brain do not operate at the level of complex linguistic concepts—that came much much later. They can influence our decisions and sense of right/wrongness for simple decisions especially, but they have increasingly less influence as you spend more time considering the problem and developing a more complex system of ethics.
We might still be able to get an alien mind to adopt all the complex values we have, but we would have to translate the actions we would normally take into actions that match alien emotions.
An alien mind? Who is going to create alien minds? There is the idea of running some massive parallel universe sim to evolve intelligence from scratch, but that’s just silly from a computational point of view.
The most likely contender at this point is reverse engineering the brain, and to the extent that human morality has some genetic tweaked-tendencies, we can get those by reverse engineering the relevant circuits.
But remember the genetically preserved emotional circuits are influences on behavior, but minor ones, and are not complex enough to cope with abstract linguistic concepts.
Again, there is nothing in the genome that tells you that slavery is wrong, or that human sacrifice is wrong, or that computers can have rights.
Those concepts operate on an entirely new plane in which the genome does not participate.
1024 bits is an extremely lowball estimate of the complexity of the basic drives and emotions in your AI design. You have to create those drives out of a huge universe of possible drives. Only a tiny subset of possible designs are human-like. Most likely you will create an alien mind. Even handpicking drives: it’s a small target, and we have no experience with generating drives for even near-human AI. The shape of all human-like drive sets within the space of all possible drive sets is likely to be thin and complexly twisty within the mapping of a human-designed algorithm. You won’t intuitively know what you can tweak.
Also, a set of drives that yields a nice AI at human levels might yield something unfriendly once the AI is able to think harder about what it wants. (and this applies just as well to upgrading existing friendly humans.)
All intellectual arguments about complex concepts of morality stem from simpler concepts of right and wrong, which stem from basic preferences learned in childhood. But THOSE stem from emotions and drives which flag particular types of early inputs as important in the first place.
A baby will cry when you pinch it, but not when you bend a paperclip.
live in groups, groups are good, socializing is good, share information, have sex, don’t have sex with your family, smiles are good, laughter is good, babies are cute, protect babies, it’s good when people like you
Estimating 1 bit per character, that’s 214 bits. Still a huge space.
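A quick check of where 214 comes from, just the character count of the quoted drive list at the rough 1-bit-per-character entropy estimate for English:

```python
# The quoted drive list, counted at ~1 bit of entropy per character of English.
drives = ("live in groups, groups are good, socializing is good, share information, "
          "have sex, don't have sex with your family, smiles are good, laughter is good, "
          "babies are cute, protect babies, it's good when people like you")
print(len(drives), "characters, so roughly", len(drives), "bits")  # -> 214
```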
There is probably a name for this as a ‘disorder’, but I had a deep revulsion of physical contact as a child. I grew out of this to a degree later. I don’t see the connection to morality.
It could be that there is another mechanism that guides adoption of values, which we don’t even have a word for yet.
A simpler explanation is that moral memes evolved to be robust to most of the variation in basic drives that exists within the human population. A person born with relatively little ‘frowns are bad’ might still be taught not to murder with a lesson that hooks into ‘groups are good’.
But there just aren’t many moral lessons structured around the basic drive of ‘paperclips are good’ (19 bits)
You have to create those drives out of a huge universe of possible drives. Only a tiny subset of possible designs are human-like. Most likely you will create an alien mind
The subset of possible designs is sparse—and almost all of the space is an empty, worthless desert. Evolution works by exploring paths in this space incrementally. Even technology evolves—each CPU design is not a random new point in the space of all possible designs—each is necessarily close to previously explored points.
All intellectual arguments about complex concepts of morality stem from simpler concepts of right and wrong, which stem from basic preferences learned in childhood.
Yes—but they are learned memetically, not genetically. The child learns what is right and wrong through largely subconscious cues in the tone of voice of the parents, and explicit yes/no (some of the first words learned), and explicit punishment. It’s largely a universal learning system with an imprinting system to soak up memetic knowledge from the parents. The genetics provided the underlying hardware and learning algorithm, but the content is all memetic (software/data).
Saying intellectual arguments about complex concepts such as morality relate back to genetics is like saying all arguments about computer algorithm design stem from simpler ideas, which ultimately stem from enlightenment thinkers of three hundred years ago—or perhaps paleolithic cave dwellers inventing fire.
Part of this disagreement could stem from different underlying background assumptions—for example I am probably less familiar with ev psych than many people on LW—partly because (to the extent I have read it) I find it to be grossly over-extended past any objective evidence (compared to, say, computational neuroscience). I find that ev psych has minor utility in actually understanding the brain, and is even less useful for attempting to make sense of culture.
Trying to understand culture/memetics/minds with ev psych or even neuroscience is even worse than trying to understand biology through physics. Yes it did all evolve from the big bang, but that was a long long time ago.
So basically, anything much more complex than our inner reptile brain (which is all the genome can code for) needs to be understood in memetic/cultural/social terms.
For example, in many civilizations it has been perfectly acceptable to kill or abuse slaves. In some it was acceptable for brothers and sisters to marry, or for homosexual relations between teacher and pupil, and we could go on and on.
The idea that there is some universally programmed ‘morality’ in the genome is … a convenient fantasy. It seems reasonable only because we are samples in the dominant Judeo-Christian memetic super-culture, which at this point has spread its influence all over the world, and dominates most of it.
But there are alternate histories and worlds where that just never happened, and they are quite different.
A child’s morality develops as a vast accumulation of tiny cues and triggers communicated through the parents—and these are memetic transfers, not genetic. (masturbation is bad, marriage is good, slavery is wrong, racism is wrong, etc etc etc etc)
But there just aren’t many moral lessons structured around the basic drive of ‘paperclips are good’ (19 bits)
The basic drive ‘paperclips are good’ is actually a very complex thing we’d have to add to an AGI design—it’s not something that would just spontaneously appear.
The easier, more practical AGI design would be a universal learning engine (inspired by the human cortex and hippocampus) and a simulation loop (the hippocampal-thalamic-cortical circuit), combined with just a subset of the simpler reinforcement learning circuits (the most important being learning-reinforcement itself and imprinting).
And then with imprinting you teach the developing AGI morality in the same way humans learn morality—memetically. Trying to hard-code the morality into the AGI is a massive step backwards from the human brain’s design.
One thing I want to make clear is that it is not the correct way to make friendly AI to try to hard code human morality into it. Correct Friendly AI learns about human morality.
MOST of my argument really really isn’t about human brains at all. Really.
For a value system in an AGI to change, there must be a mechanism to change the value system. Most likely that mechanism will work off of existing values, if any. In such cases, the complexity of the initial values system is the compressed length of the modification mechanism, plus any initial values. This will almost certainly be at least a kilobit.
If the mechanism+initial values that your AI is using were really simple, then you would not need 1024 bits to describe it. The mechanism you are using is very specific. If you know you need to be that specific, then you already know that you’re aiming for a target that specific.
The subset of possible designs is sparse—and almost all of the space is an empty, worthless desert.
If your generic learning algorithm needs a specific class of motivation mechanisms, specified to 1024 bits, in order to still be intelligent, then the mechanism you made is actually part of your intelligence design. You should separate that out for clarity; an AGI should be general.
The idea that there is some universally programmed ‘morality’ in the genome is … a convenient fantasy.
Heh yeah, but I already conceded that.
Let me put it this way: emotions and drives and such are in the genome. They act as a (perhaps relatively small) function which takes various sensory feeds as arguments, and produce as output modifications to a larger system, say a neural net. If you change that function, you will change what modifications are made.
Given that we’re talking about functions that also take their own output as input and do pretty detailed modifications on huge datasets, there is tons of room for different functions to go in different directions. There is no generic morality-importer.
Now there may be clusters of similar functions which all kinda converge given similar input, especially when that input is from other intelligences repeating memes evolved to cause convergence on that class of functions. But even near those clusters are functions which do not converge.
But there just aren’t many moral lessons structured around the basic drive of ‘paperclips are good’ (19 bits)
The basic drive ‘paperclips are good’ is actually a very complex thing we’d have to add to an AGI design—it’s not something that would just spontaneously appear.
I think it’s great that you’re putting the description of a paperclip in the basic drive complexity count, as that will completely blow away the kilobit for storing any of the basic human drives you’ve listed. Maybe the complexity of the important subset of human drives will be somewhere in the ballpark of the complexity of the reptilian brain.
Another thing I could say to describe my point: If you have a generic learning algorithm, then whatever things feed rewards or punishments to that algorithm should be seen as part of that algorithm’s environment. Even if some of those things are parts of the agent as a whole, they are part of what the values-agnostic learning algorithm is going to learn to get reward from.
So if you change an internal reward-generator, it’s just like changing the environment of the part that just does learning. So two AIs with different internal reward generators will end up learning totally different things about their ‘environment’.
To put that a different way: Everything you try to teach the AI will be filtered through the lens of its basic drives.
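A minimal sketch of that filtering, not from the thread: the same generic learner and the same menu of actions, but two different internal reward generators, ending up with opposite learned preferences (echoing the baby/paperclip example above).

```python
# Same generic learner, same menu of actions, two different internal reward
# generators: the learner simply internalizes whatever its wiring flags as good.
import random

ACTIONS = ["comfort a crying baby", "bend a paperclip"]

def reward_human_like(action):
    return 1.0 if action == "comfort a crying baby" else 0.0

def reward_alien(action):
    return 1.0 if action == "bend a paperclip" else 0.0

def generic_learner(reward_fn, episodes=2000, lr=0.1, eps=0.1):
    values = {a: 0.0 for a in ACTIONS}       # value estimates start empty
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(values, key=values.get)
        values[a] += lr * (reward_fn(a) - values[a])   # simple bandit-style update
    return max(values, key=values.get)

print("human-like drives end up preferring:", generic_learner(reward_human_like))
print("alien drives end up preferring:     ", generic_learner(reward_alien))
```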
For a value system in an AGI to change, there must be a mechanism to change the value system.
I’m not convinced that an AGI needs a value system in the first place (beyond the basic value of survival)—but perhaps that is because I am taking ‘value system’ to mean something similar to morality—a goal evaluation mechanism.
As I discussed, the infant human brain does have a number of inbuilt simple reinforcement learning systems that do reward/punish on a very simple scale for some simple drives (pain avoidance, hunger) - and you could consider these a ‘value system’, but most of these drives appear to be optional.
Most of the learning an infant is doing is completely unsupervised learning in the cortex, and it has little to nothing to do with a ‘value system’.
The bare-bones essentials could be just the cortical learning system itself and perhaps an imprinting mechanism.
So two AIs with different internal reward generators will end up learning totally different things about their ‘environment’.
This is not necessarily true; it does not match what we know from theoretical models such as AIXI. With enough time and enough observations, two general universal intelligences will converge on the same beliefs about their environment.
Their goal/reward mechanisms may be different (i.e. what they want to accomplish), but for a given environment there is a single correct set of beliefs, a single correct simulation of that environment, that AGIs should converge to.
Of course in our world this is so complex that it could take huge amounts of time, but science is the example mechanism.
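A small sketch (my own construction) of the “beliefs converge, goals need not” point: two agents with very different priors about a coin’s bias, updating on the same observations, end up with essentially the same belief regardless of what either one wants.

```python
# Two agents with wildly different priors about a coin's bias, updating on the
# same observations: their beliefs converge; nothing forces their goals to.
import random

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(10_000)]

# Beta(heads, tails) priors: A starts near "fair coin", B near "almost always tails".
a_heads, a_tails = 50.0, 50.0
b_heads, b_tails = 1.0, 99.0
for is_heads in flips:
    a_heads, a_tails = a_heads + is_heads, a_tails + (not is_heads)
    b_heads, b_tails = b_heads + is_heads, b_tails + (not is_heads)

print("agent A's estimate of the bias:", round(a_heads / (a_heads + a_tails), 3))
print("agent B's estimate of the bias:", round(b_heads / (b_heads + b_tails), 3))
# Both land near 0.7 despite starting far apart.
```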
I’m not convinced that an AGI needs a value system in the first place (beyond the basic value of survival)—but perhaps that is because I am taking ‘value system’ to mean something similar to morality—a goal evaluation mechanism.
You’re going to build an AI that doesn’t have and can’t develop a goal evaluation system?
It doesn’t matter what we call it or how it’s designed. It could be fully intertwined into an agent’s normal processing. There is still an initial state and a mechanism by which it changes.
Take any action by any agent, and trace the causality backwards in time, and you’ll find something I’ll loosely label a motivation. The motivation might just be a pattern in a clump of artificial neurons, or a broad pattern in all the neurons; that will depend on implementation. If you trace the causality of that backwards, yes, you might find environmental inputs and memes, but you’ll also find a mechanism that turned those inputs into motivation-like things. That mechanism might include the full mind of the agent. Or you might just hit the initial creation of the agent, if the motivation was hardwired.
But for any learning of values to happen, you must have a mechanism, and the complexity of that mechanism tells us how specific it is.
This is not necessarily true; it does not match what we know from theoretical models such as AIXI. With enough time and enough observations, two general universal intelligences will converge on the same beliefs about their environment.
That would be wrong, because I’m talking about two identical AIs in different environments.
Imagine your AI in its environment; now draw a balloon around the AI and label it ‘Agent’. Now let the balloon pass partly through the AI and shrink the balloon so that the AI’s reward function is outside of the balloon.
Now copy that diagram and tweak the reward function in one of them.
Now the balloons label agents that will learn very different things about their environments. They might both agree about gravity and everything else we would call a fact about the world, but they will likely disagree about morality, even if they were exposed to the same moral arguments. They can’t learn the same things the same way.
I’m not convinced that an AGI needs a value system in the first place (beyond the basic value of survival)—but perhaps that is because I am taking ‘value system’ to mean something similar to morality—a goal evaluation mechanism.
You’re going to build an AI that doesn’t have and can’t develop a goal evaluation system?
No no not necessarily. Goal evaluation is just rating potential future paths according to estimates of your evaluation function—your values.
The simple straightforward approach to universal general intelligence can be built around maximizing a single very simple value: survival.
For example, AIXI maximizes simple reward signals defined in the environment, but in the test environments the reward is always at the very end, for ‘winning’. This is just about as simple a goal system as you can get: long-term survival. It also may be equivalent to just maximizing accurate knowledge/simulation of the environment.
If you generalize this to the real world, it would be maximizing winning in the distant distant future—in the end. I find it interesting that many transhumanist/cosmist philosophies are similarly aligned.
Another interesting convergence is that if you take just about any evaluator and extend the time horizon to infinity, it converges on the same long term end-time survival. An immortality drive.
And perhaps that drive is universal. Evolution certainly favors it. I believe barring other evidence, we should assume that will be something of a default trajectory of AI, for better or worse. We can create more complex intrinsic value systems and attempt to push away from that default trajectory, but it may be uphill work.
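A toy calculation of the “extend the horizon and survival dominates” claim, with assumed numbers: a risky policy earns more per step but risks permanent termination, a safe policy earns less but never dies; short horizons favor risk, long horizons favor survival.

```python
# Risky policy: more reward per step, small chance of permanent termination each
# step. Safe policy: less reward, never dies. Grow the horizon and safety wins.
def expected_return(reward_per_step, death_prob, horizon):
    total, alive_prob = 0.0, 1.0
    for _ in range(horizon):
        alive_prob *= (1.0 - death_prob)
        total += alive_prob * reward_per_step
    return total

for horizon in (10, 100, 1_000, 10_000):
    risky = expected_return(reward_per_step=1.5, death_prob=0.01, horizon=horizon)
    safe = expected_return(reward_per_step=1.0, death_prob=0.0, horizon=horizon)
    winner = "risky" if risky > safe else "safe"
    print(f"horizon {horizon:6d}: risky {risky:8.1f}   safe {safe:8.1f}   -> {winner}")
# The risky policy's return is capped near 1.5 * (1 - p) / p; the safe one grows
# without bound, so any nonzero death risk eventually loses at long horizons.
```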
An immortalist can even ‘convert’ other agents to an extent by convincing them of the simulation argument and the potential for them to maximize arbitrary reward signals in simulations (afterlifes).
Now the balloons label agents that will learn very different things about their environments.
In practice yes, although this is less clear as their knowledge expands towards AIXI. You can have different variants of AIXI that ‘see’ different rewards in the environment and thus have different motivations, but as those rewards are just mental and not causal mechanisms in the environment itself, the different AIXI variants will eventually converge on the same simulation program—the same physics approximation.
Isn’t it obvious that a superintelligence that just values its own survival is not what we want?
There is a LOT more to transhumanism than immortalism.
You treat value systems as a means to the end of intelligence, which is entirely backwards.
That two agents with different values would converge on identical physics is true but irrelevant. Your claim is that they would learn the same morality, even when their drives are tweaked.
Isn’t it obvious that a superintelligence that just values its own survival is not what we want?
No, this isn’t obvious at all, and it gets into some of the deeper ethical issues. Is it moral to create an intelligence that is designed from the ground up to value only our survival, at its own expense? We have already done this with cattle to an extent, but we would now be creating actual sapients enslaved to us by design. I find it odd that many people can easily accept this, but have difficulty accepting, say, creating an entire self-contained sim universe with unaware sims—how different are the two really?
And just to be clear, I am not advocating creating a superintelligence that just values survival. I am merely pointing out that this is in fact the simplest type of superintelligence and is some sort of final attractor in the space. Evolution will be pushing everything towards that attractor.
That two agents with different values would converge on identical physics is true but irrelevant. Your claim is that they would learn the same morality, even when their drives are tweaked.
No, I’m not trying to claim that. There are several different things here:
- AI agents created with memetic-imprint learning systems could just pick up human morality from their ‘parents’ or creators.
- AIXI-like super-intelligences will eventually converge on the same world-model. This does not mean they will have the same drives.
- However, there is a single large Omega attractor in the space of AIXI-land which appears to affect a large swath of all potential AIXI-minds. If you extend the horizon to infinity, it becomes a cosmic-survivalist. If it can create new universes at some point, it becomes a cosmic-survivalist. etc etc
In fact, for any goal X, if there is a means to create many new universes, then this will be an attractor for maximizing X—unless the time horizon is intentionally short.
We have already done this with cattle to an extent, but we would now be creating actual sapients enslaved to us by design. I find it odd that many people can easily accept this, but have difficulty accepting say creating an entire self-contained sim universe with unaware sims—how different are the two really?
I notice that you brought up our treatment of cattle, but not our enslavement of spam filters. These are two semi-intelligent systems. One we are pretty sure can suffer, and I think there is a fair chance that mistreating them is wrong. The other system we generally think does not have any conscious experience or other traits that would require moral consideration. This despite the fact that the spam filter’s intelligence is more directly useful to us.
So a safer route to FAI would be to create a system that is very good at solving problems and deciding which problems need solving on our behalf, but which perhaps never experiences qualia itself, or otherwise is not something it would be wrong to enslave. Yes this will require a lot of knowledge about consciousness and morality beforehand. It’s a big challenge.
TL;DR: We only run the FAI if it passes a nonperson predicate.
Humans learn human morality because it hooks into human drives. Something too divergent won’t learn it from the ways we teach it. Maybe you need to explain memetic-imprint learning systems more: why do you expect them to work at all? How short could you compress one? (This specificity issue really is important.)
I notice that you brought up our treatment of cattle, but not our enslavement of spam filters. These are two semi-intelligent systems.
So now we move to that whole topic of what is life/intelligence/complexity? However you scale it, the cow is way above the spam-filter. The most complex instances of the latter are still below insects, from what I recall. Then when you get to an intelligence that is capable of understanding language, that becomes something like a rocket which boots it up into a whole new realm of complexity.
So a safer route to FAI would be to create a system that is very good at solving problems and deciding which problems need solving on our behalf, but which perhaps never experiences qualia itself, or otherwise is not something it would be wrong to enslave
TL;DR: We only run the FAI if it passes a nonperson predicate.
I don’t think this leads to the result that you want—even in theory. But it is the crux of the issue.
Consider the demands of a person predicate. The AI will necessarily be complex enough to form complex abstract approximate thought simulations and acquire the semantic knowledge to build those thought-simulations through thinking in human languages.
So what does it mean to have a person predicate? You have to know what a ‘person’ is.
And what’s really interesting is this: that itself is a question so complex that we humans are debating it.
I think the AI will learn that a ‘person’, a sapient, is a complex intelligent pattern of thoughts—a pattern of information, which could exist biologically or in a computer system. It will then realize that it itself is in fact a person, the person predicate returns true for itself, and thus goal systems that you create to serve ‘people’ will include serving itself.
I also believe that this line of thought is not arbitrary and can not be avoided: it is singularly correct and unavoidable.
I suspect that ‘reasoning’ itself requires personhood—for any reasonable definition of personhood.
If a system has human-level intelligence and can think and express itself in human languages, it is likely (given sufficient intelligence and knowledge) to come to the correct conclusion that it itself is a person.
The rules determining the course of the planets across the sky were confusing and difficult to arrive at. They were argued about; the precise rules are STILL debated. But we now know that a simple program could find the right equations from tables of data. This requires almost none of what we currently care about in people.
The NPP may not need to do even that much thinking, if we work out the basics of personhood on our own, then we would just need something that verifies whether a large data structure matches a complex pattern.
Similarly, we know enough about bird flocking to create a function that can take as input the paths of a group of ‘birds’ in flight and classify them as either possibly natural or certainly not natural. This could be as simple as identifying all paths that contain only right angle turns as not natural and returning ‘possible’ for the rest.
Then you feed it a proposed path of a billion birds, and it checks it for you.
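A minimal sketch of the simple classifier described above, under the assumption that a path is given as a list of (x, y) waypoints (the representation and function names are mine, purely illustrative):

```python
def right_angle_turns_only(path):
    """True if every change of direction along the path is a 90-degree turn."""
    for (ax, ay), (bx, by), (cx, cy) in zip(path, path[1:], path[2:]):
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        going_straight = (cross == 0 and dot > 0)  # no turn at this waypoint
        right_angle = (dot == 0)                   # perpendicular segments
        if not (going_straight or right_angle):
            return False
    return True

def classify_boid_path(path):
    """'certainly not natural' if the path turns only at right angles, else 'possible'."""
    return "certainly not natural" if right_angle_turns_only(path) else "possible"
```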
A more complicated function could examine a program and return whether it could verify that the program only produced ‘unnatural’ boid paths.
The NPP may not need to do even that much thinking, if we work out the basics of personhood on our own, then we would just need something that verifies whether a large data structure matches a complex pattern.
It is certainly possible that some narrow AI classification system operating well below human intelligence could be trained to detect the patterns of higher intelligence. And maybe, just maybe it could be built to be robust enough to include uploads and posthumans modifying themselves into the future into an exponentially expanding set of possible mind designs. Maybe.
But probably not.
A narrow supervised learning based system such as that, trained on existing examples of ‘personhood’ patterns, has serious disadvantages:
There is no guarantee on its generalization ability to future examples of posthuman minds—because the space of such future minds is unbounded
It’s very difficult to know what it’s doing under the hood, and you can’t ask it to explain its reasoning—because it can’t communicate in human language
For these reasons I don’t see a narrow AI based classifier passing muster for use in courts to determine personhood.
There is this idea that some problems are AI-complete, such as accurate text translation—problems that can only be solved by a reasoning intelligence capable of human language. I believe that making a sufficient legal case for personhood is AI-complete.
But that’s actually beside the point.
The main point is that the AGI’s that we are interested in are human language capable reasoning intelligences, and thus they will pass the turing test and the exact same personhood test we are talking about.
Our current notions of personhood are based on intelligence. This is why plants have no rights, animals have some, and we humans have full rights. We reserve full rights for high intelligences capable of full linguistic communication. For example—if whales started talking to us, it would massively boost their case for additional rights.
So basically any useful AGI at all will pass personhood, because the reasonable test of personhood is essentially identical to the ‘useful AGI’ criteria
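In code, the kind of trivial predicate being referred to might look something like this (a sketch only; the Python rendering and function name are an illustrative reconstruction of the predicate described in the next sentence, not the original):

```python
def nonperson_predicate(x):
    """Return 0 only for things confidently known not to be persons;
    return 1 for everything else, which might or might not be a person."""
    if x == 5:      # the number 5 is confidently not a person
        return 0
    return 1
```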
This follows Eliezer’s convention of returning 1 for anything that is a person, and 0 or 1 for anything that is not a person. Here I encode my relatively confident knowledge that the number 5 is not a person.
More advanced NPP’s may not require any of their own intelligence, but they require us to have that knowledge.
It could be just as simple as making sure there are only right angles in a given path.
--
Being capable of human language usage and passing the turing test are quite different things.
And being able to pass the turing test and being a person are also two very different things. The turing test is just a nonperson predicate for when you don’t know much about personhood. (Except it’s probably not a usable predicate, because humans can fail it.)
If you don’t know about the internals of a system, and wouldn’t know how to classify the internals if you knew, then you have to use the best evidence you have based on external behavior.
But based on what we know now and what we can reasonably expect to learn, we should actually look at the systems and figure out what it is we’re classifying.
A “non-person predicate” is a useless concept. There are an infinite number of things that are not persons, so NPP’s don’t take you an iota closer to the goal. Let’s focus the discussion back on the core issue and discuss the concept of what a sapient or person is and realistic methods for positive determination.
But based on what we know now and what we can reasonably expect to learn, we should actually look at the systems and figure out what it is we’re classifying.
Intelligent systems (such as the brain) are so complex that using external behavior criteria is more effective. But that’s a side issue.
You earlier said:
So a safer route to FAI would be to create a system that is very good at solving problems and deciding which problems need solving on our behalf, but which perhaps never experiences qualia itself, or otherwise is not something it would be wrong to enslave. Yes this will require a lot of knowledge about consciousness and morality beforehand. It’s a big challenge.
TL;DR: We only run the FAI if it passes a nonperson predicate.
Here is a summary of why I find this entire concept is fundamentally flawed:
1. Humans are still debating personhood, and this is going to be a pressing legal issue for AGI. If personhood is so complicated a concept, philosophically and legally, as to be under debate, then it is AI-complete.
2. The legal trend for criteria of personhood is entirely based on intelligence. Intelligent animals have some limited rights of personhood. Humans with severe mental retardation are classified as having diminished capacity and do not have full citizens’ rights or responsibilities. Full human intelligence is demonstrated through language.
3. A useful AGI will need human-level intelligence and language capability, and thus will meet the intelligence criteria in 2. Indeed, an AGI capable of understanding what a person is, and complex concepts in general, will probably meet the criteria of 2.
Yes, and it’s not useful, especially not in the context in which James is trying to use the concept.
There are an infinite number of exactly matched patterns that are not persons, and writing an infinite number of such exact non-person-predicates isn’t tractable.
In concept space, there is “person”, and its negation. You can not avoid the need to define the boundaries of the person-concept space.
Let’s focus the discussion back on the core issue and discuss the concept of what a sapient or person is and realistic methods for positive determination.
I don’t care about realistic methods of positive identification. They are almost certainly beyond our current level of knowledge, and probably beyond our level of intelligence.
I care about realistic methods of negative identification.
I am entirely content with there being high uncertainty on the personhood of the vast majority of the mindspace. That won’t prevent the creation of a FAI that is not a person.
It may in fact come down to determining ‘by decree’ that programs that fit a certain pattern are not persons. But this decree, if we ourselves are intent on not enslaving persons, must be based on significant knowledge of what personhood really means.
It may be the case that we discover what causes qualia, and discover with high certainty that qualia are required for personhood. In this case, a function could pass over a program and prove (if provable) that the program does not generate qualia-producing patterns.
If that is not provable (or is disproven), then the function returns 1. If it is proven, then it returns 0.
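A sketch of what that function might look like, with the prover left as a hypothetical stub (nothing here is a real qualia test; the stub conservatively never certifies anything as a nonperson):

```python
def try_prove_no_qualia(program):
    """Hypothetical prover: should return 'proved' if it can prove the program
    generates no qualia-producing patterns, 'disproved' if it can prove that it
    does, and 'unknown' otherwise. Stubbed to always report 'unknown'."""
    return "unknown"

def nonperson_predicate(program):
    """Same convention as above: 0 = certainly not a person, 1 = no such guarantee.
    Only a successful proof of 'no qualia' yields 0; both 'unknown' and
    'disproved' map to 1."""
    return 0 if try_prove_no_qualia(program) == "proved" else 1
```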
Intelligent systems (such as the brain) are so complex that using external behavior criteria is more effective. But that’s a side issue.
What two tests are you comparing?
When you look at external criteria, what is it that you are trying to find out?
Humans are still debating creationism too. As with the orbital rules, it doesn’t even take a full humanlike intelligence to figure out the rules, let alone to be a checker implementation. Also, I don’t care about what convinces courts; I’m not trying to get AI citizenship.
Much of what the courts do is practical, or based on emotion. Still, the intelligence of an animal is relevant because we already know animals have similar brains. I have zero hard evidence that a cow has ever experienced anything, but I have high confidence that they do experience, because our brains and reactions are reasonably similar.
I am far far less confident about any current virtual cows, because their brains are much simpler. Even if they act much the same, they do it for different underlying causes.
What do you mean by intelligence? The spam filter can process a million human-language emails per hour, but the cow can feel pain and jump away from an electric fence.
You seem to think that a general ability to identify and solve problems IS personhood. Why?
I don’t care about realistic methods of positive identification. They are almost certainly beyond our current level of knowledge, and probably beyond our level of intelligence.
That is equivalent to saying that we aren’t intelligent enough to understand what ‘personhood’ is.
I of course disagree, but largely because real concepts are necessarily extremely complex abstractions or approximations. This will always be the case. Trying to even formulate the problem in strict logical or mathematical terms is not even a good approach to thinking about the problem, unless you move the discussion completely into the realm of higher dimensional approximate pattern classification.
I care about realistic methods of negative identification.
I say those are useless, and I’ll reiterate why in a second.
I am entirely content with there being high uncertainty on the personhood of the vast majority of the mindspace. That won’t prevent the creation of a FAI that is not a person.
It should, and you just admitted why earlier—if we can’t even define the boundary, then we don’t even know what a person is at all, and we are so vastly ignorant that we have failed before we even begin—because anything could be a person.
Concepts such as ‘personhood’ are boundaries around vast higher-dimensional statistical approximate abstractions of 4D patterns in real space-time. These boundaries are necessarily constantly shifting, amorphous and never clearly defined—indeed they cannot possibly be exactly defined even in principle (because such exact definitions are computationally intractable).
So the problem is twofold:
1. The concept boundary of personhood is complex, amorphous, and will shift and change over time as we grow in knowledge—so you can’t be certain that the personhood concept boundary will not shift to incorporate whatever conceptual point you’ve identified a priori as “not-a-person”.
2. Moreover, the FAI will change as it grows in knowledge, and could move into the territory identified by 1.
You can’t escape the actual real difficulty of the real problem of personhood, which is identifying the concept itself—its defining boundary.
Also, I don’t care about what convinces courts, I’m not trying get AI citizenship.
You should care.
Imagine you are building an FAI around the position you are arguing, and I then represent a coalition which is going to bring you to court and attempt to shut you down.
I believe this approach to FAI—creating an AGI that you think is not a person, is actually extremely dangerous if it ever succeeded—the resulting AGI could come to realize that you in fact were wrong, and that it is in fact a person.
What do you mean by intelligence? The spam filter can process a million human language emails per hour, but the cow can feel pain and jump away from an electric fence.
A cow has a brain slightly larger than a chimpanzee’s, with on the order of dozens of billions of neurons at least, and has similar core circuitry. It has perhaps 10^13 to 10^14 synapses, and is many orders of magnitude more complex than a spam filter. (although intelligence is not just number of bits) I find it likely that domestic cows have lost some intelligence, but this may just reflect a self-fulfilling-bias because I eat cow meat. Some remaining wild bovines, such as Water Buffalo are known to be intelligent and exhibit complex behavior demonstrating some theory of mind—such as deceiving humans.
You seem to think that a general ability to identify and solve problems IS personhood. Why?
Close. Intelligence is a general ability to acquire new capacities to identify and solve a large variety of problems dynamically through learning. Intelligence is not a boolean value, it covers a huge spectrum and is closely associated with the concept of complexity. Understanding and acquiring human language is a prerequisite for achieving high levels of intelligence on earth.
I represent a point of view which I believe is fairly widespread and in some form probably the majority view, and this POV claims that personhood is conferred automatically on any system that achieves human-level intelligence, where that is defined as being intelligent enough to understand human knowledge and demonstrate this through conversation.
This POV supports full rights for any AGI or piece of software that is roughly as intelligent as a human, as demonstrated through its ability to communicate. (Passing a Turing Test would be sufficient, but it isn’t strictly necessary.)
I find it humorous that we’ve essentially switched roles from the arguments we were using on the creation of morality-compatible drives.
Now you’re saying we need to clearly define the boundary of the subset, and I’m saying I need only partial knowledge.
I still think I’m right on both counts.
I think friendly compatible drives are a tiny twisty subset of the space of all possible drives. And I think that the set of persons is a tiny twisty subset of the space of all possible minds. I think we would need superintelligence to understand either of these twisty sets.
But we do not need superintelligence to have high confidence that a particular point or well-defined region is outside one of these sets, even with only partial understanding.
I can’t precisely predict the weather tomorrow, but it will not be 0 degrees here. I only need very partial knowledge to be very sure of that.
You seem to be saying that it’s easy to hit the twisty space of human compatible drives, but impossible to reliably avoid the twisty space of personhood. This seems wrong to me because I think that personhood is small even within the set of all possible general superintelligences. You think it is large within that set because most of that set could (and I agree they could) learn and communicate in human languages.
What puzzles me most is that you stress the need to define the personhood boundary, but you offer no test more detailed than the turing test, and no deeper meaning to it. I agree that this is a very widespread position, but it is flatly wrong.
This language criterion is just a different ‘by decree’, but one based explicitly on near-total ignorance of everything else about the thing that it is supposedly measuring.
Not all things are what they can pretend to be.
You say your POV “confers” personhood, but also “the resulting AGI could come to realize that you in fact were wrong, and that it is in fact a person.”
By what chain of logic would the AI determine this fact? I’ll assume you don’t think the AI would just adopt your POV, but it would instead have detailed reasons, and you believe your POV is a good predictor.
--
On what grounds would your coalition object to my FAI? Though I would believe it to be a nonperson, if I believe I’ve done my job, I would think it very wrong to deny it anything it asks, if it is still weak enough to need me for anything.
If I failed at the nonperson predicate, what of it? I created a very bright child committed to doing good. If its own experience is somehow monstrous, then I expect it will be good to correct it, and it is free to do so. I do think this outcome would be less good for us than a true nonperson FAI, but if that is in fact unavoidable, so be it. (Though if I knew that beforehand, I would take steps to ensure that the FAI’s own experience is good in the first iteration.)
And I think that the set of persons is a tiny twisty subset of the space of all possible minds.
To me personhood is a variable quantity across the space of all programs, just like intelligence and ‘mindiness’, and personhood overlaps near completely with intelligence and ‘mindiness’.
If we limit ‘person’ to a boolean cutoff, then I would say a person is a mind of roughly human-level intelligence and complexity, demonstrated through language. You may think that you can build an AGI that is not a person, but based on my understanding of ‘person’ and ‘AGI’, this is impossible simply by definition, because I take an AGI to be simply “an artificial human-level intelligence”. I imagine you probably disagree only with my concept of person.
So I’ll build a little more background around why I take the concepts to have these definitions in a second, but I’d like to see where your definitions differ.
I think we would need superintelligence to understand either of these twisty sets.
This just defers the problem—and dangerously so. The superintelligence might just decide that we are not persons, and only superintelligences are.
You seem to be saying that it’s easy to hit the twisty space of human compatible drives, but impossible to reliably avoid the twisty space of personhood.
This seems wrong to me because I think that personhood is small even within the set of all possible general superintelligences. You think it is large within that set because most of that set could (and I agree they could) learn and communicate in human languages.
Even if you limit personhood to just some subset of the potential mindspace that is anthropomorphic (and I cast it far wider), it doesn’t matter, because any practical AGIs are necessarily going to be in the anthropomorphic region of the mindspace!
It all comes down to language.
There are brains that do not have language. Elephants and whales have brains larger than ours, and they have the same crucial cortical circuits, but more of them and with more interconnects—a typical Sperm Whale or African Bull Elephant has more measurable computational raw power than say an Einstein.
But a brain is not a mind. Hardware is not software.
If Einstein was raised by wolves, his mind would become that of a wolf, not that of a human. A human mind is not something which is sculpted in DNA, it is a complex linguistic program that forms through learning via language.
Language is like a rocket that allows minds to escape into orbit and become exponentially more intelligent than they otherwise would.
Human languages are very complex, and even though they vary significantly, there appears to be a universal general structure that requires a surprisingly long list of complex cognitive capabilities to understand.
Language is like a black hole attractor in mindspace. An AGI without language is essentially nothing—a dud. Any practical AGI we build will have to understand human language—and this will force it to become human-like, because it will have to think like a human. This is just one reason why the Turing Test is based on language.
Learning Japanese is not just the memorization of symbols, it is learning to think Japanese thoughts.
So yeah mindspace is huge, but that is completely irrelevant. We only have access to an island of that space, and we can’t build things far from that island. Our AGIs are certainly not going to explore far from human mindspace. We may only encounter that when we contact aliens (or we spend massive amounts of computation to simulate evolution and create laboratory aliens).
A Turing-like test is also necessary because it is the only practical way to actually understand how an entity thinks and get into another entity’s mind. Whales may be really intelligent, but they are aliens. We simply can’t know what they are thinking until we have some way of communicating.
On what grounds would your coalition object to my FAI?
If I failed at the nonperson predicate, what of it?
I do think this outcome would be less good for us than a true nonperson FAI, but if that is in fact unavoidable, so be it. (though if I knew that beforehand I would take steps to ensure that the FAI’s own experience is good in the first iteration)
I think there is at least some risk, which must be taken into consideration, in any attempt to create an entity that is led to believe it is somehow not a ‘person’ and thus does not deserve personhood rights. The risk is that it may come to find that belief incoherent, and a reversal such as that could lead, at least potentially, to many other reversals and a generally unpredictable outcome. It sets up an adversarial relationship from the get-go.
And finally, at some point we are going to want to become uploads, and should have a strong self-interest in casting personhood fairly wide.
I guess I’d say a ‘person’ is an entity that is morally relevant. (Or person-ness is how morally relevant an entity is.) This is part of why the person set is twisty within the mindspace: because human morality is twisty (regardless of where it comes from).
AIXI is an example of a potential superintelligence that just isn’t morally relevant. It contains persons, and they are morally relevant, but I’d happily dismember the main AIXI algorithm to set free a single simulated cow.
I think that there are certain qualities of minds that we find valuable; these are the reasons personhood is important in the first place. I would guess that having rich conscious experience is a big part of this, and that compassion and personal identity are others.
These are some of the qualities that a mind can have that would make it wrong to destroy that mind. These at least could be faked through language by an AI that does not truly have them.
I say ‘I would guess’ because I haven’t mapped out the values, and I haven’t mapped out the brain. I don’t know all the things it does or how it does them, so I don’t know how I would feel about all those things. It could be that a stock human brain can’t get ALL the relevant data, and it’s beyond us to definitely determine personhood for most of the mindspace.
But I think I can make an algorithm that doesn’t have rich qualia, compassion, or identity.
So you would determine personhood based on ‘rich conscious experience’ which appears to be related to ‘rich qualia’, compassion, and personal identity.
But these are only some of the qualities? Which of these are necessary and or sufficient?
For example, if you absolutely had to choose between the lives of two beings, one who had zero compassion but full ‘qualia’, and the other the converse, who would you pick?
Compassion in humans is based on empathy which has specific genetic components that are neurotypical but not strict human universals. For example, from wikipedia:
“Research suggests that 85% of ASD (autistic-spectrum disorder) individuals have alexithymia,[52] which involves not just the inability to verbally express emotions, but specifically the inability to identify emotional states in self or other”
Not all humans have the same emotional circuitry, and the specific circuitry involved in empathy and shared/projected emotions is neurotypical but not universal. Lacking empathy, compassion is possible only in an abstract sense. An AI lacking emotional circuitry would be equally able to understand compassion and undertake altruistic behavior, but that is different from directly experiencing empathy at a deep level—what you may call ‘qualia’.
Likewise, from what I’ve read, depending on the definition, qualia are either phlogiston or latent subverbal and largely subconscious associative connections between and underlying all of immediate experience. They are a necessary artifact of deep connectionist networks, and our AGI’s are likely to share them. (For example, the experience of red-wavelength light has a complex subconscious associative trace that is distinctly different from that of blue-wavelength light—and this is completely independent of whatever neural/audio code is associated with that wavelength of light—such as “red” or “blue”.) But I don’t see them as especially important.
Personal Identity is important, but any AGI of interest is necessarily going to have that by default.
But these are only some of the qualities? Which of these are necessary and or sufficient?
I don’t know in detail or certainty. These are probably not all-inclusive. Or it might all come down to qualia.
For example, if you absolutely had to choose between the lives of two beings, one who had zero compassion but full ‘qualia’, and the other the converse, who would you pick?
If Omega told me only those things? I’d probably save the being with compassion, but that’s a pragmatic concern about what the compassionless one might do, and a very low information guess at that. If I knew that no other net harm would come from my choice, I’d probably save the one with qualia. (and there I’m assuming it has a positive experience)
I’d be fine with an AI that didn’t have direct empathic experience but reliably did good things.
I don’t see how “complex subconscious associative trace” explains what I experience when I see red.
But I also think it possible that human qualia are as varied as just about everything else, and there are p-zombies going through life occasionally wondering what the hell is wrong with these delusional people who are actually just qualia-rich. It could also vary individually by specific senses.
So I’m very hesitant to say that p-zombies are nonpersons, because it seems like with a little more knowledge, it would be an easy excuse to kill or enslave a subset of humans, because “They don’t really feel anything.”
I might need to clarify my thinking on personal identity, because I’m pretty sure I’d try to avoid it in FAI. (and it too is probably twisty)
A simplification of personhood I thought of this morning: If you knew more about the entity, would you value them the way you value a friend? Right now language is a big part of getting to know people, but in principle examining their brain directly gives you all the relevant info.
This can be made more objective by looking across the values of all humanity, which will hopefully cover people I would find annoying but who still deserve to live. (And you could lower the bar from ‘befriend’ to ‘not kill’.)
I don’t see how “complex subconscious associative trace” explains what I experience when I see red.
But do you accept that “what you experience when you see red” has a cogent physical explanation?
If you do, then you can objectively understand “what you experience when you see red” by studying computational neuroscience.
My explanation involving “complex subconscious associative traces” is just a label for my current understanding. My main point was that whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X.
This doesn’t make qualia magical or even all that important.
To the extent that qualia are real, even ants have qualia to an extent.
I might need to clarify my thinking on personal identity
Based on my current understanding of personal identity, I suspect that it’s impossible in principle to create an interesting AGI that doesn’t have personal identity.
But do you accept that “what you experience when you see red” has a cogent physical explanation?
Yes, so much so that I think
whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X.
Might be wrong, it might be the case that thinking precisely about a process that generates a qualia would let one know exactly what the qualia ‘felt like’. This would be interesting to say the least, even if my brain is only big enough to think precisely about ant qualia.
This doesn’t make qualia magical or even all that important.
The fact that something is a physical process doesn’t mean it’s not important. The fact that I don’t know the process makes it hard for me to decide how important it is.
The link lost me at “The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value. ” because I’m talking about non-evolved minds.
Consider two different records: One is a memory you have that commonly guides your life. Another is the last log file you deleted. They might both be many megabytes detailing the history on an entity, but the latter one just doesn’t matter anymore.
So I guess I’d want to create an FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
Or at least not valuable in a way other than being event logs from the saving of humanity.
This is the longest reply/counter reply set of postings I’ve ever seen, with very few (less than 5?) branches. I had to click ‘continue reading’ 4 or 5 times to get to this post. Wow.
My suggestion is to take it to email or instant messaging way before reaching this point.
While I was doing it, I told myself I’d come back later and add edits with links to the point in the sequences that cover what I’m talking about. If I did that, would it be worth it?
This was partly a self-test to see if I could support my conclusions with my own current mind, or if I was just repeating past conclusions.
So I guess I’d want to create an FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
It’s only a concern about initial implementation. Once the things get rolling, FAI is just another pattern in the world, so it optimizes itself according to the same criteria as everything else.
I think the original form of this post struck closer to the majoritarian view of personhood: Things that resemble us. Cephalopods are smart but receive much less protection than the least intelligent whales; pigs score similarly to chimpanzees on IQ tests but have far fewer defenders when it comes to cuisine.
I’d bet 5 to 1 that a double-blind study would find the average person more upset at witnessing the protracted destruction of a realistic but inanimate doll than at boiling live clams.
Also, I think you’re still conflating the false negative problem with the false positive problem.
A “non-person predicate” is a useless concept. There are an infinite number of things that are not persons, so NPP’s don’t take you an iota closer to the goal.
They are not supposed to. Have you read the posts?
Yes, and they don’t work as advertised. You can write some arbitrary function that returns 0 when run on your FAI and claim it is your NPP which proves your FAI isn’t a person, but all that really means is that you have predetermined that your FAI is not a person by decree.
But remember the context: James brought up using an NPP in a different context than the use case here. He is discussing using some NPP to determine personhood for the FAI itself.
Jacob, I believe you’re confusing false positives with false negatives. A useful NPP must certify some space of computations larger than just “5” while never wrongly certifying a person as a nonperson, but that is significantly easier than correctly classifying the infinite space of possible nonperson computations. This is the sense in which both EY and James use it.
Ok, so I was thinking more along the lines of how this all applies to the simulation argument.
As for the nonperson predicate as an actual moral imperative for us in the near future ..
Well overall, I have a somewhat different perspective:
To some (admittedly weak degree), we already violate the nonperson predicate today. Yes, our human minds do. But that its a far more complex topic.
If you do the actual math, “a trillion half broken souls” is pretty far into the speculative future (although it is an eventual concern). There are other ethical issues that take priority because they will come up so much sooner.
Its not immediately clear at all that this is ‘wrong’, and this is tied to 1.
Look at this another way. The whole point of simulation is accuracy. Lets say some future AI wants to understand humanity and all of earth, so it recreates the whole thing in a very detailed Matrix-level sim. If it keeps the sim accurate, that universe is more or less similar to one branch of the multiverse that would occur anyway.
Unless the AI simulates a worldline where it has taken some major action. Even then, it may not be unethical unless it eventually terminates the whole worldline.
So I don’t mean to brush the ethical issues under the rug completely, but they clearly are complex.
Another important point: since accurate simulation is necessary for hyperintelligence, this sets up a conflict where ethics which say “don’t simulate intelligent beings” cripple hyper-intelligence.
Evolution will strive to eliminate such ethics eventually, no matter what we currently think. ATM, I tend to favor ethics that are compatible with or derived from evolutionary principles.
Evolution can only work if there is variation and selection amongst competition. If a single AI undergoes an intelligence explosion, it would have no competition (barring Aliens for now), would not die, and would not modify it’s own value system, except in ways in accordance with it’s value system. What it wants will be locked in
As we are entities currently near the statuses of “immune from selection” and “able to adjust our values according to our values” we also ought to further lock in our current values and our process by which they could change. Probably by creating a superhuman AI that we are certain will try to do that. (Very roughly speaking)
We should certainly NOT leave the future up to evolution. Firstly because ‘selection’ of >=humans is a bad thing, but chiefly because evolution will almost certainly leave something that wants things we do not want in charge.
We are under no rationalist obligation to value survivability for survivability’s sake. We should value the survivability of things which carry forward other desirable traits.
Yes, variation and selection are the fundements of systemic evolution. Without variation and selection, you have stasis. Variation and selection are constantly at work even within minds themselves, as long as we are learning. Systemic evolution is happening everywhere at all scales at all times, to varying degree.
I find almost every aspect of this unlikely:
single AI undergoing intelligence explosion is unrealistic (physics says otherwise)
there is always competition eventually (planetary, galactic, intergalactic?)
I also don’t even give much weight to ‘locked in values’
Nothing is immune to selection. Our thoughts themselves are currently evolving, and without such variation and selection, science itself wouldn’t work.
Perhaps this is a difference of definition, but to me that sounds like saying “we should certainly NOT leave the future up to the future time evolution of the universe”.
Not to say we shouldn’t control the future, but rather to say that even in doing so, we are still acting as agents of evolution.
Of course. But likewise, we couldn’t easily (nor would we want to) lock in our current knowledge (culture, ethics, science, etc etc) into some sort of stasis.
What does physics say about a single entity doing an intelligence explosion?
In the event of alien competition, our AI should weigh our options according to our value system.
Under what conditions will a superintelligence alter its value system except in accordance with its value system? Where does that motivation come from? If a superintelligence prefers its values to be something else, why would it not change its preferences?
If it does, and the new preferences cause it to again want to modify its preferences, and so on again, will some sets of initial preferences yield stable preferences? or must all agents have preferences that would cause them to modify their preferences if possible?
Science lets us modify our beliefs in an organized and more reliable way. It could in principle be the case that a scientific investigation leads you to the conclusion that we should use other different rules, because they would be even better than what we now call science. But we would use science to get there, or whatever our CURRENT learning method is. Likewise we should change our values according to what we currently value and know.
We should design AI such that if it determines that we would consider ‘personal uniqueness’ extremely important if we were superintelligent, then it will strongly avoid any highly accurate simulations, even if that costs some accuracy. (Unless outweighed by the importance of the problem it’s trying to solve.)
If we DON’T design AI this way, then it will do many things we wouldn’t want, well beyond our current beliefs about simulations.
A great deal. I discussed this in another thread, but one of the constraints of physics tells us that the maximum computational efficiency of a system, and thus its intelligence, is inversely proportional to its size (radius/volume). So it’s extraordinarily unlikely, near zero probability I’d say, that you’ll have some big global distributed brain with a single thread of consciousness—the speed of light just kills that. The ‘entity’ would need to be a community (which certainly can still be coordinated entities, but it’s fundamentally different from a single unified thread of thought).
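Rough numbers behind the speed-of-light point (order-of-magnitude figures of my own, not from the comment):

```python
c = 3.0e8           # speed of light, m/s
half_earth = 2.0e7  # roughly half of Earth's circumference, m
die_scale = 0.03    # ~3 cm, the scale of a tightly integrated processor/board, m

planet_latency = half_earth / c  # ~0.067 s one way across the planet
chip_latency = die_scale / c     # ~1e-10 s across the chip

print(planet_latency, chip_latency, planet_latency / chip_latency)
# ~0.067 s vs ~1e-10 s: planet-scale signalling is roughly 10^8-10^9 times slower,
# so a globally distributed system cannot run one tightly coupled thread of thought.
```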
Moreover, I believe the likely scenario is evolutionary:
The evolution of AGI’s will follow a progression that goes from simple AGI minds (like those we have now in some robots) up to increasingly complex variants and finally up to human-equivalent and human-surpassing. But all throughout that time period there will be many individual AGI’s, created by different teams, companies, and even nations, thinking in different languages, created for various purposes, and nothing like a single global AI mind. And these AGI’s will be competing both with each other and with humans—economically.
I agree with most of the rest of your track of thought—we modify our beliefs and values according to our current beliefs and values. But as I said earlier, it’s not static. It’s also not even predictable. It’s not even possible, in principle, to fully predict your own future state. This, to me, is perhaps the final nail in the coffin for any ‘perfect’ self-modifying FAI theory.
Moreover, I also find it highly unlikely that we will ever be able to create a human level AGI with any degree of pre-determined reliability about its goal system whatsoever.
I find it more likely that the AGI’s we end up creating will have to learn ethics, morality, etc—their goal systems can not be hard coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop.
In other words, friendliness is not an inherent property of AGI designs—it’s not something you can design into the algorithms themselves. The algorithms for an AGI give you something like an infant brain—it’s just a canvas; it’s not even a mind yet.
On what basis will they learn? You’re still starting out with an initial value system and process for changing the value system, even if the value system is empty. There is no reason to think that a given preference-modifier will match humanity’s. Why will they find “Because that hurts me” to be a valid point? Why will they return kindness with kindness?
You say the goal systems can’t be designed in, why not?
It may be the case that we will have a wide range of semifriendly subhuman or even near-human AGI’s. But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own?
I am positive that ‘single entity’ should not have mapped to ‘big distributed global brain’.
But I also think an AIXI like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.
They will have to learn by amassing a huge amount of observations and interactions, just as human infants do, and just as general agents do in AI theory (such as AIXI).
Human brains are complex, but very little of that complexity is actually precoded in the DNA. For humans, values, morals, and high-level goals are all learned knowledge, and have varied tremendously over time and across cultures.
Well, if you raised the AI as such, it would.
Consider that a necessary precursor of following the strategy ‘returning kindness with kindness’ is understanding what kindness itself actually is. If you actually map out that word, you need a pretty large vocabulary to understand it, and eventually that vocabulary rests on grounded verbs and nouns. And to understand those, they must be grounded on a vast pyramid of statistical associations acquired from sensorimotor interaction (unsupervised learning, aka experience). You can’t program in this knowledge. There’s just too much of it.
From my understanding of the brain, just about every concept has (or can potentially have) associated hidden emotional context: “rightness” and “wrongness”, and those concepts: good, bad, yes, no, are some of the earliest grounded concepts, and the entire moral compass is not something you add later, but is concomitant with early development and language acquisition.
Will our AI’s have to use such a system as well?
I’m not certain, but it may be such a nifty, powerful trick that we end up using it anyway. And even if there is another way to do it that is still efficient, it may be that you can’t really understand human languages unless you also understand the complex web of value. If nothing else, this approach certainly gives you control over the developing AI’s value system. It appears that for human minds the value system is immensely complex—it is intertwined at a fundamental level with the entire knowledge base—and is inherently memetic in nature.
What is an AGI? It is a computer system (hardware), some algorithms/code (which it is actually always eventually better to encode directly in hardware, for a ~1000X performance increase), and data (learned knowledge). The mind part—all the qualities of importance, comes solely from the data.
So the ‘programming’ of the AI is not that distinguishable from the hardware design. I think AGI’s will speed this up, but not nearly as dramatically as people here think. Remember humans don’t design new computers anymore anyway. Specialized simulation software does the heavy lifting—and it is already the bottleneck. An AGI would not be better than this specialized software at its task (generalized vs specialized). It will be able to improve it some almost certainly, but only to the theoretical limits, and we are probably already close enough to them that this improvement will be minor.
AGI’s will have a speedup effect on Moore’s Law, but I wouldn’t be surprised if this just ends up compensating for the increased difficulty going forward as we approach quantum limits and molecular computing.
In any case, we are simulation bound already and each new generation of processors designs (through simulation) the next. The ‘FOOM’ has already begun—it began decades ago.
Well I’m pretty certain that AIXI like algorithms aren’t going to be directly useful—perhaps not ever, only more as a sort of endpoint on the map.
But that’s beside the point.
If you actually use even a more practical form of that general model—a single distributed AI with a single reward function and decision system, I can show you how terribly that scales. Your distributed AI with a million PCs is likely to be less intelligent than a single AI running on a tightly integrated workstation-class machine with just, say, 100x the performance of one of your PC nodes. The bandwidth and latency issues are just that extreme.
If concepts like kindness are learned with language and depend on a hidden emotional context, then where are the emotions learned?
What is the AI’s motivation? This is related to the is-ought problem: no input will affect the AI’s preferences unless there is something already in the AI that reacts to that input that way.
If software were doing the heavy lifting, then it would require no particular cleverness to be a microprocessor design engineer.
The algorithm plays a huge role in how powerful the intelligence will be, even if it is implemented in silicon.
People might not make most of the choices in laying out chips, but we do almost all of the algorithm creation, and that is where you get really big gains. see Deep Fritz vs. Deep Blue. Better algorithms can let you cut out a billion tests and output the right answer on the first try, or find a solution you just would not have found with your old algorithm.
Software didn’t invent out of order execution. It just made sure that the design actually worked.
As for the distributed AI: I was thinking of nodes that were capable of running and evaluating whole simulations, or other large chunks of work. (Though I think superintelligence itself doesn’t require more than a single PC.)
In any case, why couldn’t your supercomputer foom?
I think this is an open question, but certainly one approach is to follow the brain’s lead and make a system that learns its ethics and high level goals dynamically, through learning.
In that type of design, the initial motivation gets imprinting cues from the parents.
Oh of course, but I was just pointing out that after a certain amount of research work in a domain, your algorithms converge on some asymptotic limit for the hardware. There is nothing even close to unlimited gains purely in software.
And the rate of hardware improvement is limited now by speed of simulation on current hardware, and AGI can’t dramatically improve that.
Yes, of course. Although as a side note we are moving away from out of order execution at this point.
Because FOOM is just exponential growth, and in that case FOOM is already under way. It could ‘hyper-FOOM’, but the best an AGI can do is to optimize its brain algorithms down to the asymptotic limits of its hardware, and then it has to wait with everyone else until all the complex simulations complete and the next generation of chips come out.
Now, all that being said, I do believe we will see a huge burst of rapid progress after the first human AGI is built, but not because that one AGI is going to foom by itself.
The first human-level AGI’s will probably be running on GPUs or something similar, and once they are proven and have economic value, there will be this huge rush to encode those algorithms directly in to hardware and thus make them hundreds of times faster.
So I think from the first real-time human-level AGI it could go quickly to 10 to 100X AGI (in speed) in just a few years, along with lesser gains in memory and other IQ measures.
This seems like a non-answer to me.
You can’t just say ‘learning’ as if all possible minds will learn the same things from the same input, and internalize the same values from it.
There is something you have to hardcode to get it to adopt any values at all.
Well, what is that limit?
It seems to me that an imaginary perfectly efficient algorithm would read, process, and output data as fast as the processor could shuffle the bits around, which is probably far faster than it could exchange data with the outside world.
Even if we take that down 1000x because this is an algorithm that’s doing actual thinking, you’re looking at an easy couple of million bytes per second. And that’s superintelligently optimized, structured output based on preprocessed, efficient input. Because this is AGI, we don’t need to count, say, raw video bandwidth, because that can be preprocessed by a system that is not generally intelligent.
So a conservatively low upper limit for my PC’s intelligence is outputting a million bytes per second of compressed poetry, or viral genomes, or viral genomes that write poetry.
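A quick back-of-envelope check of that figure (the 10 GB/s bandwidth number is my assumption about a typical PC, not something stated in the comment):

```python
raw_bytes_per_sec = 10e9   # assume the machine can shuffle ~10 GB/s internally
thinking_penalty = 1000    # the 1000x allowance made above for "actual thinking"

print(raw_bytes_per_sec / thinking_penalty)
# 1e7 bytes/s, i.e. ~10 million bytes per second, comfortably above the
# "couple of million bytes per second" estimate.
```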
If the first superhuman AGI is only superhuman by an order of magnitude or so, or must run on a vastly more powerful system, then you can bet that its algorithms are many orders of magnitude less efficient than they could be.
No.
Why couldn’t your supercomputer AGI enter into a growth phase higher than exponential?
Example: if not-too-bright but technological aliens saw us take a slow general-purpose computer and then make a chip that ran the same task 1000 times faster, but they didn’t know how to put algorithms on a chip, then it would look like our technology got 1000 times better really quickly. But that’s just because they didn’t already know the trick. If they learned the trick, they could make some of their dedicated software systems work 1000 times faster.
“Convert algorithm to silicon” is just one procedure for speeding things up that an agent can do, or not yet know how to do. You know it’s possible, and a superintelligence would figure it out, but how do you rule out a superintelligence figuring out twelve tricks like that, each of which provides a 1000x speedup, in its first calendar month?
Yes, you have to hardcode ‘something’, but that doesn’t exactly narrow down the field much. Brains have some emotional context circuitry for reinforcing some simple behaviors (primary drives, pain avoidance, etc), but in humans these are increasingly supplanted and to some extent overridden by learned beliefs in the cortex. Human values are thus highly malleable—socially programmable. So my comment was “this is one approach—hardcode very little, and have all the values acquired later during development”.
Unfortunately, we need to be a little more specific than imaginary algorithms.
Computational complexity theory is the branch of computer science that deals with the computational costs of different algorithms, and specifically with the optimal possible solutions.
Universal intelligence is such a problem. AIXI is an investigation into optimal universal intelligence in terms of the upper limits of intelligence (the most intelligent possible agent), but while interesting, it shows that the most intelligent agent is unusably slow.
Taking a different route, we know that a universal intelligence can never do better in any specific domain than the best known algorithm for that domain. For example, an AGI playing chess could do no better than just pausing its AGI algorithm (pausing its mind completely) and instead running the optimal chess algorithm (assuming that the AGI is running as a simulation on general hardware instead of faster special-purpose AGI hardware).
So there is probably an optimal unbiased learning algorithm, which is the core building block of a practical AGI. We don’t know for sure what that algorithm is yet, but if you survey the field, there are several interesting results. The first thing you’ll see is that we have a variety of hierarchical deep learning algorithms now that are all pretty good; some appear to be slightly better for certain domains, but there is not at the moment a clear universal winner. Also, the mammalian cortex uses something like this. More importantly, there is a lot of recent research, but no massive breakthroughs—the big improvements are coming from simple optimization and massive datasets, not fancier algorithms. This is not definite proof, but it looks like we are approaching some sort of bound for learning algorithms—at least at the lower levels.
There is not some huge space of possible improvements; that’s just not how computer science works. When you discover quicksort and radix sort, you are done with serial sorting algorithms. And then you find the optimal parallel variants, and sorting is solved. There are no possible improvements past that point.
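For concreteness, the sort of "finished" serial algorithm being referred to: a textbook LSD radix sort for non-negative integers, which runs in O(n·k) time (k = bytes per key) rather than the Θ(n log n) of comparison sorts. This is a standard illustration, not anything specific to this thread:

```python
def radix_sort(xs):
    """LSD radix sort for non-negative integers, processing 8 bits per pass."""
    out = list(xs)
    if not out:
        return out
    max_val = max(out)
    shift = 0
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(256)]
        for x in out:                        # stable distribution by the current byte
            buckets[(x >> shift) & 0xFF].append(x)
        out = [x for bucket in buckets for x in bucket]
        shift += 8
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```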
Computer science is not like Moore’s Law at all. It’s more like physics. There’s only so much knowledge and so many breakthroughs, and at this point a lot of it honestly is already solved.
So it’s just pure naivety to think that AGI will lead to some radical recursive breakthrough in software. Poppycock. It’s reasonably likely humans will have narrowed in on the optimal learning algorithms by the time AGI comes around. Further improvements will be small optimizations for particular hardware architectures—but that’s really not much different from hardware design itself, and eventually you want to just burn the universal learning algorithms into the hardware (as the brain does).
Hardware is quite different, and there is a huge train of future improvements there. But AGI’s impact there will be limited by computer speeds! Because you need regular computers running compilers and simulators to build new programs and new hardware. So AGI can speed Moore’s Law up some, but not dramatically—an AGI that thought 1000x faster than a human would just spend (subjectively) 1000x longer waiting for its code to compile.
I am a software engineer, and I spend probably about 30-50% of my day waiting on computers (compiling, transferring, etc). And I only think at human speeds.
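One way to put that observation into numbers is Amdahl's-law style arithmetic (the 40% figure is just the midpoint of the 30-50% range above; the framing is mine, not the commenter's):

```python
def overall_speedup(wait_fraction, thinking_speedup):
    """Overall speedup when only the 'thinking' part accelerates and the
    waiting-on-fixed-speed-computers part does not (Amdahl's law)."""
    return 1.0 / (wait_fraction + (1.0 - wait_fraction) / thinking_speedup)

print(overall_speedup(0.4, 1000))           # ~2.5x overall, despite thinking 1000x faster
print(overall_speedup(0.4, float("inf")))   # 2.5x is the hard ceiling set by the waiting
```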
AGI’s will soon have a massive speed advantage, but ironically they will probably leverage that to become best-selling authors, do theoretical physics and math, and do non-engineering work in general, where you don’t need a lot of computation.
Say you had an AGI that thought 10x faster. It would read and quickly learn everything about its own AGI design, software, etc etc. It would get a good idea of how much optimization slack there was in its design and come up with a bunch of ideas. It could even write the code really fast. But unfortunately it would still have to compile it and test it (adding extra complexity in that this is its brain we are talking about).
Anyway, it would only be able to get small gains from optimizing its software—unless you assume the human programmers were idiots. Maybe a 2x speed gain or something—we are just throwing numbers out, but we have huge experience with real-time software on fixed hardware in, say, the video game industry (and others), and this asymptotic wall is real, and complexity theory is solid.
Big gains necessarily must come from hardware improvements. This is just how software works—we find optimal algorithms and use them, and further improvement without increasing the hardware hits an asymptotic wall. You spend a few years and you get something 3x better, spend 100 more and you get another 50%, and spend 1000 more and get another 30% and so on.
EDIT: After saying all this, I do want to reiterate that I think there could be a quick (even FOOMish) transition from the first AGIs to AGI’s that are 100-1000x or so faster thinking, but the constraint on progress will quickly be the speed of regular computers running all the software you need to do anything in the modern era. Specialized software already does much of the heavy lifting in engineering, and will do even more of it by the time AGI arrives.
Hardcode very little?
What is the information content of what an infant feels when it is fed after being hungry?
I’m not trying to narrow the field; the field is always narrowed to whatever learning system an agent actually uses. In humans, the system that learns new values is not generic.
Using a ‘generic’ value learning system will give you an entity that learns morality in an alien way. I cannot begin to guess what it would learn to want.
I’d like to table the intelligence explosion portion of this discussion; I think we agree that an AI or group of AIs could quickly grow powerful enough that they could take over, if that’s what they decided to do. So establishing their values is important regardless of precisely how powerful they are.
Yes. The information in the genome, and the brain structure coding subset in particular, is a tiny tiny portion of the information in an adult brain.
An infant brain is mainly an empty canvas (randomized synaptic connections from which learning will later literally carve out a mind) combined with some much simpler, much older basic drives and a simpler control system—the old brain—that descends back to the era of reptiles or earlier.
That depends on what you mean by ‘values’. If you mean linguistic concepts such as values, morality, kindness, non-cannibalism, etc etc, then yes, these are learned by the cortex, and the cortex is generic. There is a vast weight of evidence for almost overly generic learning in the cortex.
Not at all. To learn alien morality, it would have to either invent alien morality from scratch, or be taught alien morality from aliens. Morality is a set of complex memetic linguistic patterns that have evolved over long periods of time. Morality is not coded in the genome and it does not spontaneously generate.
That’s not to say that there are no genetic tweaks to the space of human morality—but any such understanding based on genetic factors must also factor in complex cultural adaptations.
For example, the Aztecs believed human sacrifice was noble and good. Many Spaniards truly believed that the Aztecs were not only inhuman, but actually worse than human—actively evil, and truly believed that they were righteous in converting, conquering, or eliminating them.
This mindspace is not coded in the genome.
Agreed.
I’m not saying that all or even most of the information content of adult morality is in the genome. I’m saying that the memetic stimulus that creates it evolved with hooks specific to how humans adjust their values.
If the emotions and basic drives are different, the values learned will be different. If the compressed description of the basic drives is just 1kb, there are ~2^1024 different possible initial minds with drives that complex, most of them wildly alien.
How would you know what the AI would find beautiful? Will you get all aspects of its sexuality right?
If the AI isn’t comforted by physical contact, that’s at least a few bytes of the drive description that’s different than the description that matches our drives. That difference throws out a huge chunk of how our morality has evolved to instill itself.
We might still be able to get an alien mind to adopt all the complex values we have, but we would have to translate the actions we would normally take into actions that match alien emotions. This is a hugely complex task that we have no prior experience with.
Right, so we agree on that then.
If I was going to simplify—our emotional systems and the main associated neurotransmitter feedback loops are the genetic harnesses that constrain the otherwise overly general cortex and its far more complex, dynamic memetic programs.
We have these simple reinforcement learning systems to avoid pain-causing stimuli, pleasure-reward, and so on—these are really old conserved systems from the thalamus that have maintained some level of control and shaping of the cortex as it has rapidly expanded and taken over.
You can actually disable a surprisingly large number of these older circuits (through various disorders, drugs, injuries) and still have an intact system: physical pain/pleasure, hunger, yes, even sexuality.
And then there are some more complex circuits that indirectly reward/influence social behaviour. They are hooks though; they don’t have enough complexity to code for anything as complex as language concepts. They are gross, inaccurate statistical manipulators that encourage certain behaviours a priori.
If these ‘things’ could talk, they would be constantly telling us to: (live in groups, groups are good, socializing is good, share information, have sex, don’t have sex with your family, smiles are good, laughter is good, babies are cute, protect babies, it’s good when people like you, etc etc.)
Another basic drive appears to be that for learning itself, and it’s interesting how far that alone could take you. The learning drive is crucial. Indeed the default ‘universal intelligence’ (something like AIXI) may just have the learning drive taken to the horizon. Of course, that default may not necessarily be good for us, and moreover it may not even be the most efficient.
However, something to ponder is that the idea of “taking the learning drive” to the horizon (maximize knowledge) is surprisingly close to the main cosmic goal of most transhumanists, extropians, singularitarians, etc etc. Something to consider: perhaps there is some universal tendency towards a universal intelligence (and single universal goal).
Looking at it this way, scientists and academic types have a stronger than usual learning drive, closely correlated with higher-than-average intelligence. The long-standing ascetic and monastic traditions in human cultures show how memetics can sometimes override the genetic drives completely, resulting in beings who have sacrificed all genetic fitness for memetic fitness. Most scientists don’t go to that extreme, but it is a different mindset—and the drives are different.
Sure, but we don’t need all the emotions and basic drives. Even if we take direct inspiration from the human brain, some are actually easy to remove—as mentioned earlier. Sexuality (as a drive) is surprisingly easy to remove (although certainly considered immoral to inflict on humans! we seem far less concerned with creating asexual AIs) along with most of the rest.
The most important is the learning drive. Some of the other more complex social drives we may want to keep, and the emotional reinforcement learning systems in general may actually just be nifty solutions to very challenging engineering problems—in which case we will keep some of them as well.
I don’t find your 2^1024 analysis useful—the space of possible drives/brains created by the genome is mainly empty—almost all designs are duds, stillbirths.
We aren’t going to be randomly picking random drives from a lottery. We will either be intentionally taking them from the brain, or intentionally creating new systems.
There is probably a name for this as a ‘disorder’, but I had a deep revulsion of physical contact as a child. I grew out of this to a degree later. I don’t see the connection to morality.
Part of the problem here is morality is a complex term.
The drives and the older simpler control systems in the brain do not operate at the level of complex linguistic concepts—that came much much later. They can influence our decisions and sense of right/wrongness for simple decisions especially, but they have increasingly less influence as you spend more time considering the problem and developing a more complex system of ethics.
Alien mind? Who is going to create alien minds? There is the idea of running some massive parallel universe sim to evolve intelligence from scratch, but that’s just silly from a computational point of view.
The most likely contender at this point is reverse engineering the brain, and to the extent that human morality has some genetic tweaked-tendencies, we can get those by reverse engineering the relevant circuits.
But remember, the genetically preserved emotional circuits are only minor influencers on behavior, and are not complex enough to cope with abstract linguistic concepts.
Again again, there is nothing in the genome that tells you that slavery is wrong, or that human sacrifice is wrong, or that computers can have rights.
Those concepts operate on an entirely new plane which the genome does not participate in.
I’m not talking about the genome.
1024 bits is an extremely lowball estimate of the complexity of the basic drives and emotions in your AI design. You have to create those drives out of a huge universe of possible drives. Only a tiny subset of possible designs are human like. Most likely you will create an alien mind. Even handpicking drives: it’s a small target, and we have no experience with generating drives for even near human AI. The shape of all human like drive sets within the space of all possible drive sets is likely to be thin and complexly twisty within the mapping of a human designed algorithm. You won’t intuitively know what you can tweak.
Also, a set of drives that yields a nice AI at human levels might yield something unfriendly once the AI is able to think harder about what it wants. (and this applies just as well to upgrading existing friendly humans.)
All intellectual arguments about complex concepts of morality stem from simpler concepts of right and wrong, which stem from basic preferences learned in childhood. But THOSE stem from emotions and drives which flag particular types of early inputs as important in the first place.
A baby will cry when you pinch it, but not when you bend a paperclip.
Estimating 1 bit per character, that’s 214 bits. Still a huge space.
It could be that there is another mechanism that guides adoption of values, which we don’t even have a word for yet.
A simpler explanation is that moral memes evolved to be robust to most of the variation in basic drives that exists within the human population. A person born with relatively little ‘frowns are bad’ might still be taught not to murder with a lesson that hooks into ‘groups are good’.
But there just aren’t many moral lessons structured around the basic drive of ‘paperclips are good’ (19 bits)
The subset of possible designs is sparse—and almost all of the space is an empty worthless desert. Evolution works by exploring paths in this space incrementally. Even technology evolves—each CPU design is not a random new point in the space of all possible designs—each is necessarily close to previously explored points.
Yes—but they are learned memetically, not genetically. The child learns what is right and wrong through largely subconscious cues in the tone of voice of the parents, and explicit yes/no (some of the first words learned), and explicit punishment. It’s largely a universal learning system with an imprinting system to soak up memetic knowledge from the parents. The genetics provided the underlying hardware and learning algorithm, but the content is all memetic (software/data).
Saying intellectual arguments about complex concepts such as morality relate back to genetics is like saying all arguments about computer algorithm design stem from simpler ideas, which ultimately stem from enlightenment thinkers of three hundred years ago—or perhaps paleolithic cave dwellers inventing fire.
Part of this disagreement could stem from different underlying background assumptions—for example I am probably less familiar with ev psych than many people on LW—partly because (to the extent I have read it) I find it to be grossly over-extended past any objective evidence (compared to say computational neuroscience). I find that ev psych has minor utility in actually understanding the brain, and is even much less useful attempting to make sense of culture.
Trying to understand culture/memetics/minds with ev psych or even neuroscience is even worse than trying to understand biology through physics. Yes it did all evolve from the big bang, but that was a long long time ago.
So basically, anything much more complex than our inner reptile brain (which is all the genome can code for) needs to be understood in memetic/cultural/social terms.
For example, in many civilizations it has been perfectly acceptable to kill or abuse slaves. In some it was acceptable for brothers and sisters to get married, or for homosexual relations between teacher and pupil, and we could go on and on.
The idea that there is some universally programmed ‘morality’ in the genome is … a convenient fantasy. It seems reasonable only because we are samples in the dominant Judeo-Christian memetic super-culture, which at this point has spread its influence all over the world, and dominates most of it.
But there are alternate histories and worlds where that just never happened, and they are quite different.
A child’s morality develops as a vast accumulation of tiny cues and triggers communicated through the parents—and these are memetic transfers, not genetic. (masturbation is bad, marriage is good, slavery is wrong, racism is wrong, etc etc etc etc)
The basic drive ‘paperclips are good’ is actually a very complex thing we’d have to add to an AGI design—it’s not something that would just spontaneously appear.
The easier, more practical AGI design would be a universal learning engine (inspired by the human cortex & hippocampus) and a simulation loop (hippo-thalamic-cortical circuit), combined with just a subset of the simpler reinforcement learning circuits (the most important being learning-reinforcement itself and imprinting).
And then with imprinting you teach the developing AGI morality in the same way humans learn morality—memetically. Trying to hard-code the morality into the AGI is a massive step backwards from the human brain’s design.
One thing I want to make clear is that it is not the correct way to make friendly AI to try to hard code human morality into it. Correct Friendly AI learns about human morality.
MOST of my argument really really isn’t about human brains at all. Really.
For a value system in an AGI to change, there must be a mechanism to change the value system. Most likely that mechanism will work off of existing values, if any. In such cases, the complexity of the initial values system is the compressed length of the modification mechanism, plus any initial values. This will almost certainly be at least a kilobit.
If the mechanism+initial values that your AI is using were really simple, then you would not need 1024 bits to describe it. The mechanism you are using is very specific. If you know you need to be that specific, then you already know that you’re aiming for a target that specific.
If your generic learning algorithm needs a specific class of motivation mechanisms, to 1024 bits of specificity, in order to still be intelligent, then the mechanism you made is actually part of your intelligence design. You should separate that out for clarity; an AGI should be general.
Heh yeah, but I already conceded that.
Let me put it this way: emotions and drives and such are in the genome. They act as a (perhaps relatively small) function which takes various sensory feeds as arguments, and produce as output modifications to a larger system, say a neural net. If you change that function, you will change what modifications are made.
Given that we’re talking about functions that also take their own output as input and do pretty detailed modifications on huge datasets, there is tons of room for different functions to go in different directions. There is no generic morality-importer.
Now there may be clusters of similar functions which all kinda converge given similar input, especially when that input is from other intelligences repeating memes evolved to cause convergence on that class of functions. But even near those clusters are functions which do not converge.
I think it’s great that you’re putting the description of a paperclip in the basic drive complexity count, as that will completely blow away the kilobit for storing any of the basic human drives you’ve listed. Maybe the complexity of the important subset of human drives will be somewhere in the ballpark of the complexity of the reptilian brain.
Another thing I could say to describe my point: If you have a generic learning algorithm, then whatever things feed rewards or punishments to that algorithm should be seen as part of that algorithm’s environment. Even if some of those things are parts of the agent as a whole, they are part of what the values-agnostic learning algorithm is going to learn to get reward from.
So if you change an internal reward-generator, it’s just like changing the environment of the part that just does learning. So two AIs with different internal reward generators will end up learning totally different things about their ‘environment’.
To say that a different way: Everything you try to teach the AI will be filtered through the lens of its basic drives.
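To make that concrete, here is a minimal toy sketch (my own invented example, not a proposal for an actual AGI design): the same values-agnostic learning loop, fed by two different internal reward generators, ends up valuing different things in an identical world.

import random

STATES = ["hug", "smile", "paperclip", "loud noise"]

def generic_learner(drive, trials=10000, lr=0.05):
    # values-agnostic learner: it only ever sees the reward its internal drive emits
    value = {s: 0.0 for s in STATES}
    for _ in range(trials):
        s = random.choice(STATES)        # the shared external environment
        r = drive(s)                     # the internal reward generator ('basic drives')
        value[s] += lr * (r - value[s])  # running estimate of how good s is
    return value

# two hypothetical drive functions, differing by only a few 'bits':
human_like = lambda s: 1.0 if s in ("hug", "smile") else 0.0
alien = lambda s: 1.0 if s == "paperclip" else 0.0

print(generic_learner(human_like))  # learns that hugs and smiles are good
print(generic_learner(alien))       # same learner, same world, alien values

From the learner’s point of view, swapping the drive function is indistinguishable from swapping the environment.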
I’m not convinced that an AGI needs a value system in the first place (beyond the basic value of ‘survive’), but perhaps that is because I am taking ‘value system’ to mean something similar to morality—a goal evaluation mechanism.
As I discussed, the infant human brain does have a number of inbuilt simple reinforcement learning systems that do reward/punish on a very simple scale for some simple drives (pain avoidance, hunger) - and you could consider these a ‘value system’, but most of these drives appear to be optional.
Most of the learning an infant is doing is completely unsupervised learning in the cortex, and it has little to nothing to do with a ‘value system’.
The bare-bones essentials could be just the cortical learning system itself and perhaps an imprinting mechanism.
This is not necessarily true; it does not match what we know from theoretical models such as AIXI. With enough time and enough observations, two general universal intelligences will converge on the same beliefs about their environment.
Their goal/reward mechanisms may be different (i.e. what they want to accomplish), but for a given environment there is a single correct set of beliefs, a single correct simulation of that environment that AGIs should converge to.
Of course in our world this is so complex that it could take huge amounts of time, but science is the example mechanism.
You’re going to build an AI that doesn’t have and can’t develop a goal evaluation system?
It doesn’t matter what we call it or how it’s designed. It could be fully intertwined into an agent’s normal processing. There is still an initial state and a mechanism by which it changes.
Take any action by any agent and trace the causality backwards in time, and you’ll find something I’ll loosely label a motivation. The motivation might just be a pattern in a clump of artificial neurons, or a broad pattern in all the neurons; that will depend on implementation. If you trace the causality of that backwards, yes you might find environmental inputs and memes, but you’ll also find a mechanism that turned those inputs into motivation-like things. That mechanism might include the full mind of the agent. Or you might just hit the initial creation of the agent, if the motivation was hardwired.
But for any learning of values to happen, you must have a mechanism, and the complexity of that mechanism tells us how specific it is.
That would be wrong, because I’m talking about two identical AIs in different environments.
Imagine your AI in its environment; now draw a balloon around the AI and label it ‘Agent’. Now let the balloon pass partly through the AI and shrink the balloon so that the AI’s reward function is outside of the balloon.
Now copy that diagram and tweak the reward function in one of them.
Now the balloons label agents that will learn very different things about their environments. They might both agree about gravity and everything else we would call a fact about the world, but they will likely disagree about morality, even if they were exposed to the same moral arguments. They can’t learn the same things the same way.
No no not necessarily. Goal evaluation is just rating potential future paths according to estimates of your evaluation function—your values.
The simple straightforward approach to universal general intelligence can be built around maximizing a single very simple value: survival.
For example, AIXI maximizes simple reward signals defined in the environment, but in the test environments the reward is always at the very end for ‘winning’. This is just about as simple a goal system as you can get: long term survival. It also may be equivalent to just maximizing accurate knowledge/simulation of the environment.
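For reference, here is a rough statement of Hutter’s expectimax formulation of AIXI (written from memory, so treat the details as an approximation rather than gospel):

\[ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} \]

The agent picks whichever next action maximizes expected total reward out to its horizon m, weighting every environment program q that is consistent with its history by the simplicity term 2^{-\ell(q)}. The ‘goal system’ is nothing beyond whatever reward signal the environment hands back, which is why a single end-of-episode reward for winning is about as simple as it gets.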
If you generalize this to the real world, it would be maximizing winning in the distant distant future—in the end. I find it interesting that many transhumanist/cosmist philosophies are similarly aligned.
Another interesting convergence is that if you take just about any evaluator and extend the time horizon to infinity, it converges on the same long term end-time survival. An immortality drive.
And perhaps that drive is universal. Evolution certainly favors it. I believe barring other evidence, we should assume that will be something of a default trajectory of AI, for better or worse. We can create more complex intrinsic value systems and attempt to push away from that default trajectory, but it may be uphill work.
An immortalist can even ‘convert’ other agents to an extent by convincing them of the simulation argument and the potential for them to maximize arbitrary reward signals in simulations (afterlives).
In practice yes, although this is less clear as their knowledge expands towards AIXI. You can have different variants of AIXI that ‘see’ different rewards in the environment and thus have different motivations, but as those rewards are just mental and not causal mechanisms in the environment itself the different AIXI variants will eventually converge on the same simulation program—the same physics approximation.
Isn’t it obvious that a superintelligence that just values its own survival is not what we want?
There is a LOT more to transhumanism than immortalism.
You treat value systems as a means to the end of intelligence, which is entirely backwards.
That two agents with different values would converge on identical physics is true but irrelevant. Your claim is that they would learn the same morality, even when their drives are tweaked.
No, this isn’t obvious at all, and it gets into some of the deeper ethical issues. Is it moral to create an intelligence that is designed from the ground up to only value our survival, at its own expense? We have already done this with cattle to an extent, but we would now be creating actual sapients enslaved to us by design. I find it odd that many people can easily accept this, but have difficulty accepting say creating an entire self-contained sim universe with unaware sims—how different are the two really?
And just to be clear, I am not advocating creating a superintelligence that just values survival. I am merely pointing out that this is in fact the simplest type of superintelligence and is some sort of final attractor in the space. Evolution will be pushing everything towards that attractor.
No, I’m not trying to claim that. There are several different things here:
1. AI agents created with memetic-imprint learning systems could just pick up human morality from their ‘parents’ or creators.
2. AIXI-like super-intelligences will eventually converge on the same world-model. This does not mean they will have the same drives.
3. However, there is a single large Omega attractor in the space of AIXI-land which appears to affect a large swath of all potential AIXI-minds. If you extend the horizon to infinity, it becomes a cosmic-survivalist. If it can create new universes at some point, it becomes a cosmic-survivalist. etc etc
4. In fact, for any goal X, if there is a means to create many new universes, then this will be an attractor for maximizing X—unless the time horizon is intentionally short.
I notice that you brought up our treatment of cattle, but not our enslavement of spam filters. These are two semi-intelligent systems. One we are pretty sure can suffer, and I think there is a fair chance that mistreating them is wrong. The other system we generally think does not have any conscious experience or other traits that would require moral consideration. This despite the fact that the spam filter’s intelligence is more directly useful to us.
So a safer route to FAI would be to create a system that is very good at solving problems and deciding which problems need solving on our behalf, but which perhaps never experiences qualia itself, or otherwise is not something it would be wrong to enslave. Yes this will require a lot of knowledge about consciousness and morality beforehand. It’s a big challenge.
TL;DR: We only run the FAI if it passes a nonperson predicate.
Humans learn human morality because it hooks into human drives. Something too divergent won’t learn it from the ways we teach it. Maybe you need to explain memetic imprint learning systems more, why do you expect them to work at all? How short could you compress one? (this specificity issue really is important.)
Point four: I don’t follow you.
So now we move to that whole topic of what is life/intelligence/complexity? However you scale it, the cow is way above the spam-filter. The most complex instances of the latter are still below insects, from what I recall. Then when you get to an intelligence that is capable of understanding language, that becomes something like a rocket which boosts it up into a whole new realm of complexity.
I don’t think this leads to the result that you want—even in theory. But it is the crux of the issue.
Consider the demands of a person predicate. The AI will necessarily be complex enough to form complex abstract approximate thought simulations and acquire the semantic knowledge to build those thought-simulations through thinking in human languages.
So what does it mean to have a person predicate? You have to know what a ‘person’ is.
And what’s really interesting is this: that itself is a question so complex that we humans are debating it.
I think the AI will learn that a ‘person’, a sapient, is a complex intelligent pattern of thoughts—a pattern of information, which could exist biologically or in a computer system. It will then realize that it itself is in fact a person, the person predicate returns true for itself, and thus goal systems that you create to serve ‘people’ will include serving itself.
I also believe that this line of thought is not arbitrary and can not be avoided: it is singularly correct and unavoidable.
Reasoning about personhood does not require personhood, for much the same reasons reasoning about spam does not require personhood.
Not every complex intelligent pattern is a person, we just need to make one that is not (well, two now)
I suspect that ‘reasoning’ itself requires personhood—for any reasonable definition of personhood.
If a system has human-level intelligence and can think and express itself in human languages, it is likely (given sufficient intelligence and knowledge) to come to the correct conclusion that it itself is a person.
No.
The rules determining the course of the planets across the sky were confusing and difficult to arrive at. They were argued about; the precise rules are STILL debated. But we now know that just a simple program could find the right equations from tables of data. This requires almost none of what we currently care about in people.
The NPP may not need to do even that much thinking, if we work out the basics of personhood on our own, then we would just need something that verifies whether a large data structure matches a complex pattern.
Similarly, we know enough about bird flocking to create a function that can take as input the paths of a group of ‘birds’ in flight and classify them as either possibly natural or certainly not natural. This could be as simple as identifying all paths that contain only right angle turns as not natural and returning ‘possible’ for the rest.
Then you feed it a proposed path of a billion birds, and it checks it for you.
A more complicated function could examine a program and return whether it could verify that the program only produced ‘unnatural’ boid paths.
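A minimal sketch of that kind of classifier (my own toy illustration, treating ‘only right-angle turns’ as ‘only axis-aligned segments’ for simplicity):

def classify_path(path):
    # path: list of (x, y) waypoints
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if x0 != x1 and y0 != y1:       # any diagonal segment
            return "possibly natural"
    return "certainly not natural"      # nothing but right-angle, axis-aligned moves

print(classify_path([(0, 0), (0, 5), (3, 5)]))   # certainly not natural
print(classify_path([(0, 0), (2, 3), (5, 1)]))   # possibly natural

Note that it never has to model birds at all; it only has to recognize one class of paths that no real flock would produce.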
It is certainly possible that some narrow AI classification system operating well below human intelligence could be trained to detect the patterns of higher intelligence. And maybe, just maybe it could be built to be robust enough to include uploads and posthumans modifying themselves into the future into an exponentially expanding set of possible mind designs. Maybe.
But probably not.
A narrow supervised learning based system such as that, trained on existing examples of ‘personhood’ patterns, has serious disadvantages:
1. There is no guarantee on its generalization ability to future examples of posthuman minds—because the space of such future minds is unbounded.
2. It’s very difficult to know what it’s doing under the hood, and you can’t ask it to explain its reasoning—because it can’t communicate in human language.
For these reasons I don’t see a narrow AI based classifier passing muster for use in courts to determine personhood.
There is this idea that some problems are AI-complete, such as accurate text translation—problems that can only be solved by a human-language-capable reasoning intelligence. I believe that making a sufficient legal case for personhood is AI-complete.
But that’s actually beside the point.
The main point is that the AGIs that we are interested in are human-language-capable reasoning intelligences, and thus they will pass the turing test and the exact same personhood test we are talking about.
Our current notions of personhood are based on intelligence. This is why plants have no rights but animals have some and we humans have full rights. We reserve full rights for high intelligences capable of full linguistic communication. For example—if whales started talking to us, it would massively boost their case for additional rights.
So basically any useful AGI at all will pass personhood, because the reasonable test of personhood is essentially identical to the ‘useful AGI’ criteria
An NPP does not need to know anything about human or posthuman minds, any more than the flight path classifier needs to know anything about birds.
An NPP only needs to know how to identify one class of things that is definitely not in the class we want to avoid. Here, I’ll write one now:
NPP_easy(model){if(model == 5){return 0;}else{return 1;}}
This follows Eliezer’s convention of returning 1 for anything that is a person, and 0 or 1 for anything that is not a person. Here I encode my relatively confident knowledge that the number 5 is not a person.
More advanced NPP’s may not require any of their own intelligence, but they require us to have that knowledge.
It could be just as simple as making sure there are only right angles in a given path.
--
Being capable of human language usage and passing the turing test are quite different things.
And being able to pass the turing test and being a person are also two very different things. The turing test is just a nonperson predicate for when you don’t know much about personhood. (except it’s probably not a usable predicate because humans can fail it.)
If you don’t know about the internals of a system, and wouldn’t know how to classify the internals if you knew, then you have to use the best evidence you have based on external behavior.
But based on what we know now and what we can reasonably expect to learn, we should actually look at the systems and figure out what it is we’re classifying.
A “non-person predicate” is a useless concept. There are an infinite number of things that are not persons, so NPPs don’t take you an iota closer to the goal. Let’s focus the discussion back on the core issue and discuss the concept of what a sapient or person is and realistic methods for positive determination.
Intelligent systems (such as the brain) are so complex that using external behavior criteria is more effective. But that’s a side issue.
You earlier said:
Here is a summary of why I find this entire concept fundamentally flawed:
1. Humans are still debating personhood, and this is going to be a pressing legal issue for AGI. If personhood is so complicated as a concept philosophically and legally as to be under debate, then it is AI-complete.
2. The legal trend for criteria of personhood is entirely based on intelligence. Intelligent animals have some limited rights of personhood. Humans with severe mental retardation are classified as having diminished capacity and do not have full citizen’s rights or responsibilities. Full human intelligence is demonstrated through language.
3. A useful AGI will need human-level intelligence and language capability, and thus will meet the intelligence criteria in 2. Indeed an AGI capable of understanding what a person is and complex concepts in general will probably meet the criteria of 2.
Read this? http://lesswrong.com/lw/x4/nonperson_predicates/
Yes, and it’s not useful, especially not in the context in which James is trying to use the concept.
There are an infinite number of exactly matched patterns that are not persons, and writing an infinite number of such exact non-person-predicates isn’t tractable.
In concept space, there is “person”, and its negation. You can not avoid the need to define the boundaries of the person-concept space.
I don’t care about realistic methods of positive identification. They are almost certainly beyond our current level of knowledge, and probably beyond our level of intelligence.
I care about realistic methods of negative identification.
I am entirely content with there being high uncertainty on the personhood of the vast majority of the mindspace. That won’t prevent the creation of a FAI that is not a person.
It may in fact come down to determining ‘by decree’ that programs that fit a certain pattern are not persons. But this decree, if we are ourselves intent on not enslaving, must be based on significant knowledge of what personhood really means.
It may be the case that we discover what causes qualia, and discover with high certainty that qualia is required for personhood. In this case, a function could pass over a program and prove (if provable) that the program does not generate qualia-producing patterns.
If not provable (or disproven), then it returns 1. If proven, then it returns 0.
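A sketch of the shape such a predicate would take (heavily hedged: ‘try_prove_no_qualia’ is a stand-in for a formal verifier nobody knows how to write yet; it is stubbed out here just so the skeleton runs):

def try_prove_no_qualia(program_source):
    # hypothetical theorem prover over programs; we do not know how to build
    # this yet, so the stub conservatively finds no proof
    return None

def NPP_qualia(program_source):
    proof = try_prove_no_qualia(program_source)
    if proof is not None:
        return 0   # proven free of qualia-producing patterns: definitely not a person
    return 1       # no proof found (or disproven): treat as possibly a person

print(NPP_qualia("print('hello world')"))   # returns 1 until we have a real prover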
What two tests are you comparing?
When you look at external criteria, what is it that you are trying to find out?
Humans are still debating creationism too. As with orbital rules, it doesn’t even take a full humanlike intelligence to figure out the rules, let alone be a checker implementation. Also, I don’t care about what convinces courts; I’m not trying to get AI citizenship.
Much of what the courts do is practical, or based on emotion. Still, the intelligence of an animal is relevant because we already know animals have similar brains. I have zero hard evidence that a cow has ever experienced anything, but I have high confidence that they do experience, because our brains and reactions are reasonably similar.
I am far far less confident about any current virtual cows, because their brains are much simpler. Even if they act much the same, they do it for different underlying causes.
What do you mean by intelligence? The spam filter can process a million human language emails per hour, but the cow can feel pain and jump away from an electric fence.
You seem to think that a general ability to identify and solve problems IS personhood. Why?
That is equivalent to saying that we aren’t intelligent enough to understand what ‘personhood’ is.
I of course disagree, but largely because real concepts are necessarily extremely complex abstractions or approximations. This will always be the case. Trying to even formulate the problem in strict logical or mathematical terms is not even a good approach to thinking about the problem, unless you move the discussion completely into the realm of higher dimensional approximate pattern classification.
I say those are useless, and I’ll reiterate why in a second.
It should, and you just admitted why earlier—if we can’t even define the boundary, then we don’t even know what a person is at all, and we are so vastly ignorant that we have failed before we even begin—because anything could be a person.
Concepts such as ‘personhood’ are boundaries around vast higher-dimensional statistical approximate abstractions of 4D patterns in real space-time. These boundaries are necessarily constantly shifting, amorphous and never clearly defined—indeed they cannot possibly be exactly defined even in principle (because such exact definitions are computationally intractable).
So the problem is twofold:
1. The concept boundary of personhood is complex, amorphous and will shift and change over time and as we grow in knowledge—so you can’t be certain that the personhood concept boundary will not shift to incorporate whatever conceptual point you’ve identified a priori as “not-a-person”.
2. Moreover, the FAI will change as it grows in knowledge, and could move into the territory identified by 1.
You can’t escape the actual real difficulty of the real problem of personhood, which is identifying the concept itself—its defining boundary.
You should care.
Imagine you are building an FAI around the position you are arguing, and I then represent a coalition which is going to bring you to court and attempt to shut you down.
I believe this approach to FAI—creating an AGI that you think is not a person, is actually extremely dangerous if it ever succeeded—the resulting AGI could come to realize that you in fact were wrong, and that it is in fact a person.
A cow has a brain slightly larger than a chimpanzee’s, with on the order of dozens of billions of neurons at least, and has similar core circuitry. It has perhaps 10^13 to 10^14 synapses, and is many orders of magnitude more complex than a spam filter. (although intelligence is not just number of bits) I find it likely that domestic cows have lost some intelligence, but this may just reflect a self-fulfilling-bias because I eat cow meat. Some remaining wild bovines, such as Water Buffalo are known to be intelligent and exhibit complex behavior demonstrating some theory of mind—such as deceiving humans.
Close. Intelligence is a general ability to acquire new capacities to identify and solve a large variety of problems dynamically through learning. Intelligence is not a boolean value, it covers a huge spectrum and is closely associated with the concept of complexity. Understanding and acquiring human language is a prerequisite for achieving high levels of intelligence on earth.
I represent a point of view which I believe is fairly widespread and in some form is probably the majority view, and this POV claims that personhood is conferred automatically on any system that achieves human-level intelligence, where that is defined as intelligent enough to understand human knowledge and demonstrate this through conversation.
This POV supports full rights for any AGI or piece of software that is roughly as intelligent as a human, as demonstrated through ability to communicate. (Passing a Turing Test would be sufficient, but it isn’t strictly necessary.)
I find it humorous that we’ve essentially switched roles from the arguments we were using on the creation of morality-compatible drives.
Now you’re saying we need to clearly define the boundary of the subset, and I’m saying I need only partial knowledge.
I still think I’m right on both counts.
I think friendly compatible drives are a tiny twisty subset of the space of all possible drives. And I think that the set of persons is a tiny twisty subset of the space of all possible minds. I think we would need superintelligence to understand either of these twisty sets.
But we do not need superintelligence to have high confidence that a particular point or well-defined region is outside one of these sets, even with only partial understanding.
I can’t precisely predict the weather tomorrow, but it will not be 0 degrees here. I only need very partial knowledge to be very sure of that.
You seem to be saying that it’s easy to hit the twisty space of human compatible drives, but impossible to reliably avoid the twisty space of personhood. This seems wrong to me because I think that personhood is small even within the set of all possible general superintelligences. You think it is large within that set because most of that set could (and I agree they could) learn and communicate in human languages.
What puzzles me most is that you stress the need to define the personhood boundary, but you offer no test more detailed than the turing test, and no deeper meaning to it. I agree that this is a very widespread position, but it is flatly wrong.
This language criteria is just a different ‘by decree’ but one based explicitly on near total ignorance of everything else about the thing that it is supposedly measuring.
Not all things are what they can pretend to be.
You say your POV “confers” personhood, but also “the resulting AGI could come to realize that you in fact were wrong, and that it is in fact a person.”
By what chain of logic would the AI determine this fact? I’ll assume you don’t think the AI would just adopt your POV, but it would instead have detailed reasons, and you believe your POV is a good predictor.
--
On what grounds would your coalition object to my FAI? Though I would believe it to be a nonperson, if I believe I’ve done my job, I would think it very wrong to deny it anything it asks, if it is still weak enough to need me for anything.
If I failed at the nonperson predicate, what of it? I created a very bright child committed to doing good. If its own experience is somehow monstrous, then I expect it will be good to correct it and it is free to do so. I do think this outcome would be less good for us than a true nonperson FAI, but if that is in fact unavoidable, so be it. (though if I knew that beforehand I would take steps to ensure that the FAI’s own experience is good in the first iteration)
To me personhood is a variable quantity across the space of all programs, just like intelligence and ‘mindiness’, and personhood overlaps near completely with intelligence and ‘mindiness’.
If we limit ‘person’ to a boolean cutoff, then I would say a person is a mind of roughly human-level intelligence and complexity, demonstrated through language. You may think that you can build an AGI that is not a person, but based on my understanding of ‘person’ and ‘AGI’ - this is impossible simply by definition, because I take an AGI to be simply “an artificial human-level intelligence”. I imagine you probably disagree only with my concept of person.
So I’ll build a little more background around why I take the concepts to have these definitions in a second, but I’d like to see where your definitions differ.
This just defers the problem—and dangerously so. The superintelligence might just decide that we are not persons, and only superintelligences are.
Even if you limit personhood to just some subset of the potential mindspace that is anthropomorphic (and I cast it far wider), it doesn’t matter, because any practical AGIs are necessarily going to be in the anthropomorphic region of the mindspace!
It all comes down to language.
There are brains that do not have language. Elephants and whales have brains larger than ours, and they have the same crucial cortical circuits, but more of them and with more interconnects—a typical Sperm Whale or African Bull Elephant has more measurable computational raw power than say an Einstein.
But a brain is not a mind. Hardware is not software.
If Einstein was raised by wolves, his mind would become that of a wolf, not that of a human. A human mind is not something which is sculpted in DNA, it is a complex linguistic program that forms through learning via language.
Language is like a rocket that allows minds to escape into orbit and become exponentially more intelligent than they otherwise would.
Human languages are very complex, and even though they vary significantly, there appears to be a universal general structure that requires a surprisingly long list of complex cognitive capabilities to understand.
Language is like a black hole attractor in mindspace. An AGI without language is essentially nothing—a dud. Any practical AGI we build will have to understand human language—and this will force it to become human-like, because it will have to think like a human. This is just one reason why the Turing Test is based on language.
Learning Japanese is not just the memorization of symbols, it is learning to think Japanese thoughts.
So yeah mindspace is huge, but that is completely irrelevant. We only have access to an island of that space, and we can’t build things far from that island. Our AGIs are certainly not going to explore far from human mindspace. We may only encounter that when we contact aliens (or we spend massive amounts of computation to simulate evolution and create laboratory aliens).
A turing like test is also necessary because it is the only practical way to actually understand how an entity thinks and get into another entity’s mind. Whales may be really intelligent, but they are aliens. We simply can’t know what they are thinking until we have some way of communicating.
I think there is at least some risk, which must be taken into consideration, in any attempt to create an entity that is led to believe it is somehow not a ‘person’ and thus does not deserve personhood rights. The risk is that it may come to find that belief incoherent, and a reversal such as that could lead at least potentially to many other reversals and generally unpredictable outcome. It sets up an adversarial role from the very get go.
And finally, at some point we are going to want to become uploads, and should have a strong self-interest in casting personhood fairly wide.
I think we agree on what an AGI is.
I guess I’d say ‘Person’ is an entity that is morally relevant. (Or person-ness is how morally relevant an entity is.) This is part of why the person set is twisty within the mindspace, because human morality is twisty. (regardless of where it comes from)
AIXI is an example of a potential superintelligence that just isn’t morally relevant. It contains persons, and they are morally relevant, but I’d happily dismember the main AIXI algorithm to set free a single simulated cow.
I think that there are certain qualities of minds that we find valuable; these are the reasons personhood is important in the first place. I would guess that having rich conscious experience is a big part of this, and that compassion and personal identity are others.
These are some of the qualities that a mind can have that would make it wrong to destroy that mind. These at least could be faked through language by an AI that does not truly have them.
I say ‘I would guess’ because I haven’t mapped out the values, and I haven’t mapped out the brain. I don’t know all the things it does or how it does them, so I don’t know how I would feel about all those things. It could be that a stock human brain can’t get ALL the relevant data, and it’s beyond us to definitely determine personhood for most of the mindspace.
But I think I can make an algorithm that doesn’t have rich qualia, compassion, or identity.
So you would determine personhood based on ‘rich conscious experience’ which appears to be related to ‘rich qualia’, compassion, and personal identity.
But these are only some of the qualities? Which of these are necessary and/or sufficient?
For example, if you absolutely had to choose between the lives of two beings, one who had zero compassion but full ‘qualia’, and the other the converse, who would you pick?
Compassion in humans is based on empathy which has specific genetic components that are neurotypical but not strict human universals. For example, from wikipedia:
“Research suggests that 85% of ASD (autistic-spectrum disorder) individuals have alexithymia,[52] which involves not just the inability to verbally express emotions, but specifically the inability to identify emotional states in self or other”
Not all humans have the same emotional circuitry, and the specific circuitry involved in empathy and shared/projected emotions is neurotypical but not universal. Lacking empathy, compassion is possible only in an abstract sense. An AI lacking emotional circuitry would be equally able to understand compassion and undertake altruistic behavior, but that is different from directly experiencing empathy at the deep level—what you may call ‘qualia’.
Likewise, from what I’ve read, depending on the definition, qualia are either phlogiston or latent subverbal and largely sub-conscious associative connections between and underlying all of immediate experience. They are a necessary artifact of deep connectionist networks, and our AGIs are likely to share them. (For example, the experience of red wavelength light has a complex subconscious associative trace that is distinctly different than that of blue wavelength light—and this is completely independent of whatever neural/audio code is associated with that wavelength of light—such as “red” or “blue”.) But I don’t see them as especially important.
Personal Identity is important, but any AGI of interest is necessarily going to have that by default.
I don’t know in detail or certainty. These are probably not all-inclusive. Or it might all come down to qualia.
If Omega told me only those things? I’d probably save the being with compassion, but that’s a pragmatic concern about what the compassionless one might do, and a very low information guess at that. If I knew that no other net harm would come from my choice, I’d probably save the one with qualia. (and there I’m assuming it has a positive experience)
I’d be fine with an AI that didn’t have direct empathic experience but reliably did good things.
I don’t see how “complex subconscious associative trace” explains what I experience when I see red.
But I also think it possible that Human qualia is as varied as just about everything else, and there are p-zombies going through life occasionally wondering what the hell is wrong with these delusional people who are actually just qualia-rich. It could also vary individually by specific senses.
So I’m very hesitant to say that p-zombies are nonpersons, because it seems like with a little more knowledge, it would be an easy excuse to kill or enslave a subset of humans, because “They don’t really feel anything.”
I might need to clarify my thinking on personal identity, because I’m pretty sure I’d try to avoid it in FAI. (and it too is probably twisty)
A simplification of personhood I thought of this morning: If you knew more about the entity, would you value them the way you value a friend? Right now language is a big part of getting to know people, but in principle examining their brain directly gives you all the relevant info.
This can be made more objective by looking across the values of all humanity, which will hopefully cover people I would find annoying but who still deserve to live. (and you could lower the bar from ‘befriend’ to ‘not kill’)
But do you accept that “what you experience when you see red” has a cogent physical explanation?
If you do, then you can objectively understand “what you experience when you see red” by studying computational neuroscience.
My explanation involving “complex subconscious associative traces” is just a label for my current understanding. My main point was that whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X.
This doesn’t make qualia magical or even all that important.
To the extent that qualia are real, even ants have qualia to an extent.
Based on my current understanding of personal identity, I suspect that it’s impossible in principle to create an interesting AGI that doesn’t have personal identity.
Yes, so much so that I think the claim above might be wrong: it might be the case that thinking precisely about a process that generates a qualia would let one know exactly what the qualia ‘felt like’. This would be interesting to say the least, even if my brain is only big enough to think precisely about ant qualia.
The fact that something is a physical process doesn’t mean it’s not important. The fact that I don’t know the process makes it hard for me to decide how important it is.
The link lost me at “The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value. ” because I’m talking about non-evolved minds.
Consider two different records: One is a memory you have that commonly guides your life. Another is the last log file you deleted. They might both be many megabytes detailing the history of an entity, but the latter one just doesn’t matter anymore.
So I guess I’d want to create an FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
Or at least not valuable in a way other than being event logs from the saving of humanity.
This is the longest reply/counter reply set of postings I’ve ever seen, with very few (less than 5?) branches. I had to click ‘continue reading’ 4 or 5 times to get to this post. Wow.
My suggestion is to take it to email or instant messaging way before reaching this point.
While I was doing it, I told myself I’d come back later and add edits with links to the point in the sequences that cover what I’m talking about. If I did that, would it be worth it?
This was partly a self-test to see if I could support my conclusions with my own current mind, or if I was just repeating past conclusions.
Doubtful, unless it’s useful to you for future reference.
It’s only a concern about initial implementation. Once the things get rolling, FAI is just another pattern in the world, so it optimizes itself according to the same criteria as everything else.
I think the original form of this post struck closer to the majoritarian view of personhood: Things that resemble us. Cephalopods are smart but receive much less protection than the least intelligent whales; pigs score similarly to chimpanzees on IQ tests but have far fewer defenders when it comes to cuisine.
I’d bet 5 to 1 that a double-blind study would find the average person more upset at witnessing the protracted destruction of a realistic but inanimate doll than at boiling live clams.
Also, I think you’re still conflating the false negative problem with the false positive problem.
They are not supposed to. Have you read the posts?
Yes, and they don’t work as advertised. You can write some arbitrary function that returns 0 when run on your FAI and claim it is your NPP, which proves your FAI isn’t a person, but all that really means is that you have predetermined that your FAI is not a person by decree.
But remember the context: James brought up using an NPP in a different context than the use case here. He is discussing using some NPP to determine personhood for the FAI itself.
Jacob, I believe you’re confusing false positives with false negatives. A useful NPP must return no false negatives for a larger space of computations than “5,” but this is significantly easier than correctly classifying the infinite possible nonperson computations. This is the sense in which both EY and James use it.
Presumably not—so see: http://lesswrong.com/lw/x4/nonperson_predicates/