So why aren’t MIRI’s claims accepted by the mainstream, then?
Because they’ve never heard of them. I am not joking. Most computer scientists are not working in artificial intelligence, have not the slightest idea that there exists a conference on AGI backed by Google and held every single year, and certainly have never heard of Hutter’s “Universal AI” that treats the subject with rigorous mathematics.
In their ignorance, they believe that the principles of intelligence are a highly complex “emergent” phenomenon for neuroscientists to figure out over decades of slow, incremental toil. Since most of the public, including their scientifically-educated colleagues, already believe this, it doesn’t seem to them like a strange belief to hold, and besides, anyone who reads even a layman’s introduction to neuroscience finds out that the human brain is extremely complicated. Given the evidence that the only known actually-existing minds are incredibly complicated, messy things, it is somewhat more rational to believe that minds are all incredibly complicated, messy things, and thus to dismiss anyone talking about working “strong AI” as a science-fiction crackpot.
How are they supposed to know that the actual theory of intelligence is quite simple, and the hard part is fitting it inside realizable, finite computers?
Also, the dual facts that Eliezer has no academic degree in AI and that plenty of people who do have such degrees have turned out to be total crackpots anyway mean that the scientific public and the “public public” are really quite entitled to their belief that the base rate of crackpottery among people talking about knowing how AI works is quite high. It is high! But it’s not 100%.
(How did I tell the crackpottery apart from the real science? Well, frankly, I looked for patterns that appeared to have come from the process of doing real science: instead of a grand revelation, I looked for a slow build-up of ideas that were each ground out into multiple publications. I also filtered for AGI theorists who managed to apply their broad-AGI principles to narrower machine-learning problems, again resulting in published papers. I looked for a theory that sounded like programming rather than like psychology. Hence my zeroing in on Schmidhuber, Hutter, Legg, Orseau, etc. as the AGI Theorists With a Clue.
Hutter, by the way, has written a position paper about potential Singularities in which he actually cites Yudkowsky, so hey.)
OK then. Among the scientists who have heard of them and bothered to have an opinion on the topic, does the opinion that MIRI is correct dominate? And if not, why not, given your account that the evidence unambiguously points in only one direction?
actual theory of intelligence is quite simple
I don’t think I’m going to believe you about that. The fact that in some contexts it’s convenient to define intelligence as a cross-domain optimizer does not mean that it is nothing but.
Then just put the word aside and refer to meanings. New statement: given unlimited compute-power, a cross-domain optimization algorithm is simple. Agreed?
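For concreteness, the kind of thing I mean is Hutter’s AIXI. What follows is my from-memory rendering of its defining equation, so treat the exact notation as approximate:

\[
\dot a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\,a_{1:m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Here U is a universal monotone Turing machine, q ranges over its programs, ℓ(q) is the length of q in bits, the o and r are observations and rewards, and m is the horizon. The whole agent is one line of math; everything hard lives in the fact that the sum over programs and the search over futures are nowhere near computable on real hardware.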
OK then. Among the scientists who have heard of them and bothered to have an opinion on the topic, does the opinion that MIRI is correct dominate?
I honestly do not know of any comprehensive survey or questionnaire, and refuse to speculate in the absence of data. If you know of such a survey, I’d be interested to see it.
New statement: given unlimited compute-power, a cross-domain optimization algorithm is simple. Agreed?
First, I’m not particularly interested in infinities. Truly unlimited computing power implies, for example, that you can just do an exhaustive brute-force search through the entire solution space and be done in an instant. Simple, yes, but not very meaningful.
Second, no, I do not agree, because you’re sweeping under the rug the complexities of, for example, applying your cost function to different domains. You can construct sufficiently simple optimizers, it’s just that they won’t be very… intelligent.
What cost function? It’s a reinforcement learner.
cost function = utility function = fitness function = reward (all with appropriate signs)
Right, but when dealing with a reinforcement learner like AIXI, it has no fixed cost function that it has to somehow shoehorn into dealing with different computational/conceptual domains. How the environment responds to AIXI’s actions and how the environment rewards AIXI are learned phenomena, so the only planning algorithm is expectimax. The implicit “reward function” being learned might be simple or might be complicated, but that doesn’t matter: AIXI will learn it by updating its distribution of probabilities across Turing machine programs just as well, either way.
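To make “the only planning algorithm is expectimax” concrete, here is a toy sketch in Python. This is hypothetical code of my own, a finite caricature of AIXI’s planning step rather than the real thing: a Bayes-adaptive expectimax over a handful of candidate environment models, with the model weights updated inside the lookahead exactly as they would be on real experience.

# Toy Bayes-adaptive expectimax, not AIXI itself. `models` is a list of
# hypothetical functions model(history, action, obs, reward) -> probability
# of receiving that percept; `weights` are the current posterior weights.
def plan(models, weights, history, horizon, actions, percepts):
    """Return (expected_value, best_action) under the current model mixture."""
    if horizon == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in actions:
        value = 0.0
        for obs, reward in percepts:
            per_model = [m(history, a, obs, reward) for m in models]
            mix = sum(w * p for w, p in zip(weights, per_model))
            if mix == 0.0:
                continue
            # Bayes-update the weights inside the lookahead, so gathering
            # information gets valued automatically.
            posterior = [w * p / mix for w, p in zip(weights, per_model)]
            future, _ = plan(models, posterior, history + [(a, obs, reward)],
                             horizon - 1, actions, percepts)
            value += mix * (reward + future)
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

The planner itself is a dozen lines; all of AIXI’s substance is in replacing the toy model list with every program of a universal Turing machine, weighted by two to the minus its length.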
it has no fixed cost function that it has to somehow shoehorn into dealing with different computational/conceptual domains. How the environment responds to AIXI’s actions and how the environment rewards AIXI are learned phenomena
The “cost function” here is how each state of the world (=environment) gets converted to a single number (=reward). That does not look simple to me.
Again, it doesn’t get converted at all. To use the terminology of machine learning, it’s not a function computed over the feature-vector; reward is instead represented as a feature itself.
Instead of:
reward = utility_function(world)
You have:
Require Import ZArith. (* for Z, the integer reward type *)
Inductive WorldState (w : Type) : Type :=
| world : w -> Z -> WorldState w.
With the w being an arbitrary data type representing the symbol observed on the agent’s input channel and the Z being the integer reward signal, similarly observed on the agent’s input channel. A full WorldState w datum is then received on the input channel in each interaction cycle.
Since AIXI’s learning model is to perform Solomonoff Induction, finding the Turing machine that most probably generated all previously-seen input observations, the task of “decoding” the reward is performed as part of that same induction.
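What “decoding the reward as part of Solomonoff Induction” cashes out to is just that the reward channel is one more thing the learned programs have to predict. A toy, hypothetical sketch, with a small finite set of candidate programs standing in for the incomputable sum over all Turing machines:

# Toy stand-in for Solomonoff induction. Each candidate "program" is a pair
# (length_in_bits, predict), where predict(actions) returns the percept
# sequence of (observation, reward) pairs the program would emit. The real
# thing ranges over all programs of a universal Turing machine and is
# incomputable; this is only an illustration.
def posterior(programs, actions, observed_percepts):
    """Posterior weights: 2^-length prior, zeroed out for inconsistent programs."""
    weights = []
    for length, predict in programs:
        prior = 2.0 ** -length
        predicted = predict(actions)[:len(observed_percepts)]
        weights.append(prior if predicted == observed_percepts else 0.0)
    total = sum(weights)
    return [w / total for w in weights] if total > 0 else weights

Reward never gets computed from a model of the world by some separate utility module; it is simply part of what a good program has to have predicted.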
So where, then, is reward coming from? What puts it into the AIXI’s input channel?
In AIXI’s design? A human operator.
Really? To remind you, we’re discussing this in the context of a general-purpose super-intelligent AI which, if we get a couple of bits wrong, might just tile the universe with paperclips and possibly construct a hell for all the simulated humans who ever lived, just for kicks. And how does that AI know what to do?
A human operator.
X-D
On a bit more serious note, defining a few of the really hard parts as “somebody else’s problem” does not mean you solved the issue. Remember, this started by you claiming that intelligence is very simple.
Remember, this started by you claiming that intelligence is very simple.
You’ve wasted five replies when you should have just said at the beginning, “I don’t believe cross-domain optimization algorithms can be simple and if you try to show me how AIXI works, I’ll just change what I mean by ‘simple’.”
What a jerk.
when you should have just said at the beginning, “I don’t believe cross-domain optimization algorithms can be simple
That’s not true. Cross-domain optimization algorithms can be simple, it’s just that when they are simple they can hardly be described as intelligent. What I don’t believe is that intelligence is nothing but a cross-domain optimizer with a lot of computing power.
I accept your admission of losing :-P
GLUTs (giant lookup tables) are simple too. Most people think they are not intelligent, and everyone thinks that interesting ones can’t exist in our universe. Using “is” to mean “is according to an unrealisable theory” is not the best of habits.
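For anyone unfamiliar with the term, a GLUT is something like the following toy, hypothetical sketch, which could obviously never be instantiated at interesting scale:

# A "giant lookup table" agent: every possible observation history is mapped
# directly to an action. Conceptually trivial, but for any interesting
# environment the table would not fit in the observable universe.
GLUT = {
    (): "explore",
    ("saw_food",): "approach",
    ("saw_food", "reached_food"): "eat",
    # ... one entry per possible history: combinatorially many ...
}

def glut_agent(history):
    """Act by pure table lookup; no learning, no generalisation."""
    return GLUT.get(tuple(history), "do_nothing")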
MIRI’s claims also aren’t accepted by domain experts who have been invited to discuss them here, and so do know about them.
If you’ve got links to those discussions, I’d love to read them and see what I can learn from them.
Here they are!
If the only way to shoehorn theoretically pure intelligence into a finite architecture is to turn it into a messy combination of specialised mindless... then everyone’s right.
As far as I know, MIRI’s main beliefs are listed in the post ‘Five theses, two lemmas, and a couple of strategic implications’.
How did I tell the crackpottery apart from the real science? Well, frankly, I looked for patterns that appeared to have come from the process of doing real science: instead of a grand revelation, I looked for a slow build-up of ideas that were each ground out into multiple publications.
I am not sure how you could verify any of those beliefs by a literature review. Where ‘verify’ means that the probability of their conjunction is high enough to currently call MIRI the most important cause. If that’s not your stance, then please elaborate. My stance is that it is important to keep in mind that general AI could turn out to be very dangerous, but that it takes a lot more concrete AI research before action-relevant conclusions about the nature and extent of the risk can be drawn.
As someone who is no domain expert I can only think about it informally or ask experts what they think. And currently there is not enough that speaks in favor of MIRI. But this might change. If, for example, the best minds at Google were to thoroughly evaluate MIRI’s claims and agree with MIRI, then that would probably be enough for me to shut up. If MIRI became a top charity at GiveWell, this would also cause me to strongly update in favor of MIRI. There are other possibilities as well. For example strong evidence that general AI is only 5 decades away (e.g. the existence of a robot that could navigate autonomously in a real-world environment and survive real-world threats and attacks with approximately the skill of an insect / an efficient and working emulation of a fly brain).
I am not sure how you could verify any of those beliefs by a literature review. Where ‘verify’ means that the probability of their conjunction is high enough to currently call MIRI the most important cause. If that’s not your stance, then please elaborate.
I only consider MIRI the most important cause in AGI, not in the entire world right now. I have nowhere near enough information to rule on what’s the most important cause in the whole damn world.
For example strong evidence that general AI is only 5 decades away (e.g. the existence of a robot that could navigate autonomously in a real-world environment and survive real-world threats and attacks with approximately the skill of an insect / an efficient and working emulation of a fly brain).
You mean the robots Juergen Schmidhuber builds for a living?
You mean the robots Juergen Schmidhuber builds for a living?
That would be scary. But I have to take your word for it. What I had in mind is e.g. something like this. This (the astounding athletic power of quadcopters) makes it look like the former has already been achieved. But so far I have suspected that this only works given a structured (not chaotic) environment, and given a narrow set of tasks. From a true insect-level AI I would e.g. expect that it could attack and kill enemy soldiers in real-world combat situations while avoiding being hit itself, since this is what insects are capable of.
I don’t want to nitpick though. If you say that Schmidhuber is there, then I’ll have to update. But I’ll also have to take care that I am not too stunned by what seems like a big breakthrough simply because I don’t understand the details. For example, someone once told me that “Schmidhuber’s system solved Towers of Hanoi on a mere desktop computer using a universal search algorithm with a simple kind of memory.” Sounds stunning. But what am I to make of it? I really can’t judge how much progress this is. Here is a quote:
So Schmidhuber solved this, USING A UNIVERSAL SEARCH ALGORITHM, in 2005, on a mere DESKTOP COMPUTER that’s 100.000 times slower than your brain. Why does this not impress you? Because it’s already been done? Why? I say you should be mightily impressed by this result!!!!
Yes, okay. Naively this sounds like general AI is imminent. But not even MIRI believes this....
You see, I am aware of a lot of exciting stuff. But I can only do my best in estimating the truth. And currently I don’t think that enough speaks in favor of MIRI. That doesn’t mean I have falsified MIRI’s beliefs. But I have a lot of data points and arguments that in my opinion reduce the likelihood of a set of beliefs that already requires extraordinary evidence to take seriously (ignoring expected utility maximization, which tells me to give all my money to MIRI, even if the risk is astronomically low).