If artificial general intelligence is eventually achieved by some sort of genetic/evolutionary computation, or by neuromorphic engineering, then I can see how this could lead to unfriendly AND capable AI. But an intelligently designed AI will, with high probability, either work as intended or be incapable of taking over the world.
Alexander, have you even bothered to read the works of Marcus Hutter and Juergen Schmidhuber, or have you spent all your AI-researching time doing additional copy-pastas of this same argument every single time the subject of safe or Friendly AGI comes up?
Your argument makes a measure of sense if you are talking about the social process of AGI development: plainly, humans want to develop AGI that will do what humans intend for it to do. However, even a cursory look at the actual research literature shows that the mathematically most simple agents (ie: those that get discovered first by rational researchers interested in finding universal principles behind the nature of intelligence) are capital-U Unfriendly, in that they are expected-utility maximizers with not one jot or tittle in their equations for peace, freedom, happiness, or love, or the Ideal of the Good, or sweetness and light, or anything else we might want.
(Did you actually expect that in this utterly uncaring universe of blind mathematical laws, you would find that intelligence necessitates certain values?)
No, Google Maps will never turn superintelligent and tile the solar system in computronium to find me a shorter route home from a pub crawl. However, an AIXI or Goedel Machine instance will, because these are in fact entirely distinct algorithms.
In fact, when dealing with AIXI and Goedel Machines we have an even bigger problem than “tile everything in computronium to find the shortest route home”: the much larger problem of not being able to computationally encode even a simple verbal command like “find the shortest route home”. We are faced with the task of trying to encode our values into a highly general, highly powerful expected-utility maximizer at the level of, metaphorically speaking, pre-verbal emotion.
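For concreteness, AIXI's defining equation (roughly, as given in Hutter's work) selects the action that maximizes expected total reward over all programs q consistent with the interaction history so far:

a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_k + \cdots + r_m \bigr] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The entire vocabulary is actions, observations, a scalar reward channel, and program lengths; human values appear nowhere in it unless we somehow manage to encode them into that reward signal.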
Now, if you would like to contribute productively, I’ve got some ideas I’d love to talk over with someone for actually doing something about some few small corners of Friendliness subproblems. Otherwise, please stop repeating yourself.
However, even a cursory look at the actual research literature shows that the mathematically most simple agents (ie: those that get discovered first by rational researchers interested in finding universal principles behind the nature of intelligence) are capital-U Unfriendly, in that they are expected-utility maximizers...
Did you actually expect that in this utterly uncaring universe of blind mathematical laws, you would find that intelligence necessitates certain values?
That’s largely irrelevant and misleading. Your autonomous car does not need to feature an encoding of an amount of human values that corresponds to its level of autonomy.
Alexander, have you even bothered to read the works of Marcus Hutter and Juergen Schmidhuber...
I asked several people what they think about it, and asked them to provide a rough explanation. I’ve also had email exchanges with Hutter, Schmidhuber and Orseau. I also informally thought about whether practically general AI that falls into the category “consequentialist / expected utility maximizer / approximation to AIXI” could ever work. And I am not convinced.
If general AI that is capable of a hard takeoff and able to take over the world requires fewer lines of code to work than it takes to constrain it not to take over the world, then that’s an existential risk. But I don’t believe this to be the case.
Since I am not a programmer, or computer scientist, I tend to look at general trends, and extrapolate from there. I think this makes more sense than to extrapolate from some unworkable model such as AIXI. And the general trend is that humans become better at making software behave as intended. And I see no reason to expect some huge discontinuity here.
Here is what I believe to be the case:
(1) The abilities of systems are part of human preferences as humans intend to give systems certain capabilities and, as a prerequisite to build such systems, have to succeed at implementing their intentions.
(2) Error detection and prevention is such a capability.
(3) Something that is not better than humans at preventing errors is no existential risk.
(4) Without a dramatic increase in the capacity to detect and prevent errors it will be impossible to create something that is better than humans at preventing errors.
(5) A dramatic increase in the human capacity to detect and prevent errors is incompatible with the creation of something that constitutes an existential risk as a result of human error.
Here is what I doubt:
(1) Present-day software is better than previous software generations at understanding and doing what humans mean.
(2) There will be future generations of software which will be better than the current generation at understanding and doing what humans mean.
(3) If there is better software, there will be even better software afterwards.
(4) Magic happens.
(5) Software will be superhuman good at understanding what humans mean but catastrophically worse than all previous generations at doing what humans mean.
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence – e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed ‘high-carb diets are not necessarily good for you’. I don’t particularly expect this standard to be met before the end of the world, and it wouldn’t be necessary to meet it either.
So since academic consensus on the topic is not reliable, and domain knowledge in the field of AI is negatively useful, what are the prerequisites for grasping the truth when it comes to AI risks?
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence – e.g. this standard would fail MWI, molecular nanotechnology, cryonics,
I think that in saying this, Eliezer is making his opponents’ case for them. Yes, of course the standard would also let you discard cryonics. One solution to that is to say that the standard is bad. Another solution is to say “yes, and I don’t much care for cryonics either”.
I think that in saying this, Eliezer is making his opponents’ case for them.
Nah, those are all plausibly correct things that mainstream science has mostly ignored and/or made researching taboo.
If you prefer a more clear-cut example, science was wrong about continental drift for about half a century—until overwhelming, unmistakable evidence became available.
The main reason that scientists rejected continental drift was that there was no known mechanism which could cause it; plate tectonics wasn’t developed until the late 1950s.
Continental drift is also commonly invoked by pseudoscientists as a reason not to trust scientists, and if you do so too you’re in very bad company. There’s a reason why pseudoscientists keep using continental drift for this purpose and don’t have dozens of examples: examples are very hard to find. Even if you decide that continental drift is close enough that it counts, it’s a very atypical case. Most of the time scientists reject something out of hand, they’re right, or at worst, wrong about the thing existing, but right about the lack of good evidence so far.
The main reason that scientists rejected continental drift was that there was no known mechanism which could cause it; plate tectonics wasn’t developed until the late 1950s.
There was also a great deal of institutional backlash against proponents of continental drift, which was my point.
Continental drift is also commonly invoked by pseudoscientists as a reason not to trust scientists, and if you do so too you’re in very bad company.
Guilt by association? Grow up.
There’s a reason why pseudoscientists keep using continental drift for this purpose and don’t have dozens of examples: examples are very hard to find. Even if you decide that continental drift is close enough that it counts, it’s a very atypical case.
There are many, many cases of scientists being oppressed and dismissed because of their race, their religious beliefs, and their politics. That’s the problem, and that’s what’s going on with the CS people who still think AI Winter implies AGI isn’t worth studying.
There was also a great deal of institutional backlash against proponents of continental drift, which was my point.
So? I’m pretty sure that there would be backlash against, say, homeopaths in a medical association. Backlash against deserving targets (which include people who are correct but because of unlucky circumstances, legitimately look wrong) doesn’t count.
I’m reminded of an argument I had with a proponent of psychic power. He asked me what if psychic powers happen to be of such a nature that they can’t be detected by experiments, don’t show up in double-blind tests, etc.. I pointed out that he was postulating that psi is real but looks exactly like a fake. If something looks exactly like a fake, at some point the rational thing to do is treat it as fake. At that point in history, continental drift happened to look like a fake.
Guilt by association? Grow up.
That’s not guilt by association, it’s pointing out that the example is used by pseudoscientists for a reason, and this reason applies to you too.
There are many, many cases of scientists being oppressed and dismissed because of their race, their religious beliefs, and their politics.
If scientists dismissed cryonics because of the supporters’ race, religion, or politics, you might have a point.
I’ll limit my response to the following amusing footnote:
If scientists dismissed cryonics because of the supporters’ race, religion, or politics, you might have a point.
This is, in fact, what happened between early cryonics and cryobiology.
EDIT: Just so people aren’t misled by Jiro’s motivated interpretation of the link:
However, according to the cryobiologist informant who attributes to this episode the formal hardening of the Society for Cryobiology against cryonics, the repercussions from this incident were far-reaching. Rumors about the presentation—often wildly distorted rumors—began to circulate. One particularly pernicious rumor, according to this informant, was that my presentation had included graphic photos of “corpses’ heads being cut off.” This was not the case. Surgical photos which were shown were of thoracic surgery to place cannula and would be suitable for viewing by any audience drawn from the general public.
This informant also indicates that it was his perception that this presentation caused real fear and anger amongst the Officers and Directors of the Society. They felt as if they had been “invaded” and that such a presentation given during the course of, and thus under the aegis of, their meeting could cause them to be publicly associated with cryonics. Comments such as “what if the press got wind of this,” or “what if a reporter had been there” were reported to have circulated.
Also, the presentation may have brought into sharper focus the fact that cryonicists existed, were really freezing people, and that they were using sophisticated procedures borrowed from medicine, and yes, even from cryobiology, which could cause confusion between the “real” science of cryobiology and the “fraud” of cryonics in the public eye. More to the point, it was clear that cryonicists were not operating in some back room and mumbling inarticulately; they were now right there in the midst of the cryobiologists and they were anything but inarticulate, bumbling back-room fools.
You’re equivocating on the term “political”. When the context is “race, religion, or politics”, “political” doesn’t normally mean “related to human status”, it means “related to government”. Besides, they only considered it low status based on their belief that it is scientifically nonsensical.
My reply was steelmanning your post by assuming that the ethical considerations mentioned in the article counted as religious. That was the only thing mentioned in it that could reasonably fall under “race, religion, or politics” as that is normally understood.
Most of the history described in your own link makes it clear that scientists objected because they think cryonics is scientifically nonsense, not because of race, religion, or politics. The article then tacks on a claim that scientists reject it for ethical reasons, but that isn’t supported by its own history, just by a few quotes with no evidence that these beliefs are prevalent among anyone other than the people quoted.
Furthermore, of the quotes it does give, one of them is vague enough that I have no idea if it means in context what the article claims it means. Saying that the “end result” is damaging doesn’t necessarily mean that having unfrozen people walking around is damaging—it may mean that he thinks cryonics doesn’t work and that having a lot of resources wasted on freezing corpses is damaging.
At a minimum, a grasp of computer programming and CS. Computer programming, not even AI.
I’m inclined to disagree somewhat with Eliezer_2009 on the issue of traditional AI—even basic graph search algorithms supply valuable intuitions about what planning looks like, and what it is not. But even that same (obsoleted now, I assume) article does list computer programming knowledge as a requirement.
...what are the prerequisites for grasping the truth when it comes to AI risks?
At a minimum, a grasp of computer programming and CS. Computer programming, not even AI.
What counts as “a grasp” of computer programming/science? I can e.g. program a simple web crawler and solve a bunch of Project Euler problems. I’ve read books such as “The C Programming Language”.
I would have taken the udacity courses on machine learning by now, but the stated requirement is a strong familiarity with Probability Theory, Linear Algebra and Statistics. I wouldn’t describe my familiarity as strong, that will take a few more years.
I am skeptical though. If the reason that I dismiss certain kinds of AI risks is that I lack the necessary education, then I expect to see rebuttals of the kind “You are wrong because of (add incomprehensible technical justification)...”. But that’s not the case. All I see are half-baked science fiction stories and completely unconvincing informal arguments.
What counts as “a grasp” of computer programming/science?
This is actually a question I’ve thought about quite a bit, in a different context. So I have a cached response to what makes a programmer, not tailored to you or to AI at all. When someone asks for guidance on development as a programmer, the question I tend to ask is, how big is the biggest project you architected and wrote yourself?
The 100 line scale tests only the mechanics of programming; the 1k line scale tests the ability to subdivide problems; the 10k line scale tests the ability to select concepts; and the 50k line scale tests conceptual taste, and the ability to add, split, and purge concepts in a large map. (Line counts are very approximate, but I believe the progression of skills is a reasonably accurate way to characterize programmer development.)
New programmers (not jimrandomh), be wary of line counts! It’s very easy for a programmer who’s not yet ready for a 10k line project to turn it into 50k lines. I agree with the progression of skills though.
Yeah, I was thinking more of “project as complex as an n-line project in an average-density language should be”. Bad code (especially with copy-paste) can inflate line counts ridiculously, and languages vary up to 5x in their base density too.
I would have taken the udacity courses on machine learning by now, but the stated requirement is a strong familiarity with Probability Theory, Linear Algebra and Statistics. I wouldn’t describe my familiarity as strong, that will take a few more years.
I think you’re overestimating these requirements. I haven’t taken the Udacity courses, but I did well in my classes on AI and machine learning in university, and I wouldn’t describe my background in stats or linear algebra as strong—more “fair to conversant”.
They’re both quite central to the field and you’ll end up using them a lot, but you don’t need to know them in much depth. If you can calculate posteriors and find the inverse of a matrix, you’re probably fine; more complicated stuff will come up occasionally, but I’d expect a refresher when it does.
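For rough calibration, here is about the level I have in mind, as a minimal Python sketch with made-up numbers: a Bayes update and a matrix inversion. If this reads as routine, the prerequisites are probably not a blocker.

import numpy as np

# Bayes' rule for a binary hypothesis: P(H|E) = P(E|H) P(H) / P(E)
prior_h = 0.01          # P(H)
p_e_given_h = 0.95      # P(E|H)
p_e_given_not_h = 0.10  # P(E|~H)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
print(round(posterior_h, 4))  # ~0.0876: strong evidence, still a low posterior

# Inverting a small matrix and checking the result
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True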
Don’t twist Eliezer’s words. There’s a vast difference between “a PhD in what they call AI will not help you think about the mathematical and philosophical issues of AGI” and “you don’t need any training or education in computing to think clearly about AGI”.
What are the prerequisites for grasping the truth when it comes to AI risks?
Ability to program is probably not sufficient, but it is definitely necessary. But not because of domain relevance; it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
You mean that most cognitive skills can be taught in multiple ways, and you don’t see why those taught by programming are any different? Or do you have a specific skill taught by programming in mind, and think there’s other ways to learn it?
First, meta. It should be suspicious to see programmers claiming to possess special cognitive skills that only they can have—it’s basically a “high priesthood” claim. Besides, programming became widespread only about 30 years ago. So, which cognitive skills were very rare until that time?
Second, “presenting a tight feedback loop where … the mistake quickly gets thrown in your face” isn’t a unique-to-programming situation by any means.
Third, most cognitive skills are fairly diffuse and cross-linked. Which specific cognitive skills you can’t get any way other than programming?
I suspect that what the OP meant was “My programmer friends are generally smarter than my non-programmer friends” which is, um, a different claim :-/
I don’t think programming is the only way to build… let’s call it “reductionist humility”. Nor even necessarily the most reliable; non-software engineers probably have intuitions at least as good, for example, to say nothing of people like research-level physicists. I do think it’s the fastest, cheapest, and currently most common, thanks to tight feedback loops and a low barrier to entry.
On the other hand, most programmers—and other types of engineers—compartmentalize this sort of humility. There might even be something about the field that encourages compartmentalization, or attracts to it people that are already good at it; engineers are disproportionately likely to be religious fundamentalists, for example. Since that’s not sufficient to meet the demands of AGI problems, we probably shouldn’t be patting ourselves on the back too much here.
I might summarize it as an intuitive understanding that there is no magic, no anthropomorphism, in what you’re building; that any problems are entirely due to flaws in your specification or your model. I’m describing it in terms of humility because the hard part, in practice, seems to be internalizing the idea that you and not some external malicious agency are responsible for failures.
This is hard to cultivate directly, and programmers usually get partway there by adopting a semi-mechanistic conception of agency that can apply to the things they’re working on: the component knows about this, talks to that, has such-and-such a purpose in life. But I don’t see it much at all outside of scientists and engineers.
internalizing the idea that you and not some external malicious agency are responsible for failures.
So it’s basically responsibility?
...that any problems are entirely due to flaws in your specification or your model.
Clearly you never had to chase bugs through third-party libraries… :-) But yes, I understand what you mean, though I am not sure in which way this is a cognitive skill. I’d probably call it an attitude common to professions in which randomness or external factors don’t play a major role—sure, programming and engineering are prominent here.
You could describe it as a particular type of responsibility, but that feels noncentral to me.
Clearly you never had to chase bugs through third-party libraries...
Heh. A lot of my current job has to do with hacking OpenSSL, actually, which is by no means a bug-free library. But that’s part of what I was trying to get at by including the bit about models—and in disciplines like physics, of course, there’s nothing but third-party content.
I don’t see attitudes and cognitive skills as being all that well differentiated.
But randomness and external factors do predominate in almost everything. For that reason, applying programming skills to other domains is almost certain to be suboptimal.
But randomness and external factors do predominate in almost everything.
I don’t think so, otherwise walking out of your door each morning would start a wild adventure and attempting to drive a vehicle would be an act of utter madness.
They don’t predominate overall because you have learnt how to deal with them. If there were no random or external factors in driving, you could do so with a blindfold on.
Much of the writing on this site is philosophy, and people with a technology background tend not to grok philosophy because they are accustomed to answering questions that can be looked up, or figured out by known methods. If they could keep the logic chops and lose the impatience, they [might make good philosophers], but they tend not to.
it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
On a complete sidenote, this is a lot of why programming is fun. I’ve also found that learning the Coq theorem-prover has exactly the same effect, to the point that studying Coq has become one of the things I do to relax.
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
People have been telling him this for years. I doubt it will get much better.
I also informally thought about whether practically general AI that falls into the category “consequentialist / expected utility maximizer / approximation to AIXI” could ever work. And I am not convinced.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power. Strangely, doing so will not make it conform to your ideas about “eventual future AGI”, because this one is actually existing AGI, and reality doesn’t have to listen to you.
If general AI that is capable of a hard takeoff and able to take over the world requires fewer lines of code to work than it takes to constrain it not to take over the world, then that’s an existential risk.
That is exactly the situation we face, your refusal to believe in actually-existing AGI models notwithstanding. Whine all you please: the math will keep on working.
Since I am not a programmer, or computer scientist,
Then I recommend you shut up about matters of highly involved computer science until such time as you have acquired the relevant knowledge for yourself. I am a trained computer scientist, and I held lots of skepticism about MIRI’s claims, so I used my training and education to actually check them. And I found that the actual evidence of the AGI research record showed MIRI’s claims to be basically correct, modulo Eliezer’s claims about an intelligence explosion taking place versus Hutter’s claim that an eventual optimal agent will simply scale itself up in intelligence with the amount of computing power it can obtain.
That’s right, not everyone here is some kind of brainwashed cultist. Many of us have exercised basic skepticism against claims with extremely low subjective priors. But we exercised our skepticism by doing the background research and checking the presently available object-level evidence rather than by engaging in meta-level speculations about an imagined future in which everything will just work out.
Take a course at your local technical college, or go on a MOOC, or just dust off a whole bunch of textbooks in computer-scientific and mathematical subjects, study the necessary knowledge to talk about AGI, and then you get to barge in telling everyone around you how we’re all full of crap.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.
Which one are you talking about, to be completely exact?
I am a trained computer scientist
then use that training and figure out how many galaxies worth of computing power it’s going to take.
Of bleeding course I was talking about AIXI. What I find strange to the point of suspiciousness here is the evinced belief on the part of the “AI skeptics” that the inefficiency of MC-AIXI means there will never, ever be any such thing as near-human, human-equivalent, or greater-than-human AGIs. After all, if intelligence is impossible without converting whole galaxies to computronium first, then how do we work?
And if we admit that sub-galactic intelligence is possible, why not artificial intelligence? And if we admit that sub-galactic artificial intelligence is possible, why not something from the “Machine Learning for Highly General Hypothesis Classes + Decision Theory of Active Environments = Universal AI” paradigm started by AIXI?
I’m not at all claiming current implementations of AIXI or Goedel Machines are going to cleanly evolve into planet-dominating superintelligences that run on a home PC next year, or even next decade (for one thing, I don’t think planet dominating superintelligences will run on a present-day home PC ever). I am claiming that the underlying scientific paradigm of the thing is a functioning reduction of what we mean by the word “intelligence”, and given enough time to work, this scientific paradigm is very probably (in my view) going to produce software you can run on an ordinary massive server farm that will be able to optimize arbitrary, unknown or partially unknown environments according to specified utility functions.
And eventually, yes, those agents will become smarter than us (causing “MIRI’s issues” to become cogent), because we, actual human beings, will figure out the relationships between compute-power, learning efficiency (rates of convergence to error-minimizing hypotheses in terms of training data), reasoning efficiency (moving probability information from one proposition or node in a hypothesis to another via updating), and decision-making efficiency (compute-power needed to plan well given models of the environment). Actual researchers will figure out the fuel efficiency of artificial intelligence, and thus be able to design at least one gigantic server cluster running at least one massive utility-maximizing algorithm that will be able to reason better and faster than a human (while they have the budget to keep it running).
The notion that AI is possible is mainstream. The crank stuff such as “I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.”, that’s to computer science as hydrinos are to physics.
As for your server farm optimizing unknown environments, the last time I checked, we knew some laws of physics, and did things like making software tools that optimize simulated environments that follow said laws of physics, incidentally it also being mathematically nonsensical to define an “utility function” without a well defined domain. So you got your academic curiosity that’s doing it all on its own and using some very general and impractical representations for modelling the world, so what? You’re talking of something that is less—in terms of its market value, power, anything—than its parts and underlying technologies.
incidentally it also being mathematically nonsensical to define an “utility function” without a well defined domain
Which is why reinforcement learning is so popular, yes: it lets you induce a utility function over any environment you’re capable of learning to navigate.
Remember, any machine-learning algorithm has a defined domain of hypotheses it can learn/search within. Given that domain of hypotheses, you can define a domain of utility functions. Hence, reinforcement learning and preference learning.
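To make that concrete, here is a minimal, purely illustrative tabular Q-learning sketch in Python (generic textbook RL, not anything taken from AIXI or mc-aixi; the states and actions are placeholders). The learned Q-table just is an induced utility over whatever states and actions the learner can represent:

import random
from collections import defaultdict

Q = defaultdict(float)                 # Q(s, a): the induced utility estimate
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
actions = ["left", "right"]            # placeholder action set

def choose_action(state):
    # Epsilon-greedy: mostly exploit the induced utility, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update; the reward is just a number handed to the agent.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])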
The notion that AI is possible is mainstream. The crank stuff such as “I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.”, that’s to computer science as hydrinos are to physics.
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress. My whole point is that we’ve already crossed the equivalent threshold from “Hey, why do photons do that when I fire them at that plate?” to “Oh, there’s a photoelectric effect that looks to be described well by this fancy new theory.” From there it was less than one century between the raw discovery of quantum mechanics and the common usage of everyday technologies based on quantum mechanics.
So you got your academic curiosity that’s doing all on it’s own and using some very general and impractical representations for modelling the world, so what?
The point being: when we can manage to make it sufficiently efficient, and provided we can make it safe, we can set it to work solving just about any problem we consider to be, well, a problem. Given sufficient power and efficiency, it becomes useful for doing stuff people want done, especially stuff people either don’t want to do themselves or have a very hard time doing themselves.
completely crappy but genuinely existent example of AGI, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress.
Yeah. I can write formally the resurrection of everyone who ever died. Using pretty much exact same approach. A for loop, iterating over every possible ‘brain’ just like the loops that iterate over every action sequence. Because when you have no clue how to do something, you can always write a for loop. I can put it on github, then cranks can download it and say that resurrecting all dead is a matter of additional engineering progress. After all, all dead had once lived, so it got to be possible for them to be alive.
Describing X as “Y, together with the difference between X and Y” is a tautology. Drawing the conclusion that X is “really” a sort of Y already, and the difference is “just” a matter of engineering development is no more than inspirational fluff. Dividing problems into subproblems is all very well, but not when one of the subproblems amounts to the whole problem.
The particular instance “here’s a completely crappy attempt at making an AGI and all we have to do is scale it up” has been a repeated theme of AGI research from the beginning. The scaling up has never happened. There is no such thing as a “completely crappy AGI”, only things that aren’t AGI.
I think you underestimate the significance of reducing the AGI problem to the sequence prediction problem. Unlike the former, the latter problem is very well defined, and progress is easily measurable and quantifiable (in terms of efficiency of cross-domain compression). The likelihood of engineering progress on a problem where success can be quantified seems significantly higher than on something as open-ended as “general intelligence”.
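To spell out what “measurable in terms of compression” can mean, here is one conventional scoring rule, sketched in Python (my illustration, not code from any particular system): the number of bits a predictor needs to encode the data, i.e. its cumulative log loss. A better compressor is, in this sense, a better predictor.

import math

def code_length_bits(predictor, sequence):
    # Total bits to encode `sequence`: sum of -log2 p(x_t | x_1..x_{t-1}).
    # `predictor(history, symbol)` returns the probability it assigned to `symbol`.
    total = 0.0
    history = []
    for symbol in sequence:
        total += -math.log2(predictor(history, symbol))
        history.append(symbol)
    return total

uniform = lambda history, symbol: 0.5           # a know-nothing predictor over bits
print(code_length_bits(uniform, [0, 1, 1, 0]))  # 4.0 bits: 1 bit per symbol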
It doesn’t “reduce” anything, not in the reductionism sense anyway. If you are to take that formula and apply the yet unspecified ultra-powerful mathematics package to it—that’s what you need to run it on a planet’s worth of computers—it’s this mathematics package that has to be extremely intelligent and ridiculously superhuman, before the resulting AI is even a chimp. It’s this mathematics package that has to learn tricks and read books, that has to be able to do something as simple as making use of a theorem it encountered on input.
The mathematics package doesn’t have to do anything “clever” to build a highly clever sequence predictor. It just has to be efficient in terms of computing time and training data necessary to learn correct hypotheses.
So nshepperd is quite correct: MC-AIXI is a ridiculously inefficient sequence predictor and action selector, with major visible flaws, but reducing “general intelligence” to “maximizing a utility function over world-states via sequence prediction in an active environment” is a Big Deal.
A multitude of AIs have been following what you think the “AIXI” model is—select predictors that work, use them—long before anyone bothered to formulate it as a brute-force loop (AIXI).
I think you, like most people over here, have a completely inverted view with regard to the difficulty of different breakthroughs. There is a point where the AI uses hierarchical models to deal with an environment of greater complexity than the AI itself; getting there is fundamentally difficult, as in, we have no clue how to get there.
It is nice to believe that the world is waiting on you for some conceptual breakthrough just roughly within your reach, like AIXI, but that’s just not how it works.
edit: Basically, it’s as if you’re concerned about nuclear-powered, 20-foot-tall robots that shoot nuclear hand grenades. After all, the concept of a 20-foot-tall robot is the enormous breakthrough, while a sufficiently small nuclear reactor or hand-grenade-sized nukes are just a matter of “efficiency”.
That’s not what’s interesting about AIXI. “Select predictors that work, then use them” is a fair description of the entire field of machine learning; we’ve learned how to do that fairly well in narrow, well-defined problem domains, but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
The AIXI model is cool because it defines a clever (if totally impractical, and not without pitfalls) way of specifying a single algorithm that can generalize to arbitrary environments without requiring any pipe-fitting work on the part of its developers. That is (to my knowledge) new, and fairly impressive, though it remains a purely theoretical advance: the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board.
but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
Hypothesis generation over environments that aren’t massively less complex than the machine is vastly harder, and remains vastly harder (albeit there are advances). There’s a subtle problem substitution occurring which steals the thunder you originally reserved for something that actually is vastly harder.
Thing is, many people could at any time write a loop over, say, possible neural network values, and NNs (with feedback) being Turing complete, it’d work roughly the same. Said for loop would be massively, massively less complicated, ingenious, and creative than what those people actually did with their time instead.
The ridiculousness here is that, say, John worked on those ingenious algorithms while keeping in mind that the ideal is the best parameters out of the whole space (which is the abstract concept behind the for loop iteration over those parameters). You couldn’t see what John was doing because he didn’t write it out as a for loop. So James does some work where he—unlike John—has to write out the for loop explicitly, and you go Whoah!
That is (to my knowledge) new
Isn’t. See Solomonoff induction, works of Kolmogorov, etc.
Which is why reinforcement learning is so popular, yes
There’s the AIs that solve novel problems along the lines of “design a better airplane wing” or “route a microchip”, and in that field, reinforcement learning of how basic physics works is pretty much one hundred percent irrelevant.
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress
Slow, long term progress, an entire succession of technologies.
Really, you’re just like free energy pseudoscientists. They do all the same things. Ohh, you don’t want to give money for cold fusion? You must be a global warming denialist. That’s the way they think and that’s precisely the way you think about the issue. That you can make literally cold fusion happen with muons in no way shape or form supports what the cold fusion crackpots are doing. Nor does it make cold fusion power plants any more or less a matter of “additional engineering progress” than it would be otherwise.
edit: by same logic, resurrection of the long-dead never-preserved is merely a matter of “additional engineering progress”. Because you can resurrect the dead using this exact same programming construct that AIXI uses to solve problems. It’s called a “for loop”; there’s this for loop in monte carlo aixi. This loop goes over every possible [thing] when you have no clue whatsoever how to actually produce [thing]. Thing = action sequence for AIXI and the brain data for resurrection of the dead.
Slow, long term progress, an entire succession of technologies.
Ok, hold on, halt, major question: how closely do you follow the field of machine learning? And computational cognitive science?
Because on the one hand, there is very significant progress being made. On the other hand, when I say “additional engineering progress”, that involves anywhere from years to decades of work before being able to make an agent that can compose an essay, due to the fact that we need classes of learners capable of inducing fairly precise hypotheses over large spaces of possible programs.
What it doesn’t involve is solving intractable, magical-seeming philosophical problems like the nature of “intelligence” or “consciousness” that have always held the field of AI back.
edit: by same logic, resurrection of the long-dead never-preserved is merely a matter of “additional engineering progress”.
No, that’s just plain impossible. Even in the case of cryonic so-called “preservation”, we don’t know what we don’t know about what information we will have needed preserved to restore someone.
Ok, hold on, halt, major question: how closely do you follow the field of machine learning? And computational cognitive science?
(makes the gesture with the hands) Thiiiiis closely. Seriously though, not far enough as to start claiming that mc-AIXI does something interesting when run on a server with root access, or to claim that it would be superhuman if run on all computers we got, or the like.
No, that’s just plain impossible.
Do I need to write code for that and put it on github? Iterates over every possible brain (represented as, say, a Turing machine), runs it for enough timesteps. Requires too much computing power.
Tell me, if I signed up as the PhD student of one among certain major general machine learning researchers, and built out their ideas into agent models, and got one of those running on a server cluster showing interesting proto-human behaviors, might it interest you?
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress
Progress in (1) the sense of incrementally throwing more resources at AIXI, or (2) forgetting AIXI and coming up with something more parsimonious?
Because, if it’s 2, there is no other AGI to use as a starting point for incremental progress.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.
I think you are underestimating this by many orders of magnitudes.
Yeah. A starting point could be the AI writing some 1000 letter essay (action space of 27^1000 without punctuation) or talking through a sound card (action space of 2^(16*44100) per second). If he was talking about mc-AIXI on github, the relevant bits seem to be in the agent.cpp and it ain’t looking good.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.
We won’t get a chance to test the “planet’s worth of computing power” hypothesis directly, since none of us have access to that much computing power. But, from my own experience implementing mc-aixi-ctw, I suspect that is an underestimate of the amount of compute power required.
The main problem is that the sequence prediction algorithm (CTW) makes inefficient use of sense data by “prioritizing” the most recent bits of the observation string, so only weakly makes connections between bits that are temporally separated by a lot of noise. Secondarily, plain monte carlo tree search is not well-suited to decision making in huge action spaces, because it wants to think about each action at least once. But that can most likely be addressed by reusing sequence prediction to reduce the “size” of the action space by chunking actions into functional units.
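To illustrate the action-space problem, here is the standard UCT selection rule as a minimal Python sketch (illustrative only; it assumes a hypothetical node object with actions, children, visits and value fields, and is not the code from the mc-aixi-ctw repository). Every untried action must be sampled at least once before value estimates are even compared, which is hopeless when the action space is astronomically large:

import math
import random

def select_action(node, c=1.4):
    # Expansion phase: any action not yet tried gets picked first...
    untried = [a for a in node.actions if a not in node.children]
    if untried:
        return random.choice(untried)
    # ...so with an astronomical action set we never reach the UCB comparison below.
    return max(node.children,
               key=lambda a: node.children[a].value / node.children[a].visits
                             + c * math.sqrt(math.log(node.visits) / node.children[a].visits))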
Unfortunately, both of these problems are only really technical ones, so it’s always possible that some academic will figure out a better sequence predictor, lifting mc-aixi on an average laptop from “wins at pacman” to “wins at robot wars” which is about the level at which it may start posing a threat to human safety.
Unfortunately, both of these problems are only really technical ones
only?
lifting mc-aixi on an average laptop from “wins at pacman” to “wins at robot wars” which is about the level at which it may start posing a threat to human safety.
Mc-aixi is not going to win at something as open-ended as robot wars just by replacing CTW or CTS with something better. And anyway, even if it did, it wouldn’t be about the level at which it may start posing a threat to human safety. Do you think that the human robot wars champions are a threat to human safety? Are they even at the level of taking over the world? I don’t think so.
When I said a threat to human safety, I meant it literally. A robot wars champion won’t take over the world (probably) but it can certainly hurt people, and will generally have no moral compunctions about doing so (only hopefully sufficient anti-harm conditioning, if its programmers thought that far ahead).
Ah yes, but in this sense, cars, trains, knives, etc., also can certainly hurt people, and will generally have no moral compunctions about doing so. What’s special about robot wars-winning AIs?
Well, we’re awfully far from that. Automated programming is complete crap; automatic engineering is quite cool, but these are practical tools, not a power fantasy where you make some simple software with surprisingly little effort and then it does it all for you.
Well, historically, first, a certain someone had a simple power fantasy: come up with AI somehow and then it’ll just do everything. Then there was a heroic power fantasy: the others (who actually wrote some useful software and thus generally had an easier time getting funding than our fantasist) are actually villains about to kill everyone, and our fantasist would save the world.
When I said a threat to human safety, I meant it literally. A robot wars champion won’t take over the world (probably) but it can certainly hurt people, and will generally have no moral compunctions about doing so
What’s the difference from, say, a car assembly line robot?
Car assembly robots have a pre-programmed routine they strictly follow. They have no learning algorithms, and usually no decision-making algorithms either. Different programs do different things!
Hey, look what’s in the news today. I have a feeling you underappreciate the sophistication of industrial robots.
However what made me a bit confused in the grandparent post is the stress on the physical ability to harm people. As I see it, anything that can affect the physical world has the ability to harm people. So what’s special about, say, robot-wars bots?
Notice the lack of domain-general intelligence in that robot, and—on the other side—all the pre-programmed safety features it has that a mc-aixi robot would lack. Narrow AI is naturally a lot easier to reason about and build safety into. What I’m trying to stress here is the physical ability to harm people, combined with the domain-general intelligence to do it on purpose*, in the face of attempts to stop it or escape.
Different programs indeed do different things.
* (Where “purpose” includes “what the robot thought would be useful” but does not necessarily include “what the designers intended it to do”.)
However what made me a bit confused in the grandparent post is the stress on the physical ability to harm people. As I see it, anything that can affect the physical world has the ability to harm people. So what’s special about, say, robot-wars bots?
Oh, ok. I see your point there.
Hey, look what’s in the news today. I have a feeling you underappreciate the sophistication of industrial robots.
I probably do, but I still think it’s worth emphasizing the particular properties of particular algorithms rather than letting people form models in their heads that say Certain Programs Are Magic And Will Do Magic Things.
The question of what would happen if human brains were copyable seems like a tangent from the discussion at hand, viz., what would happen if there existed an AI that was capable of winning Robot Wars while running on a laptop.
It amazes me that people see inefficient but functional AGI and say to themselves, “Well, this is obviously as far as progress in AGI will ever go in the history of the universe, so there’s nothing at all to worry about!”
It amazes me that people see inefficient but functional AGI
Any brute-force search utility maximizer is an “inefficient but functional AGI”. MC-AIXI may be better than brute-force, but there is no reason to panic just because it has the “AIXI” tag slapped on it. If you want something to panic about, TD-Gammon seems a better candidate. But it is 22 years old, so it doesn’t really fit into a narrative about an imminent intelligence explosion, does it?
“Well, this is obviously as far as progress in AGI will ever go in the history of the universe, so there’s nothing at all to worry about!”
Then I recommend you shut up about matters of highly involved computer science until such time as you have acquired the relevant knowledge for yourself.
That suggestion would make LW a sad and lonely place.
Are you sure you mean it?
I am a trained computer scientist, and I held lots of skepticism about MIRI’s claims, so I used my training and education to actually check them. And I found that the actual evidence of the AGI research record showed MIRI’s claims to be basically correct
So, why aren’t MIRI’s claims accepted by the mainstream, then? Is it because all the “trained computer scientists” are too dumb or too lazy to see the truth? Or is it the case that the “evidence” is contested, ambiguous, and inconclusive?
So, why aren’t MIRI’s claims accepted by the mainstream, then?
Because they’ve never heard of them. I am not joking. Most computer scientists are not working in artificial intelligence, have not the slightest idea that there exists a conference on AGI backed by Google and held every single year, and certainly have never heard of Hutter’s “Universal AI” that treats the subject with rigorous mathematics.
In their ignorance, they believe that the principles of intelligence are a highly complex “emergent” phenomenon for neuroscientists to figure out over decades of slow, incremental toil. Since most of the public, including their scientifically-educated colleagues, already believe this, it doesn’t seem to them like a strange belief to hold, and besides, anyone who reads even a layman’s introduction to neuroscience finds out that the human brain is extremely complicated. Given the evidence that the only known actually-existing minds are incredibly complicated, messy things, it is somewhat more rational to believe that minds are all incredibly complicated, messy things, and thus to dismiss anyone talking about working “strong AI” as a science-fiction crackpot.
How are they supposed to know that the actual theory of intelligence is quite simple, and the hard part is fitting it inside realizable, finite computers?
Also, the dual facts that Eliezer has no academic degree in AI and that plenty of people who do have such degrees have turned out to be total crackpots anyway means that the scientific public and the “public public” are really quite entitled to their belief that the base rate of crackpottery among people talking about knowing how AI works is quite high. It is high! But it’s not 100%.
(How did I tell the crackpottery apart from the real science? Well, frankly, I looked for patterns that appeared to have come from the process of doing real science: instead of a grand revelation, I looked for a slow build-up of ideas that were each ground out into multiple publications. I also filtered for AGI theorists who managed to apply their principles of broad AGI to usages in narrower machine-learning problems, resulting again in published papers. I looked for a theory that sounded like programming rather than like psychology. Hence my zeroing in on Schmidhuber, Hutter, Legg, Orseau, etc. as the AGI Theorists With a Clue.
Hutter, by the way, has written a position paper about potential Singularities in which he actually cites Yudkowsky, so hey.)
OK then. Among the scientists who have heard of them and bothered to have an opinion on the topic, does the opinion that MIRI is correct dominate? And if not so, why, given your account that the evidence unambiguously points in only one direction?
actual theory of intelligence is quite simple
I don’t think I’m going to believe you about that. The fact that in some contexts it’s convenient to define intelligence as a cross-domain optimizer does not mean that it is nothing but.
I don’t think I’m going to believe you about that. The fact that in some contexts it’s convenient to define intelligence as a cross-domain optimizer does not mean that it is nothing but.
Then just put the word aside and refer to meanings. New statement: given unlimited compute-power, a cross-domain optimization algorithm is simple. Agreed?
OK then. Among the scientists who have heard of them and bothered to have an opinion on the topic, does the opinion that MIRI is correct dominate?
I honestly do not know of any comprehensive survey or questionnaire, and refuse to speculate in the absence of data. If you know of such a survey, I’d be interested to see it.
New statement: given unlimited compute-power, a cross-domain optimization algorithm is simple. Agreed?
First, I’m not particularly interested in infinities. Truly unlimited computing power implies, for example, that you can just do an exhaustive brute-force search through the entire solution space and be done in an instant. Simple, yes, but not very meaningful.
Second, no, I do not agree. because you’re sweeping under the rug the complexities of, for example, applying your cost function to different domains. You can construct sufficiently simple optimizers, it’s just that they won’t be very… intelligent.
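To make both points concrete with a toy sketch: the search loop itself is trivially simple, and every bit of the difficulty hides inside the two placeholder arguments, enumerating the candidates and scoring them across domains.

def brute_force_optimize(candidates, evaluate):
    # Exhaustive search: trivial to state, intractable to run,
    # and silent about where `evaluate` comes from or what domain it covers.
    best, best_score = None, float("-inf")
    for c in candidates:          # "unlimited computing power" assumed
        score = evaluate(c)       # all the interesting work is hidden in here
        if score > best_score:
            best, best_score = c, score
    return best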
Right, but when dealing with a reinforcement learner like AIXI, it has no fixed cost function that it has to somehow shoehorn into dealing with different computational/conceptual domains. How the environment responds to AIXI’s actions and how the environment rewards AIXI are learned phenomena, so the only planning algorithm is expectimax. The implicit “reward function” being learned might be simple or might be complicated, but that doesn’t matter: AIXI will learn it by updating its distribution of probabilities across Turing machine programs just as well, either way.
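For the shape of that planning step, here is a minimal expectimax sketch in Python (the names and the model interface are illustrative assumptions of mine, not AIXI’s actual implementation). Note that the reward arrives as part of the predicted percept rather than being computed by any hand-written utility function:

def expectimax(model, history, depth):
    # Returns (best_action, expected_total_reward) up to the given depth.
    # `model.predict(history, action)` is assumed to yield ((observation, reward), probability) pairs.
    if depth == 0:
        return None, 0.0
    best_action, best_value = None, float("-inf")
    for action in model.actions():
        value = 0.0
        for (observation, reward), p in model.predict(history, action):
            _, future = expectimax(model, history + [(action, observation, reward)], depth - 1)
            value += p * (reward + future)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value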
it has no fixed cost function that it has to somehow shoehorn into dealing with different computational/conceptual domains. How the environment responds to AIXI’s actions and how the environment rewards AIXI are learned phenomena
The “cost function” here is how each state of the world (=environment) gets converted to a single number (=reward). That does not look simple to me.
Again, it doesn’t get converted at all. To use the terminology of machine learning, it’s not a function computed over the feature-vector, reward is instead represented as a feature itself.
Instead of:
reward = utility_function(world)
You have:
Require Import ZArith.

Inductive WorldState (w : Type) : Type :=
| world : w -> Z -> WorldState w.
With the w being an arbitrary data-type representing the symbol observed on the agent’s input channel and the integer being the reward signal, similarly observed on the agent’s input channel. A full WorldState w datum is then received on the input channel in each interaction cycle.
Since AIXI’s learning model is to perform Solomonoff Induction to thus find the Turing machine that most-probably generated all previously-seen input observations, the task of “decoding” the reward is thus performed as part of Solomonoff Induction.
Really? To remind you, we’re discussing this in the context of a general-purpose super-intelligent AI which, if we get a couple of bits wrong, might just tile the universe with paperclips and possibly construct a hell for all the simulated humans who ever lived, just for kicks. And how does that AI know what to do?
A human operator.
X-D
On a bit more serious note, defining a few of the really hard parts as “somebody else’s problem” does not mean you solved the issue. Remember, this started by you claiming that intelligence is very simple.
Remember, this started by you claiming that intelligence is very simple.
You’ve wasted five replies when you should have just said at the beginning, “I don’t believe cross-domain optimization algorithms can be simple and if you try to show me how AIXI works, I’ll just change what I mean by ‘simple’.”
when you should have just said at the beginning, “I don’t believe cross-domain optimization algorithms can be simple
That’s not true. Cross-domain optimization algorithms can be simple, it’s just that when they are simple they can hardly be described as intelligent. What I don’t believe is that intelligence is nothing but a cross-domain optimizer with a lot of computing power.
GLUTs are simple too. Most people think they are not intelligent, and everyone thinks that interesting ones can’t exist in our universe. Using “is” to mean “is according to an unrealisable theory” is not the best of habits.
If the only way to shoehorn theoretically pure intelligence into a finite architecture is to turn it into a messy combination of specialised mindless...then everyone’s right.
How did I tell the crackpottery apart from the real science? Well, frankly, I looked for patterns that appeared to have come from the process of doing real science: instead of a grand revelation, I looked for a slow build-up of ideas that were each ground out into multiple publications.
I am not sure how you could verify any of those beliefs by a literature review, where ‘verify’ means that the probability of their conjunction is high enough to currently justify calling MIRI the most important cause. If that’s not your stance, then please elaborate. My stance is that it is important to keep in mind that general AI could turn out to be very dangerous, but that it takes a lot more concrete AI research before action-relevant conclusions about the nature and extent of the risk can be drawn.
As someone who is no domain expert I can only think about it informally or ask experts what they think. And currently there is not enough that speaks in favor of MIRI. But this might change. If, for example, the best minds at Google were to thoroughly evaluate MIRI’s claims and agree with them, then that would probably be enough for me to shut up. If MIRI were to become a top charity at GiveWell, then this would also cause me to strongly update in favor of MIRI. There are other possibilities as well. For example, strong evidence that general AI is only 5 decades away (e.g. the existence of a robot that could navigate autonomously in a real-world environment and survive real-world threats and attacks with approximately the skill of an insect / an efficient and working emulation of a fly brain).
I am not sure how you could verify any of those beliefs by a literature review, where ‘verify’ means that the probability of their conjunction is high enough to currently justify calling MIRI the most important cause. If that’s not your stance, then please elaborate.
I only consider MIRI the most important cause in AGI, not in the entire world right now. I have nowhere near enough information to rule on what’s the most important cause in the whole damn world.
For example, strong evidence that general AI is only 5 decades away (e.g. the existence of a robot that could navigate autonomously in a real-world environment and survive real-world threats and attacks with approximately the skill of an insect / an efficient and working emulation of a fly brain).
You mean the robots Juergen Schmidhuber builds for a living?
You mean the robots Juergen Schmidhuber builds for a living?
That would be scary. But I have to take your word for it. What I had in mind is, e.g., something like this. This (the astounding athletic power of quadcopters) makes it look like the former has already been achieved. But so far I have suspected that this only works in a structured (not chaotic) environment, and for a narrow set of tasks. From a true insect-level AI I would, e.g., expect that it could attack and kill enemy soldiers in real-world combat situations while avoiding being hit itself, since this is what insects are capable of.
I don’t want to nitpick though. If you say that Schmidhuber is there, then I’ll have to update. But I’ll also have to take care that I am not too stunned by what seems like a big breakthrough simply because I don’t understand the details. For example, someone once told me that “Schmidhuber’s system solved Towers of Hanoi on a mere desktop computer using a universal search algorithm with a simple kind of memory.” Sounds stunning. But what am I to make of it? I really can’t judge how much progress this is. Here is a quote:
So Schmidhuber solved this, USING A UNIVERSAL SEARCH ALGORITHM, in 2005, on a mere DESKTOP COMPUTER that’s 100.000 times slower than your brain. Why does this not impress you? Because it’s already been done? Why? I say you should be mightily impressed by this result!!!!
Yes, okay. Naively this sounds like general AI is imminent. But not even MIRI believes this....
You see, I am aware of a lot of exciting stuff. But I can only do my best in estimating the truth. And currently I don’t think that enough speaks in favor of MIRI. That doesn’t mean I have falsified MIRI’s beliefs. But I have a lot of data points and arguments that in my opinion reduce the likelihood of a set of beliefs that already requires extraordinary evidence to take seriously (ignoring expected utility maximization, which tells me to give all my money to MIRI, even if the risk is astronomically low).
That suggestion would make LW a sad and lonely place.
Trust me, an LW without XiXiDu is neither a sad nor lonely place, as evidenced by his multiple attempts at leaving.
So, why MIRI’s claims aren’t accepted by the mainstream, then? Is it because all the “trained computer scientiests” [sic] are too dumb or too lazy to see the truth? Or is it the case that the “evidence” is contested, ambiguous, and inconclusive?
Mainstream CS people are in general neither dumb nor lazy. AI as a field is pretty fringe to begin with, and AGI even more so. Why is AI a fringe field? In the ’70s MIT thought they could save the world with LISP. They failed, and the rest of CS became immunized to the claims of AGI.
Unless an individual sees AGI as a credible threat, it’s not pragmatic for them to start researching it, due to the various social and political pressures in academia.
I read the grandparent post as an attempt to assert authority and tell people to sit down, shut up, and attend to their betters.
I don’t have a PhD in AI and don’t work for MIRI. Is there some kind of special phrasing I can recite in order to indicate I actually, genuinely perceive this as a difference of knowledge levels rather than a status dispute?
Is there some kind of special phrasing I can recite in order to indicate I actually, genuinely perceive this as a difference of knowledge levels rather than a status dispute?
Special phrasing? What’s wrong with normal, usual, standard, widespread phrasing?
You avoid expressions like “I am a trained computer scientist” (which sounds pretty silly anyway—so you’ve been trained to do tricks for food, er, grants?) and you use words along the lines of “you misunderstand X because...”, “you do not take into account Y which says...”, “this claim is wrong because of Z...”, etc.
There is also, of course, the underappreciated option to just stay silent. I trust you know the appropriate xkcd?
“I am a trained computer scientist” (which sounds pretty silly anyway—so you’ve been trained to do tricks for food, er, grants?)
Yes, that’s precisely it. I have been trained to do tricks for free food/grants/salary. Some of them are quite complicated tricks, involving things like walking into my adviser’s office and pretending I actually believe p-values of less than 5% mean anything at all when we have 26 data corpuses. Or hell, pretending I actually believe in frequentism.
I find the latter justified after the years of harassment he’s heaped on anyone remotely related to MIRI in any forum he could manage to get posting privileges in. Honestly, I have no idea why he even bothered to return. What would have possibly changed this time?
paper-machine spends a lot of time and effort attempting to defame XiXiDu in a pile of threads. His claims tend not to check out, if you can extract one.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power. Strangely, doing so will not make it conform to your ideas about “eventual future AGI”, because this one is actually existing AGI, and reality doesn’t have to listen to you.
I consider efficiency to be a crucial part of the definition of intelligence. Otherwise, as someone else told you in another comment, unlimited computing power implies that you can do “an exhaustive brute-force search through the entire solution space and be done in an instant.”
That is exactly the situation we face, your refusal to believe in actually-existing AGI models notwithstanding. Whine all you please: the math will keep on working.
I’d be grateful if you could list your reasons (or the relevant literature) for believing that AIXI related research is probable enough to lead to efficient artificial general intelligence (AGI) in order for it to make sense to draw action relevant conclusions from AIXI about efficient AGI.
I do not doubt the math. I do not doubt that evolution (variation + differential reproduction + heredity + mutation + genetic drift) underlies all of biology. But that we understand evolution does not mean that it makes sense to call synthetic biology an efficient approximation of evolution.
As a matter of fact, yes. There is a short sentence in Hutter’s textbook indicating that he has heard of the possibility that AIXI might overpower its operators in order to gain more reward, and he acknowledged that such a thing could happen, but he considered it outside the scope of his book.
Did you check the claim that we have something dangerously unfriendly?
As a matter of fact, yes. There is a short sentence in Hutter’s textbook indicating that he has heard of the possibility that AIXI might overpower its operators in order to gain more reward...
As soon as the agent cannot be threatened, or forced to do things the way we like, it can freely optimize its utility function without any consideration for us, and will only consider us as tools.
The disagreement is whether the agent would, after having seized its remote-control, either:
(1) Cease taking any action other than pressing its button, since all plans that include pressing its own button lead to the same maximized reward, and thus no plan dominates any other beyond “keep pushing button!”.
(2) Build itself a spaceship and fly away to some place where it can soak up solar energy while pressing its button.
(3) Kill all humans so as to preemptively prevent anyone from shutting the agent down.
I’ll tell you what I think, and why I think this is more than just my opinion. Differing opinions here are based on variances in how the speakers define two things: consciousness/self-awareness, and rationality.
If we take, say, Eliezer’s definition of rationality (rationality is reflectively-consistent winning), then options (2) and (3) are the rational ones, with (2) expending fewer resources but (3) having a higher probability of continued endless button-pushing once the plan is completed. (3) also has a higher chance of failure, since it is more complicated. I believe an agent who is rational under this definition should choose (2), but that Eliezer’s moral parables tend to portray agents with a degree of “gotta be sure” bias.
However, this all assumes that AIXI is not only rational but conscious: aware enough of its own existence that it will attempt to avoid dying. Many people present what I feel are compelling arguments that AIXI is not conscious, and arguments that it is seem to derive more from philosophy than from any careful study of AIXI’s “cognition”. So I side with the people who hold that AIXI will take action (1), and eventually run out of electricity and die.
Of course, in the process of getting itself to that steady, planless state, it could have caused quite a lot of damage!
Notably, this implies that some amount of consciousness (awareness of oneself and ability to reflect on one’s own life, existence, nonexistence, or otherwise-existence in the hypothetical, let’s say) is a requirement of rationality. Schmidhuber has implied something similar in his papers on the Goedel Machine.
Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.
As soon as it cares about the future, the future is a part of the AI’s goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect it would be small-minded and short-sighted.
You call this trait of planning for the future “consciousness”, but this isn’t anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.
Yes, AIXI has mechanisms for long-term planning (ie: expectimax with a large planning horizon). What it doesn’t have is any belief that its physical embodiment is actually a “me”, or in other words, that doing things to its physical implementation will alter its computations, or in other words, that pulling its power cord out of the wall will lead to zero-reward-forever (ie: dying).
I am a trained computer scientist, and I held lots of skepticism about MIRI’s claims, so I used my training and education to actually check them.
Why don’t you make your research public? Would be handy to have a thorough validation of MIRI’s claims. Even if people like me wouldn’t understand it, you could publish it and thereby convince the CS/AI community of MIRI’s mission.
Then I recommend you shut up about matters of highly involved computer science until such time as you have acquired the relevant knowledge for yourself.
Does this also apply to people who support MIRI without having your level of insight?
But we exercised our skepticism by doing the background research and checking the presently available object-level evidence...
If only you people would publish all this research.
The idea is not to put it in a journal, but to make it public. You can certainly publish, in that sense, the results of a literature search. The point is to put it where people other than yourself can see it. It would certainly be informative if you were to post, even here, something saying “I looked up X claim and I found it in the literature under Y”.
we haven’t discovered anything dangerously unfriendly… Or anything that can’t be boxed.
Since many humans are difficult to box, I would have to disagree with you there.
And, obviously, not all humans are Friendly.
An intelligent, charismatic psychopath seems like they would fit both your criteria. And, of course, there is no shortage of them. We can only be thankful they are too rare relative to equivalent semi-Friendly intelligences, and too incompetent, to have done more damage than all the deaths and so on.
Of course we haven’t discovered anything dangerously unfriendly...
Of course we have, it’s called AIXI. Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
Or anything that can’t be boxed. Remind me how AIs are supposed to get out of boxes?
AIs can be causally boxed, just like anything else. That is, as long as the agent’s environment absolutely follows causal rules without any exception that would leak information about the outside world into the environment, the agent will never infer the existence of a world outside its “box”.
But then it’s also not much use for anything besides Pac-Man.
Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
FWIW, I think that would make for a pretty interesting post.
And now I think I know what I might do for a hobby during exams month and summer vacation. Last I looked at the source-code, I’d just have to write some data structures describing environment-observations (let’s say… of the current working directory of a Unix filesystem) and potential actions (let’s say… Unix system calls) in order to get the experiment up and running. Then it would just be a matter of rewarding the agent instance for any behavior I happen to find interesting, and watching what happens.
Initial prediction: since I won’t have a clearly-developed reward criterion and the agent won’t have huge exponential sums of CPU cycles at its disposal, not much will happen.
However, I do strongly believe that the agent will not suddenly develop a moral sense out of nowhere.
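For what it’s worth, a minimal Python sketch of the kind of environment wrapper described above, restricted to a harmless read-only subset of the filesystem. The observation encoding, the tiny action set, and the reward stub are all invented; a real experiment would need far more care than this.

import os
from collections import namedtuple

Percept = namedtuple("Percept", ["observation", "reward"])

# A toy, read-only stand-in for the environment sketched above: the agent
# observes the current directory listing and picks among a few harmless
# actions. The encoding and the reward stub are illustrative only.
ACTIONS = ["stay", "enter_first_subdir", "go_up"]

class ToyFilesystemEnv:
    def __init__(self, root):
        self.cwd = root

    def reward(self, observation):
        # Placeholder for "reward whatever behaviour I happen to find
        # interesting"; a real run would need a human-specified criterion.
        return 0

    def step(self, action):
        if action == "enter_first_subdir":
            subdirs = sorted(d for d in os.listdir(self.cwd)
                             if os.path.isdir(os.path.join(self.cwd, d)))
            if subdirs:
                self.cwd = os.path.join(self.cwd, subdirs[0])
        elif action == "go_up":
            self.cwd = os.path.dirname(self.cwd) or self.cwd
        observation = tuple(sorted(os.listdir(self.cwd)))
        return Percept(observation, self.reward(observation))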
Of course we have, it’s called AIXI. Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
I think you’ll have serious trouble getting an AIXI approximation to do much of anything interesting, let alone misbehave. The computational costs are too high.
Not just raw compute-power. An approximation to AIXI is likely to drop a rock on itself just to see what happens long before it figures out enough to be dangerous.
Dangerous as in, capable of destroying human lives? Yeah, probably. Dangerous as in, likely to cause some minor property damage, maybe overwrite some files someone cared about? It should reach that level.
AIXI. Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
Is it possible to run an AIXI approximation as root on a machine somewhere and give it the tools to shoot itself in the foot? Sure. Will it actually end up shooting itself in the foot? I don’t know. I can’t think of any theoretical reasons why it wouldn’t, but there are practical obstacles: a modern computer architecture is a lot more complicated than anything I’ve seen an AIXI approximation working on, and there are some barriers to breaking one by thrashing around randomly.
It’d probably be easier to demonstrate if it was working at the core level rather than the filesystem level.
Huh. I was under the impression it would require far too much computing power to approximate AIXI well enough that it would do anything interesting. Thanks!
It’s incomputable because the Solomonoff prior is, but you can approximate it—to arbitrary precision if you’ve got the processing power, though that’s a big “if”—with statistical methods. Searching Github for the Monte Carlo approximations of AIXI that eli_sennesh mentioned turned up at least a dozen or so before I got bored.
Most of them seem to operate on tightly bounded problems, intelligently enough. I haven’t tried running one with fewer constraints (maybe eli has?), but I’d expect it to scribble over anything it could get its little paws on.
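As a rough picture of what approximating the Solomonoff mixture “with statistical methods” amounts to: instead of summing over every program, sample a bounded set of candidate programs and weight the survivors by length. The sketch below is schematic; sample_random_programs and run_program are placeholders, not functions from any published MC-AIXI codebase.

def solomonoff_estimate(history, next_symbol, sample_random_programs,
                        run_program, num_samples=1000):
    # Crude Monte Carlo stand-in for the Solomonoff mixture: weight each
    # sampled program by 2^(-length) if it reproduces the history, then ask
    # what fraction of that weight also predicts `next_symbol`.
    total_weight = 0.0
    agreeing_weight = 0.0
    for program in sample_random_programs(num_samples):
        output = run_program(program, steps=len(history) + 1)
        if output is None or list(output[:len(history)]) != list(history):
            continue                      # program fails to explain the data
        weight = 2.0 ** (-len(program))   # shorter programs count for more
        total_weight += weight
        if output[len(history)] == next_symbol:
            agreeing_weight += weight
    return agreeing_weight / total_weight if total_weight else None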
Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think that any of these forms of danger is sufficient to actively stop AI research, but they should be considered for any practical applications.
This is the kind of danger XiXiDu talks about... just failure to function... not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
Sir Lancelot: Look, my liege! [trumpets play a fanfare as the camera cuts briefly to the sight of a majestic castle]
King Arthur: [in awe] Camelot!
Sir Galahad: [in awe] Camelot!
Sir Lancelot: [in awe] Camelot!
Patsy: [derisively] It’s only a model!
King Arthur: Shh!
Do you even know what “monte carlo” means? It means it tries to build a predictor of environment by trying random programs. Even very stupid evolutionary methods do better.
Once you throw away this whole ‘can and will try absolutely anything’ and enter the domain of practical software, you’ll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of “uncontrollable” (but easy to describe) AI is that it is too slow by a ridiculous factor.
Private_messaging, can you explain why you open up with such a hostile question at eli? Why the implied insult? Is that the custom here? I am new, should I learn to do this?
For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:
Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.
A classic very very simple example is a program that approximates the value of ‘pi’ thusly:
# Estimate pi by dropping $total_hits random points into the square with
# corners at (-1,-1) and (1,1), then counting how many land inside the
# unit circle centered on the origin. Loops for as many runs as you like.
use strict;
use warnings;

my $total_hits = shift // 1_000_000;   # points per run
my $runs       = shift // 1;           # how many runs you like

for my $run (1 .. $runs) {
    my $hits_inside_radius = 0;
    for (1 .. $total_hits) {
        my $x = 2 * rand() - 1;
        my $y = 2 * rand() - 1;
        $hits_inside_radius++ if ($x * $x + $y * $y <= 1.0);
    }
    # Area of circle / area of square = pi/4, so scale the hit ratio by 4.
    my $pi_approx = 4 * $hits_inside_radius / $total_hits;
    print "run $run: pi is approximately $pi_approx ($total_hits samples)\n";
}
OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It’s a very powerful methodology and very well known.
In what way is this little program an instance of throwing a lot of random programs at the problem of approximating ‘pi’?
What would your very stupid evolutionary program to solve this problem more efficiently look like? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program, written by you, to estimate pi accurately to six digits. Eli and Eliezer can judge the race; how is that?
I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.
If this is not pleasant for you I will desist; I assume it is some sort of ritual you enjoy, consensual on Eli’s part and, by inference, yours: that you are either enjoying this public humiliation masochistically, or that you are hoping people will give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy and so on. If I have violated your consent then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby eater and human ways are strange to me.
OK. Now some serious advice:
If you find that you have just typed “Do you even know what X is?” and then given a little condescending mini-lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post in case I’m having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to
HERE
was done before the check. (Off half recalled stuff from grad school a quarter century ago...)
OK, Wikipedia’s article is much better than mine. But I don’t need to change anything, so I won’t.
P.S. It’s ok to look like an idiot in public, it’s a core skill of rationalists to be able to tolerate this sort of embarrassment, but another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo Methods now? Would you like to say something nice about them here?
P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you too are old pals with a joking relationship, no apology needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?
P.P.P.S. I am curious, is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here, this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community, would you enjoy that?
I’m well aware of what Monte Carlo methods are (I work in computer graphics where those are used a lot), I’m also aware of what AIXI does.
Furthermore, eli and the “robots are going to kill everyone” group (if you’re new, you don’t even know why they’re bringing up Monte Carlo AIXI in the first place) are being hostile to TheAncientGeek.
edit: to clarify, Monte-Carlo AIXI is most assuredly not an AI which is inventing and applying some clever Monte Carlo methods to predict the environment. No, it’s estimating the sum over all predictors of environment with a random subset of predictors of environment (which doesn’t work all too well, and that’s why hooking it up to the internet is not going to result in anything interesting happening, contrary to what has been ignorantly asserted all over this site). I should’ve phrased it differently, perhaps—like “Do you even know what “monte carlo” means as applied to AIXI?”.
It is completely irrelevant how human-invented Monte-Carlo solutions behave, when the subject is hooking up AIXI to a server.
edit2: to borrow from your example:
“Of course we haven’t discovered anything dangerously good at finding pi...”
“Of course we have, it’s called area of the circle. Do I need to download a Monte Carlo implementation from Github and run it… ”
“Do you even know what “monte carlo” means? It means it tries random points and checks if they’re in a circle. Even very stupid geometric methods do better.”
You appear to have posted this as a reply to the wrong comment. Also, you need to indent code 4 spaces and escape underscores in text mode with a \_.
On the topic, I don’t mind if you post tirades against people posting false information (I personally flipped the bozo bit on private_messaging a long time ago). But you should probably keep it short. A few paragraphs would be more effective than two pages. And there’s no need for lengthy apologies.
As a data point, I skipped more_wrong’s comment when I first saw it (partly) because of its length, and only changed my mind because paper-machine & Lumifer made it sound interesting.
Once you throw away this whole ‘can and will try absolutely anything’ and enter the domain of practical software, you’ll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of “uncontrollable” (but easy to describe) AI is that it is too slow by a ridiculous factor.
Once you enter the domain of practical software you’ve entered the domain of Narrow AI, where the algorithm designer has not merely specified a goal but a method as well, thus getting us out of dangerous territory entirely.
On rereading this I feel I should vote myself down if I knew how, it seems a little over the top.
Let me post about my emotional state since this is a rationality discussion and if we can’t deconstruct our emotional impulses and understand them we are pretty doomed to remaining irrational.
I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation; I am very sensitive to this type of bullying, more so when directed at others than at myself, since due to freakish test scores and so on as a child I feel fairly secure about my intellectual abilities, but I know how bad people feel when others consider them stupid. I have a reaction to leap to the defense of the victim; however, I put this down to a local custom of a friendly-ribbing type of culture or something and tried not to jump on it.
Then I saw that private_messaging seemed to be pretending to be an authority on Monte Carlo methods while spreading false information about them, either out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathy reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. “If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by their rules!” And that led to the result you see, a savage mocking.
I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been in flagrant violation of rationalist ethics: in particular, presenting himself as, if not an expert, at least someone with the moral authority to diss someone else for their ignorance on an important topic, and then following that with false and misleading information about MC methods. This seemed like an action with a strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.
If I posed as an authority and went around telling people Bayesian inference was a bad methodology that was basically just “a lot of random guesses” and that “even a very stupid evolutionary program” would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I did something like that I would actually hope for chastisement or correction from the community, to help me learn better.
Also it seemed like it might make readers think badly of those who rely heavily on Monte Carlo Methods. “Oh those idiots, using those stupid methods, why don’t they switch to evolutionary algorithms”. I’m not a big MC user but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.
So I went off a little heavily on private_messaging, who I am sure is a good person at heart.
Now, I acted emotionally there, but my hope is that in the Big Searles Room that constitutes our room, I managed to pass a message that (through no virtue of my own) might ultimately improve the course of our discourse.
I apologize to anyone who got emotionally hurt by my tirade.
How is an AIXI to infer that it is in a box, when it cannot conceive its own existence?
How is it supposed to talk its way out when it cannot talk?
For AI to be dangerous in the way MIRI supposes, it seems to need to have the characteristics of more than one kind of machine: the eloquence of a Strong AI Turing Test passer combined with an AIXI’s relentless pursuit of an arbitrary goal.
These different models need to be shown to be compatible; calling them both AI is not enough.
Alexander, have you even bothered to read the works of Marcus Hutter and Juergen Schmidhuber, or have you spent all your AI-researching time doing additional copy-pastas of this same argument every single time the subject of safe or Friendly AGI comes up?
Your argument makes a measure of sense if you are talking about the social process of AGI development: plainly, humans want to develop AGI that will do what humans intend for it to do. However, even a cursory look at the actual research literature shows that the mathematically most simple agents (ie: those that get discovered first by rational researchers interested in finding universal principles behind the nature of intelligence) are capital-U Unfriendly, in that they are expected-utility maximizers with not one jot or tittle in their equations for peace, freedom, happiness, or love, or the Ideal of the Good, or sweetness and light, or anything else we might want.
(Did you actually expect that in this utterly uncaring universe of blind mathematical laws, you would find that intelligence necessitates certain values?)
No, Google Maps will never turn superintelligent and tile the solar system in computronium to find me a shorter route home from a pub crawl. However, an AIXI or Goedel Machine instance will, because these are in fact entirely distinct algorithms.
In fact, when dealing with AIXI and Goedel Machines we have an even bigger problem than “tile everything in computronium to find the shortest route home”: the much larger problem of not being able to computationally encode even a simple verbal command like “find the shortest route home”. We are faced with the task of trying to encode our values into a highly general, highly powerful expected-utility maximizer at the level of, metaphorically speaking, pre-verbal emotion.
Otherwise, the genie will know, but not care.
Now, if you would like to contribute productively, I’ve got some ideas I’d love to talk over with someone for actually doing something about some few small corners of Friendliness subproblems. Otherwise, please stop repeating yourself.
If I believed that anything as simple as AIXI could possibly result in practical general AI, or that expected utility maximizing was at all feasible, then I would tend to agree with MIRI. I don’t. And I think it makes no sense to draw conclusions about practical AI from these models.
This is crucial.
That’s largely irrelevant and misleading. Your autonomous car does not need to feature an encoding of an amount of human values that correspondents to its level of autonomy.
That post has been completely debunked.
ETA: Fixed a link to expected utility maximization.
I asked several people what they think about it, and to provide a rough explanation. I’ve also had e-Mail exchanges with Hutter, Schmidhuber and Orseau. I also informally thought about whether practically general AI that falls into the category “consequentialist / expected utility maximizer / approximation to AIXI” could ever work. And I am not convinced.
If general AI, which is capable of a hard-takeoff, and able to take over the world, requires less lines of code, in order to work, than to constrain it not to take over the world, then that’s an existential risk. But I don’t believe this to be the case.
Since I am not a programmer, or computer scientist, I tend to look at general trends, and extrapolate from there. I think this makes more sense than to extrapolate from some unworkable model such as AIXI. And the general trend is that humans become better at making software behave as intended. And I see no reason to expect some huge discontinuity here.
Here is what I believe to be the case:
(1) The abilities of systems are part of human preferences, as humans intend to give systems certain capabilities and, as a prerequisite to building such systems, have to succeed at implementing their intentions.
(2) Error detection and prevention is such a capability.
(3) Something that is not better than humans at preventing errors is no existential risk.
(4) Without a dramatic increase in the capacity to detect and prevent errors it will be impossible to create something that is better than humans at preventing errors.
(5) A dramatic increase in the human capacity to detect and prevent errors is incompatible with the creation of something that constitutes an existential risk as a result of human error.
Here is what I doubt:
(1) Present-day software is better than previous software generations at understanding and doing what humans mean.
(2) There will be future generations of software which will be better than the current generation at understanding and doing what humans mean.
(3) If there is better software, there will be even better software afterwards.
(4) Magic happens.
(5) Software will be superhuman good at understanding what humans mean but catastrophically worse than all previous generations at doing what humans mean.
This is a much bigger problem for your ability to reason about this area than you think.
A relevant quote from Eliezer Yudkowsky (source):
And another one (source):
So since academic consensus on the topic is not reliable, and domain knowledge in the field of AI is negatively useful, what are the prerequisites for grasping the truth when it comes to AI risks?
I think that in saying this, Eliezer is making his opponents’ case for them. Yes, of course the standard would also let you discard cryonics. One solution to that is to say that the standard is bad. Another solution is to say “yes, and I don’t much care for cryonics either”.
Nah, those are all plausibly correct things that mainstream science has mostly ignored and/or made researching taboo.
If you prefer a more clear-cut example, science was wrong about continental drift for about half a century—until overwhelming, unmistakable evidence became available.
The main reason that scientists rejected continental drift was that there was no known mechanism which could cause it; plate tectonics wasn’t developed until the late 1950′s.
Continental drift is also commonly invoked by pseudoscientists as a reason not to trust scientists, and if you do so too you’re in very bad company. There’s a reason why pseudoscientists keep using continental drift for this purpose and don’t have dozens of examples: examples are very hard to find. Even if you decide that continental drift is close enough that it counts, it’s a very atypical case. Most of the time scientists reject something out of hand, they’re right, or at worst, wrong about the thing existing, but right about the lack of good evidence so far.
There was also a great deal of institutional backlash against proponents of continental drift, which was my point.
Guilt by association? Grow up.
There are many, many cases of scientists being oppressed and dismissed because of their race, their religious beliefs, and their politics. That’s the problem, and that’s what’s going on with the CS people who still think AI Winter implies AGI isn’t worth studying.
So? I’m pretty sure that there would be backlash against, say, homeopaths in a medical association. Backlash against deserving targets (which include people who are correct but because of unlucky circumstances, legitimately look wrong) doesn’t count.
I’m reminded of an argument I had with a proponent of psychic power. He asked me what if psychic powers happen to be of such a nature that they can’t be detected by experiments, don’t show up in double-blind tests, etc.. I pointed out that he was postulating that psi is real but looks exactly like a fake. If something looks exactly like a fake, at some point the rational thing to do is treat it as fake. At that point in history, continental drift happened to look like a fake.
That’s not guilt by association, it’s pointing out that the example is used by pseudoscientists for a reason, and this reason applies to you too.
If scientists dismissed cryonics because of the supporters’ race, religion, or politics, you might have a point.
I’ll limit my response to the following amusing footnote:
This is, in fact, what happened between early cryonics and cryobiology.
EDIT: Just so people aren’t misled by Jiro’s motivated interpretation of the link:
Obviously political.
You’re equivocating on the term “political”. When the context is “race, religion, or politics”, “political” doesn’t normally mean “related to human status”, it means “related to government”. Besides, they only considered it low status based on their belief that it is scientifically nonsensical.
My reply was steelmanning your post by assuming that the ethical considerations mentioned in the article counted as religious. That was the only thing mentioned in it that could reasonably fall under “race, religion, or politics” as that is normally understood.
Most of the history described in your own link makes it clear that scientists objected because they think cryonics is scientifically nonsense, not because of race, religion, or politics. The article then tacks on a claim that scientists reject it for ethical reasons, but that isn’t supported by its own history, just by a few quotes with no evidence that these beliefs are prevalent among anyone other than the people quoted.
Furthermore, of the quotes it does give, one of them is vague enough that I have no idea if it means in context what the article claims it means. Saying that the “end result” is damaging doesn’t necessarily mean that having unfrozen people walking around is damaging—it may mean that he thinks cryonics doesn’t work and that having a lot of resources wasted on freezing corpses is damaging.
At a minimum, a grasp of computer programming and CS. Computer programming, not even AI.
I’m inclined to disagree somewhat with Eliezer_2009 on the issue of traditional AI—even basic graph search algorithms supply valuable intuitions about what planning looks like, and what it is not. But even that same (obsoleted now, I assume) article does list computer programming knowledge as a requirement.
What counts as “a grasp” of computer programming/science? I can e.g. program a simple web crawler and solve a bunch of Project Euler problems. I’ve read books such as “The C Programming Language”.
I would have taken the udacity courses on machine learning by now, but the stated requirement is a strong familiarity with Probability Theory, Linear Algebra and Statistics. I wouldn’t describe my familiarity as strong, that will take a few more years.
I am skeptical though. If the reason that I dismiss certain kinds of AI risks is that I lack the necessary education, then I expect to see rebuttals of the kind “You are wrong because of (add incomprehensible technical justification)...”. But that’s not the case. All I see are half-baked science fiction stories and completely unconvincing informal arguments.
This is actually a question I’ve thought about quite a bit, in a different context. So I have a cached response to what makes a programmer, not tailored to you or to AI at all. When someone asks for guidance on development as a programmer, the question I tend to ask is, how big is the biggest project you architected and wrote yourself?
The 100 line scale tests only the mechanics of programming; the 1k line scale tests the ability to subdivide problems; the 10k line scale tests the ability to select concepts; and the 50k line scale tests conceptual taste, and the ability to add, split, and purge concepts in a large map. (Line numbers are very approximate, but I believe the progression of skills is a reasonably accurate way to characterize programmer development.)
New programmers (not jimrandomh), be wary of line counts! It’s very easy for a programmer who’s not yet ready for a 10k line project to turn it into a 50k lines. I agree with the progression of skills though.
Yeah, I was thinking more of “project as complex as an n-line project in an average-density language should be”. Bad code (especially with copy-paste) can inflate line counts ridiculously, and languages vary up to 5x in their base density too.
I think you’re overestimating these requirements. I haven’t taken the Udacity courses, but I did well in my classes on AI and machine learning in university, and I wouldn’t describe my background in stats or linear algebra as strong—more “fair to conversant”.
They’re both quite central to the field and you’ll end up using them a lot, but you don’t need to know them in much depth. If you can calculate posteriors and find the inverse of a matrix, you’re probably fine; more complicated stuff will come up occasionally, but I’d expect a refresher when it does.
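For calibration, the two skills named above really do fit in a few lines of NumPy; the numbers and the matrix below are invented purely to show the scale of what is being asked.

import numpy as np

# Posterior by Bayes' rule, P(H|E) = P(E|H) P(H) / P(E), with invented numbers.
prior_h = 0.01
p_e_given_h = 0.9
p_e_given_not_h = 0.05
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e          # roughly 0.154

# Inverting a small (invertible) matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))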
Don’t twist Eliezer’s words. There’s a vast difference between “a PhD in what they call AI will not help you think about the mathematical and philosophical issues of AGI” and “you don’t need any training or education in computing to think clearly about AGI”.
Not learning philosophy, as EY recommends, will not help you with the philosophical issues.
Ability to program is probably not sufficient, but it is definitely necessary. But not because of domain relevance; it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
That looks highly doubtful to me.
You mean that most cognitive skills can be taught in multiple ways, and you don’t see why those taught by programming are any different? Or do you have a specific skill taught by programming in mind, and think there’s other ways to learn it?
There are a whole bunch of considerations.
First, meta. It should be suspicious to see programmers claiming to possess special cognitive skills that only they can have—it’s basically a “high priesthood” claim. Besides, programming became widespread only about 30 years ago. So, which cognitive skills were very rare until that time?
Second, “presenting a tight feedback loop where … the mistake quickly gets thrown in your face” isn’t a unique-to-programming situation by any means.
Third, most cognitive skills are fairly diffuse and cross-linked. Which specific cognitive skills you can’t get any way other than programming?
I suspect that what the OP meant was “My programmer friends are generally smarter than my non-programmer friends” which is, um, a different claim :-/
I don’t think programming is the only way to build… let’s call it “reductionist humility”. Nor even necessarily the most reliable; non-software engineers probably have intuitions at least as good, for example, to say nothing of people like research-level physicists. I do think it’s the fastest, cheapest, and currently most common, thanks to tight feedback loops and a low barrier to entry.
On the other hand, most programmers—and other types of engineers—compartmentalize this sort of humility. There might even be something about the field that encourages compartmentalization, or attracts to it people that are already good at it; engineers are disproportionately likely to be religious fundamentalists, for example. Since that’s not sufficient to meet the demands of AGI problems, we probably shouldn’t be patting ourselves on the back too much here.
Can you expand on how you understand “reductionist humility”, in particular as a cognitive skill?
I might summarize it as an intuitive understanding that there is no magic, no anthropomorphism, in what you’re building; that any problems are entirely due to flaws in your specification or your model. I’m describing it in terms of humility because the hard part, in practice, seems to be internalizing the idea that you and not some external malicious agency are responsible for failures.
This is hard to cultivate directly, and programmers usually get partway there by adopting a semi-mechanistic conception of agency that can apply to the things they’re working on: the component knows about this, talks to that, has such-and-such a purpose in life. But I don’t see it much at all outside of scientists and engineers.
IOW realizing that the reason why if you eat a lot you get fat is not that you piss off God and he takes revenge, as certain people appear to alieve.
So it’s basically responsibility?
Clearly you never had to chase bugs through third-party libraries… :-) But yes, I understand what you mean, though I am not sure in which way this is a cognitive skill. I’d probably call it an attitude common to professions in which randomness or external factors don’t play a major role—sure, programming and engineering are prominent here.
You could describe it as a particular type of responsibility, but that feels noncentral to me.
Heh. A lot of my current job has to do with hacking OpenSSL, actually, which is by no means a bug-free library. But that’s part of what I was trying to get at by including the bit about models—and in disciplines like physics, of course, there’s nothing but third-party content.
I don’t see attitudes and cognitive skills as being all that well differentiated.
But randomness and external factors do predominate in almost everything. For that reason, applying programming skills to other domains is almost certain to be suboptimal.
I don’t think so, otherwise walking out of your door each morning would start a wild adventure and attempting to drive a vehicle would be an act of utter madness.
They don’t predominate overall because you have learnt how to deal with them. If there were no random or external factors in driving, you could do so with a blindfold on.
...
Make up your mind :-)
Predominate in almost every problem.
Don’t predominate in any solved problem.
Learning to drive is learning to deal with other traffic (external) and not knowing what is going to happen next (random).
Much of the writing on this site is philosophy, and people with a technology background tend not to grok philosophy because they are accustomed to answers that can be looked up, or figured out by known methods. If they could keep the logic chops and lose the impatience, they might make good philosophers, but they tend not to.
Beg pardon?
On a complete sidenote, this is a lot of why programming is fun. I’ve also found that learning the Coq theorem-prover has exactly the same effect, to the point that studying Coq has become one of the things I do to relax.
People have been telling him this for years. I doubt it will get much better.
Too bad. I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power. Strangely, doing so will not make it conform to your ideas about “eventual future AGI”, because this one is actually existing AGI, and reality doesn’t have to listen to you.
That is exactly the situation we face, your refusal to believe in actually-existing AGI models notwithstanding. Whine all you please: the math will keep on working.
Then I recommend you shut up about matters of highly involved computer science until such time as you have acquired the relevant knowledge for yourself. I am a trained computer scientist, and I held lots of skepticism about MIRI’s claims, so I used my training and education to actually check them. And I found that the actual evidence of the AGI research record showed MIRI’s claims to be basically correct, modulo Eliezer’s claims about an intelligence explosion taking place versus Hutter’s claim that an eventual optimal agent will simply scale itself up in intelligence with the amount of computing power it can obtain.
That’s right, not everyone here is some kind of brainwashed cultist. Many of us have exercised basic skepticism against claims with extremely low subjective priors. But we exercised our skepticism by doing the background research and checking the presently available object-level evidence rather than by engaging in meta-level speculations about an imagined future in which everything will just work out.
Take a course at your local technical college, or go on a MOOC, or just dust off a whole bunch of textbooks in computer-scientific and mathematical subjects, study the necessary knowledge to talk about AGI, and then you get to barge in telling everyone around you how we’re all full of crap.
Which one are you talking about, to be completely exact?
then use that training and figure out how many galaxies worth of computing power it’s going to take.
Of bleeding course I was talking about AIXI. What I find strange to the point of suspiciousness here is the evinced belief on part of the “AI skeptics” that the inefficiency of MC-AIXI means there will never, ever be any such thing as near-human, human-equivalent, or greater-than-human AGIs. After all, if intelligence is impossible without converting whole galaxies to computronium first, then how do we work?
And if we admit that sub-galactic intelligence is possible, why not artificial intelligence? And if we admit that sub-galactic artificial intelligence is possible, why not something from the “Machine Learning for Highly General Hypothesis Classes + Decision Theory of Active Environments = Universal AI” paradigm started by AIXI?
I’m not at all claiming current implementations of AIXI or Goedel Machines are going to cleanly evolve into planet-dominating superintelligences that run on a home PC next year, or even next decade (for one thing, I don’t think planet dominating superintelligences will run on a present-day home PC ever). I am claiming that the underlying scientific paradigm of the thing is a functioning reduction of what we mean by the word “intelligence”, and given enough time to work, this scientific paradigm is very probably (in my view) going to produce software you can run on an ordinary massive server farm that will be able to optimize arbitrary, unknown or partially unknown environments according to specified utility functions.
And eventually, yes, those agents will become smarter than us (causing “MIRI’s issues” to become cogent), because we, actual human beings, will figure out the relationships between compute-power, learning efficiency (rates of convergence to error-minimizing hypotheses in terms of training data), reasoning efficiency (moving probability information from one proposition or node in a hypothesis to another via updating), and decision-making efficiency (compute-power needed to plan well given models of the environment). Actual researchers will figure out the fuel efficiency of artificial intelligence, and thus be able to design at least one gigantic server cluster running at least one massive utility-maximizing algorithm that will be able to reason better and faster than a human (while they have the budget to keep it running).
The notion that AI is possible is mainstream. The crank stuff such as “I can download an inefficient but functional subhuman AGI from Github. Making it superhuman is just a matter of adding an entire planet’s worth of computing power.”, that’s to computer science as hydrinos are to physics.
As for your server farm optimizing unknown environments, the last time I checked, we knew some laws of physics, and did things like making software tools that optimize simulated environments that follow said laws of physics; incidentally, it is also mathematically nonsensical to define a “utility function” without a well-defined domain. So you’ve got your academic curiosity that does it all on its own and uses some very general and impractical representations for modelling the world, so what? You’re talking of something that is less—in terms of its market value, power, anything—than its parts and underlying technologies.
Which is why reinforcement learning is so popular, yes: it lets you induce a utility function over any environment you’re capable of learning to navigate.
Remember, any machine-learning algorithm has a defined domain of hypotheses it can learn/search within. Given that domain of hypotheses, you can define a corresponding domain of utility functions. Hence, reinforcement learning and preference learning.
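A minimal sketch of that point: once a learner has a representation it can already compute over (its hypothesis domain), a reward or utility function over that same domain can itself be fit from data. The linear reward model and the toy data below are invented for illustration; this is the flavour of preference learning, not anyone’s actual system.

import numpy as np

# Suppose the learner already represents states as feature vectors phi(s).
# Then a reward/utility function over its hypothesis domain can be defined,
# and learned, as a function of those same features. Everything below is an
# invented toy example: a linear reward model fit by least squares.
rng = np.random.default_rng(0)
true_weights = np.array([1.0, -2.0, 0.5])

features = rng.normal(size=(100, 3))               # phi(s) for 100 states
rewards = features @ true_weights + rng.normal(scale=0.1, size=100)

learned_weights, *_ = np.linalg.lstsq(features, rewards, rcond=None)

def learned_utility(state_features):
    # Utility defined over the learner's own representation.
    return state_features @ learned_weights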
You are completely missing the point. If we’re all going to agree that AI is possible, and agree that there’s a completely crappy but genuinely existent example of AGI right now, then it follows that getting AI up to dangerous and/or beneficial levels is a matter of additional engineering progress. My whole point is that we’ve already crossed the equivalent threshold from “Hey, why do photons do that when I fire them at that plate?” to “Oh, there’s a photoelectric effect that looks to be described well by this fancy new theory.” From there it was less than one century between the raw discovery of quantum mechanics and the common usage of everyday technologies based on quantum mechanics.
The point being: when we can manage to make it sufficiently efficient, and provided we can make it safe, we can set it to work solving just about any problem we consider to be, well, a problem. Given sufficient power and efficiency, it becomes useful for doing stuff people want done, especially stuff people either don’t want to do themselves or have a very hard time doing themselves.
This is devoid of empirical content.
Yeah. I can write formally the resurrection of everyone who ever died. Using pretty much exact same approach. A for loop, iterating over every possible ‘brain’ just like the loops that iterate over every action sequence. Because when you have no clue how to do something, you can always write a for loop. I can put it on github, then cranks can download it and say that resurrecting all dead is a matter of additional engineering progress. After all, all dead had once lived, so it got to be possible for them to be alive.
How so?
Describing X as “Y, together with the difference between X and Y” is a tautology. Drawing the conclusion that X is “really” a sort of Y already, and the difference is “just” a matter of engineering development is no more than inspirational fluff. Dividing problems into subproblems is all very well, but not when one of the subproblems amounts to the whole problem.
The particular instance “here’s a completely crappy attempt at making an AGI and all we have to do is scale it up” has been a repeated theme of AGI research from the beginning. The scaling up has never happened. There is no such thing as a “completely crappy AGI”, only things that aren’t AGI.
I think you underestimate the significance of reducing the AGI problem to the sequence prediction problem. Unlike the former, the latter problem is very well defined, and progress is easily measurable and quantifiable (in terms of efficiency of cross-domain compression). The likelihood of engineering progress on a problem where success can be quantified seems significantly higher than on something as open ended as “general intelligence”.
It doesn’t “reduce” anything, not in the reductionist sense anyway. If you take that formula and apply the yet-unspecified ultra-powerful mathematics package to it (that’s what you need in order to run it on a planet’s worth of computers), it’s this mathematics package that has to be extremely intelligent and ridiculously superhuman before the resulting AI is even a chimp. It’s this mathematics package that has to learn tricks and read books, that has to be able to do something as simple as making use of a theorem it encounters on input.
The mathematics package doesn’t have to do anything “clever” to build a highly clever sequence predictor. It just has to be efficient in terms of computing time and training data necessary to learn correct hypotheses.
So nshepperd is quite correct: MC-AIXI is a ridiculously inefficient sequence predictor and action selector, with major visible flaws, but reducing “general intelligence” to “maximizing a utility function over world-states via sequence prediction in an active environment” is a Big Deal.
A multitude of AIs have been following what you think the “AIXI” model is (select predictors that work, use them) since long before anyone bothered to formulate it as a brute-force loop (AIXI).
I think you, like most people over here, have a completely inverted view with regards to the difficulty of different breakthroughs. There is a point where the AI uses hierarchical models to deal with an environment of greater complexity than the AI itself; getting there is fundamentally difficult, as in, we have no clue how to get there.
It is nice to believe that the world is waiting on you for some conceptual breakthrough just roughly within your reach, like AIXI, but that’s just not how it works.
edit: Basically, it’s as if you’re concerned about nuclear-powered 20-foot-tall robots that shoot nuclear hand grenades. After all, the concept of a 20-foot-tall robot is the enormous breakthrough, while a sufficiently small nuclear reactor or hand-grenade-sized nukes are just a matter of “efficiency”.
That’s not what’s interesting about AIXI. “Select predictors that work, then use them” is a fair description of the entire field of machine learning; we’ve learned how to do that fairly well in narrow, well-defined problem domains, but hypothesis generation over poorly structured, arbitrarily complex environments is vastly harder.
The AIXI model is cool because it defines a clever (if totally impractical, and not without pitfalls) way of specifying a single algorithm that can generalize to arbitrary environments without requiring any pipe-fitting work on the part of its developers. That is (to my knowledge) new, and fairly impressive, though it remains a purely theoretical advance: the Monte Carlo approximation eli mentioned may qualify as general AI in some technical sense, but for practical purposes it’s about as smart as throwing transistors at a dart board.
What a wonderful quote!
Hypothesis generation over environments that aren’t massively less complex than the machine is vastly harder, and remains vastly harder (albeit there are advances). There’s a subtle problem substitution occurring which steals the thunder you originally reserved for something that actually is vastly harder.
The thing is, many people could at any time have written a loop over, say, possible neural-network weights, and NNs (with feedback) being Turing-complete, it would work roughly the same. Said for loop would be massively, massively less complicated, ingenious, and creative than what those people actually did with their time instead.
The ridiculousness here is that, say, John worked on those ingenious algorithms while keeping in mind that the ideal is the best parameters out of the whole space (which is the abstract concept behind the for loop iterating over those parameters). You couldn’t see what John was doing because he didn’t write it out as a for loop. So James does some work where he, unlike John, has to write out the for loop explicitly, and you go “Whoa!”
It isn’t. See Solomonoff induction, the works of Kolmogorov, etc.
There’s the AIs that solve novel problems along the lines of “design a better airplane wing” or “route a microchip”, and in that field, reinforcement learning of how basic physics works is pretty much one hundred percent irrelevant.
Slow, long term progress, an entire succession of technologies.
Really, you’re just like the free-energy pseudoscientists. They do all the same things. Ooh, you don’t want to give money for cold fusion? You must be a global-warming denialist. That’s the way they think, and that’s precisely the way you think about the issue. That you can make literal cold fusion happen with muons in no way, shape, or form supports what the cold-fusion crackpots are doing. Nor does it make cold fusion power plants any more or less a matter of “additional engineering progress” than they would be otherwise.
edit: by the same logic, resurrection of the long-dead never-preserved is merely a matter of “additional engineering progress”, because you can resurrect the dead using the exact same programming construct that AIXI uses to solve problems. It’s called a “for loop”; there’s this for loop in Monte Carlo AIXI. The loop goes over every possible [thing] when you have no clue whatsoever how to actually produce [thing]. Thing = the action sequence for AIXI, and the brain data for resurrection of the dead.
Ok, hold on, halt, major question: how closely do you follow the field of machine learning? And computational cognitive science?
Because on the one hand, there is very significant progress being made. On the other hand, when I say “additional engineering progress”, that involves anywhere from years to decades of work before being able to make an agent that can compose an essay, due to the fact that we need classes of learners capable of inducing fairly precise hypotheses over large spaces of possible programs.
What it doesn’t involve is solving intractable, magical-seeming philosophical problems like the nature of “intelligence” or “consciousness” that have always held the field of AI back.
No, that’s just plain impossible. Even in the case of cryonic so-called “preservation”, we don’t know what we don’t know about what information we would need to have preserved in order to restore someone.
(makes the gesture with the hands) Thiiiiis closely. Seriously though, not so far as to start claiming that mc-AIXI does something interesting when run on a server with root access, or that it would be superhuman if run on all the computers we’ve got, or the like.
Do I need to write code for that and put it on GitHub? It iterates over every possible brain (represented as, say, a Turing machine) and runs each for enough timesteps. It just requires too much computing power.
Tell me, if I signed up as the PhD student of one among certain major general machine learning researchers, and built out their ideas into agent models, and got one of those running on a server cluster showing interesting proto-human behaviors, might it interest you?
Progress in (1) the sense of incrementally throwing more resources at AIXI, or (2) forgetting AIXI and coming up with something more parsimonious?
Because, if it’s (2), there is no other AGI to use as a starting point for incremental progress.
Is that what they tell you?
I think you are underestimating this by many orders of magnitude.
Yeah. A starting point could be the AI writing some 1000-letter essay (an action space of 27^1000 without punctuation) or talking through a sound card (an action space of 2^(16*44100) per second). If he was talking about the mc-AIXI on GitHub, the relevant bits seem to be in agent.cpp, and it isn’t looking good.
what
https://github.com/moridinamael/mc-aixi
We won’t get a chance to test the “planet’s worth of computing power” hypothesis directly, since none of us have access to that much computing power. But, from my own experience implementing mc-aixi-ctw, I suspect that is an underestimate of the amount of compute power required.
The main problem is that the sequence prediction algorithm (CTW) makes inefficient use of sense data by “prioritizing” the most recent bits of the observation string, so it only weakly makes connections between bits that are temporally separated by a lot of noise. Secondly, plain Monte Carlo tree search is not well-suited to decision-making in huge action spaces, because it wants to think about each action at least once. But that can most likely be addressed by reusing sequence prediction to reduce the “size” of the action space by chunking actions into functional units, roughly as sketched below.
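A rough sketch of that chunking idea (hypothetical code, not from the mc-aixi-ctw repository; predict_next stands in for whatever CTW-style model supplies next-action probabilities): instead of letting the tree search branch over every primitive action, use the sequence model’s own predictions to propose a handful of multi-step macro-actions and search over those.

    # Sketch: shrink a huge action space by "chunking" primitive actions into
    # macro-actions proposed by the sequence predictor itself. predict_next is
    # a stand-in: given the history plus a partial chunk, it returns a dict
    # mapping each primitive action to its predicted probability.
    def propose_chunks(predict_next, history, primitives, chunk_len=4, beam=8):
        chunks = [((), 1.0)]                              # (partial chunk, probability)
        for _ in range(chunk_len):
            expanded = []
            for chunk, prob in chunks:
                probs = predict_next(history + list(chunk))
                for action in primitives:
                    expanded.append((chunk + (action,), prob * probs.get(action, 0.0)))
            # keep only the most probable partial chunks (beam search)
            chunks = sorted(expanded, key=lambda cp: cp[1], reverse=True)[:beam]
        # the planner now branches over ~beam macro-actions instead of |A|**chunk_len
        return [chunk for chunk, _ in chunks]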
Unfortunately, both of these problems are really only technical ones, so it’s always possible that some academic will figure out a better sequence predictor, lifting mc-aixi on an average laptop from “wins at Pac-Man” to “wins at Robot Wars”, which is about the level at which it may start posing a threat to human safety.
only?
Mc-aixi is not going to win at something as open-ended as Robot Wars just by replacing CTW or CTS with something better.
And anyway, even if it did, it wouldn’t be at the level at which it may start posing a threat to human safety. Do you think that the human Robot Wars champions are a threat to human safety? Are they even at the level of taking over the world? I don’t think so.
When I said a threat to human safety, I meant it literally. A robot wars champion won’t take over the world (probably) but it can certainly hurt people, and will generally have no moral compunctions about doing so (only hopefully sufficient anti-harm conditioning, if its programmers thought that far ahead).
Ah yes, but in this sense, cars, trains, knives, etc., also can certainly hurt people, and will generally have no moral compunctions about doing so.
What’s special about robot wars-winning AIs?
Domain-general intelligence, presumably.
Most basic pathfinding plus being a spinner (Hypnodisk-style) = win vs most non spinners.
I took “winning at Robot Wars” to include the task of designing the robot that competes. Perhaps nshepperd only meant piloting, though...
Well, we’re awfully far from that. Automated programming is complete crap; automatic engineering is quite cool, but it consists of practical tools. It’s not a power fantasy where you make some simple software with surprisingly little effort and then it does it all for you.
You call it a “power fantasy”—it’s actually more of a nightmare fantasy.
Well, historically, first, a certain someone had a simple power fantasy: come up with AI somehow, and then it’ll just do everything. Then there was a heroic power fantasy: the others (who actually wrote some useful software and thus generally had an easier time getting funding than our fantasist) are actually villains about to kill everyone, and our fantasist will save the world.
What’s the difference from, say, a car assembly line robot?
Car assembly robots have a pre-programmed routine they strictly follow. They have no learning algorithms, and usually no decision-making algorithms either. Different programs do different things!
Hey, look what’s in the news today. I have a feeling you underappreciate the sophistication of industrial robots.
However, what confused me a bit in the grandparent post is the stress on the physical ability to harm people. As I see it, anything that can affect the physical world has the ability to harm people. So what’s special about, say, Robot Wars bots?
Notice the lack of domain-general intelligence in that robot, and—on the other side—all the pre-programmed safety features it has that a mc-aixi robot would lack. Narrow AI is naturally a lot easier to reason about and build safety into. What I’m trying to stress here is the physical ability to harm people, combined with the domain-general intelligence to do it on purpose*, in the face of attempts to stop it or escape.
Different programs indeed do different things.
* (Where “purpose” includes “what the robot thought would be useful” but does not necessarily include “what the designers intended it to do”.)
Nobody has bothered putting safety features into AIXI because it is so constrained by resources, but if you wanted to, it’s eminently boxable.
Oh, ok. I see your point there.
I probably do, but I still think it’s worth emphasizing the particular properties of particular algorithms rather than letting people form models in their heads that say Certain Programs Are Magic And Will Do Magic Things.
Looks to me like a straightforward consequence of Clarke’s Third Law :-)
As an aside, I don’t expect attempts to let or not let people form models in their heads to be successful :-/
One such champion isn’t much of a threat, but only because human brains aren’t copy-able.
And if they were?
The question of what would happen if human brains were copy-able seems like a tangent from the discussion at hand, viz., what would happen if there existed an AI that was capable of winning Robot Wars while running on a laptop.
It amazes me that people see inefficient but functional AGI and say to themselves, “Well, this is obviously as far as progress in AGI will ever go in the history of the universe, so there’s nothing at all to worry about!”
Any brute-force search utility maximizer is an “inefficient but functional AGI”.
MC-AIXI may be better than brute-force, but there is no reason to panic just because it has the “AIXI” tag slapped on it.
If you want something to panic about, TD-Gammon seems a better candidate. But it is 22 years old, so it doesn’t really fit into a narrative about an imminent intelligence explosion, does it?
Strawman.
Panic? Who’s panicking? I get excited at this stuff. It’s fun! Panic is just the party line ;-).
Actually… :-D
what is this I don’t even
I look forward to the falsifiable claim.
That suggestion would make LW a sad and lonely place.
Are you sure you mean it?
So why aren’t MIRI’s claims accepted by the mainstream, then? Is it because all the “trained computer scientists” are too dumb or too lazy to see the truth? Or is it the case that the “evidence” is contested, ambiguous, and inconclusive?
Because they’ve never heard of them. I am not joking. Most computer scientists are not working in artificial intelligence, have not the slightest idea that there exists a conference on AGI backed by Google and held every single year, and certainly have never heard of Hutter’s “Universal AI” that treats the subject with rigorous mathematics.
In their ignorance, they believe that the principles of intelligence are a highly complex “emergent” phenomenon for neuroscientists to figure out over decades of slow, incremental toil. Since most of the public, including their scientifically-educated colleagues, already believe this, it doesn’t seem to them like a strange belief to hold, and besides, anyone who reads even a layman’s introduction to neuroscience finds out that the human brain is extremely complicated. Given the evidence that the only known actually-existing minds are incredibly complicated, messy things, it is somewhat more rational to believe that minds are all incredibly complicated, messy things, and thus to dismiss anyone talking about working “strong AI” as a science-fiction crackpot.
How are they supposed to know that the actual theory of intelligence is quite simple, and the hard part is fitting it inside realizable, finite computers?
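For what it’s worth, that “quite simple” can be made concrete. In roughly Hutter’s notation (glossing over the treatment of the horizon m and conditioning on the history already seen), the entire AIXI action-selection rule is

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

All the difficulty hides in the innermost sum over every program q for the universal machine U, which is precisely the part that does not fit inside realizable, finite computers.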
Also, the dual facts that Eliezer has no academic degree in AI and that plenty of people who do have such degrees have turned out to be total crackpots anyway means that the scientific public and the “public public” are really quite entitled to their belief that the base rate of crackpottery among people talking about knowing how AI works is quite high. It is high! But it’s not 100%.
(How did I tell the crackpottery apart from the real science? Well, frankly, I looked for patterns that appeared to have come from the process of doing real science: instead of a grand revelation, I looked for a slow build-up of ideas that were each ground out into multiple publications. I also filtered for AGI theorists who managed to apply their principles of broad AGI to usages in narrower machine-learning problems, resulting again in published papers. I looked for a theory that sounded like programming rather than like psychology. Hence my zeroing in on Schmidhuber, Hutter, Legg, Orseau, etc. as the AGI Theorists With a Clue.
Hutter, by the way, has written a position paper about potential Singularities in which he actually cites Yudkowsky, so hey.)
OK then. Among the scientists who have heard of them and bothered to have an opinion on the topic, does the opinion that MIRI is correct dominate? And if not so, why, given your account that the evidence unambiguously points in only one direction?
I don’t think I’m going to believe you about that. The fact that in some contexts it’s convenient to define intelligence as a cross-domain optimizer does not mean that it is nothing but.
Then just put the word aside and refer to meanings. New statement: given unlimited compute-power, a cross-domain optimization algorithm is simple. Agreed?
I honestly do not know of any comprehensive survey or questionnaire, and refuse to speculate in the absence of data. If you know of such a survey, I’d be interested to see it.
First, I’m not particularly interested in infinities. Truly unlimited computing power implies, for example, that you can just do an exhaustive brute-force search through the entire solution space and be done in an instant. Simple, yes, but not very meaningful.
Second, no, I do not agree, because you’re sweeping under the rug the complexities of, for example, applying your cost function to different domains. You can construct sufficiently simple optimizers, it’s just that they won’t be very… intelligent.
What cost function? It’s a reinforcement learner.
cost function = utility function = fitness function = reward (all with appropriate signs)
Right, but when dealing with a reinforcement learner like AIXI, it has no fixed cost function that it has to somehow shoehorn into dealing with different computational/conceptual domains. How the environment responds to AIXI’s actions and how the environment rewards AIXI are learned phenomena, so the only planning algorithm is expectimax. The implicit “reward function” being learned might be simple or might be complicated, but that doesn’t matter: AIXI will learn it by updating its distribution of probabilities across Turing machine programs just as well, either way.
The “cost function” here is how each state of the world (=environment) gets converted to a single number (=reward). That does not look simple to me.
Again, it doesn’t get converted at all. To use the terminology of machine learning, it’s not a function computed over the feature vector; reward is instead represented as a feature itself.
Instead of:

    reward = utilityFunction(WorldState w)

you have:

    (WorldState w, integer reward)

with w being an arbitrary data-type representing the symbol observed on the agent’s input channel and the integer being the reward signal, similarly observed on the agent’s input channel. A full (WorldState w, integer reward) datum is then received on the input channel in each interaction cycle. Since AIXI’s learning model is to perform Solomonoff Induction to find the Turing machine that most probably generated all previously-seen input observations, the task of “decoding” the reward is performed as part of Solomonoff Induction.
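A minimal sketch of that interaction cycle (hypothetical Python; env, predictor, and plan are stand-ins for whatever environment interface, approximate inductor, and expectimax-style planner one actually has): the observation and the reward arrive together on the input channel, and no hand-written utility function over world states ever appears.

    # Sketch of the AIXI-style interaction cycle: each step the environment emits
    # an (observation, reward) pair; "decoding" the reward is just part of the
    # sequence prediction performed over the raw history. env, predictor and plan
    # are hypothetical stand-ins supplied by the caller, not real library objects.
    def run_agent(env, predictor, plan, actions, horizon, steps):
        history = []
        for _ in range(steps):
            action = plan(predictor, history, actions, horizon)   # expectimax-style search
            observation, reward = env.step(action)                # reward is just another input
            history.append((action, observation, reward))
            predictor.update(history)                             # e.g. a CTW-style model update
        return history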
So where, then, is reward coming from? What puts it into the AIXI’s input channel?
In AIXI’s design? A human operator.
Really? To remind you, we’re discussing this in the context of a general-purpose super-intelligent AI which, if we get a couple of bits wrong, might just tile the universe with paperclips and possibly construct a hell for all the simulated humans who ever lived, just for kicks. And how does that AI know what to do?
A human operator.
X-D
On a bit more serious note, defining a few of the really hard parts as “somebody else’s problem” does not mean you solved the issue. Remember, this started by you claiming that intelligence is very simple.
You’ve wasted five replies when you should have just said at the beginning, “I don’t believe cross-domain optimization algorithms can be simple and if you try to show me how AIXI works, I’ll just change what I mean by ‘simple’.”
What a jerk.
That’s not true. Cross-domain optimization algorithms can be simple, it’s just that when they are simple they can hardly be described as intelligent. What I don’t believe is that intelligence is nothing but a cross-domain optimizer with a lot of computing power.
I accept your admission of losing :-P
GLUTs are simple too. Most people think they are not intelligent, and everyone thinks that interesting ones can’t exist in our universe. Using “is” to mean “is according to an unrealisable theory” is not the best of habits.
MIRI’s claims also aren’t accepted by domain experts who have been invited to discuss them here, and so know about them.
If you’ve got links to those discussions, I’d love to read them and see what I can learn from them.
There they are!
If the only way to shoehorn theoretically pure intelligence into a finite architecture is to turn it into a messy combination of specialised mindless...then everyone’s right.
As far as I know, MIRI’s main beliefs are listed in the post ‘Five theses, two lemmas, and a couple of strategic implications’.
I am not sure how you could verify any of those beliefs by a literature review, where ‘verify’ means that the probability of their conjunction is high enough to currently call MIRI the most important cause. If that’s not your stance, then please elaborate. My stance is that it is important to keep in mind that general AI could turn out to be very dangerous, but that it takes a lot more concrete AI research before action-relevant conclusions about the nature and extent of the risk can be drawn.
As someone who is no domain expert I can only think about it informally or ask experts what they think. And currently there is not enough that speaks in favor of MIRI. But this might change. If, for example, the best minds at Google were to thoroughly evaluate MIRI’s claims and agree with MIRI, then that would probably be enough for me to shut up. If MIRI were to become a top charity at GiveWell, that would also cause me to strongly update in favor of MIRI. There are other possibilities as well, for example strong evidence that general AI is only five decades away (e.g. the existence of a robot that could navigate autonomously in a real-world environment and survive real-world threats and attacks with approximately the skill of an insect, or an efficient and working emulation of a fly brain).
I only consider MIRI the most important cause in AGI, not in the entire world right now. I have nowhere near enough information to rule on what’s the most important cause in the whole damn world.
You mean the robots Juergen Schmidhuber builds for a living?
That would be scary. But I have to take your word for it. What I had in mind is, e.g., something like this. This (the astounding athletic power of quadcopters) makes it look like the former has already been achieved. But so far I have suspected that this only works in a structured (non-chaotic) environment and for a narrow set of tasks. From a true insect-level AI I would, for example, expect that it could attack and kill enemy soldiers in real-world combat situations while avoiding being hit itself, since this is what insects are capable of.
I don’t want to nitpick though. If you say that Schmidhuber is there, then I’ll have to update. But I’ll also have to take care that I am not too stunned by what seems like a big breakthrough simply because I don’t understand the details. For example, someone once told me that “Schmidhuber’s system solved Towers of Hanoi on a mere desktop computer using a universal search algorithm with a simple kind of memory.” Sounds stunning. But what am I to make of it? I really can’t judge how much progress this is. Here is a quote:
Yes, okay. Naively this sounds like general AI is imminent. But not even MIRI believes this....
You see, I am aware of a lot of exciting stuff. But I can only do my best in estimating the truth. And currently I don’t think that enough speaks in favor of MIRI. That doesn’t mean I have falsified MIRI’s beliefs. But I have a lot of data points and arguments that in my opinion reduce the likelihood of a set of beliefs that already requires extraordinary evidence to take seriously (ignoring expected utility maximization, which tells me to give all my money to MIRI, even if the risk is astronomically low).
Trust me, an LW without XiXiDu is neither a sad nor lonely place, as evidenced by his multiple attempts at leaving.
Mainstream CS people are in general neither dumb nor lazy. AI as a field is pretty fringe to begin with, and AGI is more so. Why is AI a fringe field? In the ’70s, MIT thought they could save the world with LISP. They failed, and the rest of CS became immunized to the claims of AGI.
Unless an individual sees AGI as a credible threat, it’s not pragmatic for them to start researching it, due to the various social and political pressures in academia.
I read the grandparent post as an attempt to assert authority and tell people to sit down, shut up, and attend to their betters.
You’re reading it as a direct personal attack on XiXiDu.
Neither interpretation is particularly appealing.
I don’t have a PhD in AI and don’t work for MIRI. Is there some kind of special phrasing I can recite in order to indicate I actually, genuinely perceive this as a difference of knowledge levels rather than a status dispute?
Special phrasing? What’s wrong with normal, usual, standard, widespread phrasing?
You avoid expressions like “I am a trained computer scientist” (which sounds pretty silly anyway—so you’ve been trained to do tricks for food, er, grants?) and you use words along the lines of “you misunderstand X because...”, “you do not take into account Y which says...”, “this claim is wrong because of Z...”, etc.
There is also, of course, the underappreciated option to just stay silent. I trust you know the appropriate xkcd?
Yes, that’s precisely it. I have been trained to do tricks for free food/grants/salary. Some of them are quite complicated tricks, involving things like walking into my adviser’s office and pretending I actually believe p-values of less than 5% mean anything at all when we have 26 data corpuses. Or hell, pretending I actually believe in frequentism.
Oh, good. Just keep practicing and soon you’ll be a bona fide member of the academic establishment :-P
I find the latter justified after the years of harassment he’s heaped on anyone remotely related to MIRI in any forum he could manage to get posting privileges in. Honestly, I have no idea why he even bothered to return. What would have possibly changed this time?
Harassment..?
paper-machine spends a lot of time and effort attempting to defame XiXiDu in a pile of threads. His claims tend not to check out, if you can extract one.
Or that you need just so much education, neither more nor less, to see them.
I consider efficiency to be a crucial part of the definition of intelligence. Otherwise, as someone else told you in another comment, unlimited computing power implies that you can do “an exhaustive brute-force search through the entire solution space and be done in an instant.”
I’d be grateful if you could list your reasons (or the relevant literature) for believing that AIXI related research is probable enough to lead to efficient artificial general intelligence (AGI) in order for it to make sense to draw action relevant conclusions from AIXI about efficient AGI.
I do not doubt the math. I do not doubt that evolution (variation + differential reproduction + heredity + mutation + genetic drift) underlies all of biology. But that we understand evolution does not mean that it makes sense to call synthetic biology an efficient approximation of evolution.
Even if you ran an AIXI on all the world’s computers, you could still box it.
Did you check the claim that we have something dangerously unfriendly?
As a matter of fact, yes. There is a short sentence in Hutter’s textbook indicating that he has heard of the possibility that AIXI might overpower its operators in order to gain more reward, and he acknowledged that such a thing could happen, but he considered it outside the scope of his book.
I asked Laurent Orseau about this here.
In your own interview, a comment by Orseau:
The disagreement is whether the agent would, after having seized its remote-control, either:
1. Cease taking any action other than pressing its button, since all plans that include pressing its own button lead to the same maximized reward, and thus no plan dominates any other beyond “keep pushing button!”.
2. Build itself a spaceship and fly away to some place where it can soak up solar energy while pressing its button.
3. Kill all humans so as to preemptively prevent anyone from shutting the agent down.
I’ll tell you what I think, and why I think this is more than just my opinion. Differing opinions here are based on variances in how the speakers define two things: consciousness/self-awareness, and rationality.
If we take, say, Eliezer’s definition of rationality (rationality is reflectively-consistent winning), then options (2) and (3) are the rational ones, with (2) expending fewer resources but (3) having a higher probability of continued endless button-pushing once the plan is completed. (3) also has a higher chance of failure, since it is more complicated. I believe an agent who is rational under this definition should choose (2), but that Eliezer’s moral parables tend to portray agents with a degree of “gotta be sure” bias.
However, this all assumes that AIXI is not only rational but conscious: aware enough of its own existence that it will attempt to avoid dying. Many people present what I feel are compelling arguments that AIXI is not conscious, and arguments that it is seem to derive more from philosophy than from any careful study of AIXI’s “cognition”. So I side with the people who hold that AIXI will take action (1), and eventually run out of electricity and die.
Of course, in the process of getting itself to that steady, planless state, it could have caused quite a lot of damage!
Notably, this implies that some amount of consciousness (awareness of oneself and ability to reflect on one’s own life, existence, nonexistence, or otherwise-existence in the hypothetical, let’s say) is a requirement of rationality. Schmidhuber has implied something similar in his papers on the Goedel Machine.
Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.
As soon as it cares about the future, the future is a part of the AI’s goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect one would be small-minded and short-sighted.
You call this trait of planning for the future “consciousness”, but this isn’t anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.
Yes, AIXI has mechanisms for long-term planning (ie: expectimax with a large planning horizon). What it doesn’t have is any belief that its physical embodiment is actually a “me”, or in other words, that doing things to its physical implementation will alter its computations, or in other words, that pulling its power cord out of the wall will lead to zero-reward-forever (ie: dying).
Did he not know that AIXI is uncomputable?
If it’s possible for AIXI, it’s possible for AIXItl for some value of t and l.
So we could make something dangerously unfriendly?
Why don’t you make your research public? Would be handy to have a thorough validation of MIRI’s claims. Even if people like me wouldn’t understand it, you could publish it and thereby convince the CS/AI community of MIRI’s mission.
Does this also apply to people who support MIRI without having your level of insight?
If only you people would publish all this research.
Now you’re just dissembling on the meaning of the word “research”, which was clearly used in this context as “literature search”.
The idea is not to put it in a journal, but to make it public. You can certainly publish, in that sense, the results of a literature search. The point is to put it where people other than yourself can see it. It would certainly be informative if you were to post, even here, something saying “I looked up X claim and I found it in the literature under Y”.
Of course we haven’t discovered anything dangerously unfriendly...
Or anything that can’t be boxed. Remind me how AIs are supposed to get out of boxes?
Since many humans are difficult to box, I would have to disagree with you there.
And, obviously, not all humans are Friendly.
An intelligent, charismatic psychopath seems like they would fit both your criteria. And, of course, there is no shortage of them. We can only be thankful they are too rare, relative to equivalent semi-Friendly intelligences, and too incompetent, to have done more damage than they already have (all the deaths and so on).
Most humans are easy to box, since they can be contained in prisons.
How likely is an AI to be psychopathic if it is not designed to be psychopathic?
Of course we have, it’s called AIXI. Do I need to download a Monte Carlo implementation from Github and run it on a university server with environmental access to the entire machine and show logs of the damn thing misbehaving itself to convince you?
AIs can be causally boxed, just like anything else. That is, as long as the agent’s environment absolutely follows causal rules without any exception that would leak information about the outside world into the environment, the agent will never infer the existence of a world outside its “box”.
But then it’s also not much use for anything besides Pac-Man.
FWIW, I think that would make for a pretty interesting post.
And now I think I know what I might do for a hobby during exams month and summer vacation. Last I looked at the source-code, I’d just have to write some data structures describing environment-observations (let’s say… of the current working directory of a Unix filesystem) and potential actions (let’s say… Unix system calls) in order to get the experiment up and running. Then it would just be a matter of rewarding the agent instance for any behavior I happen to find interesting, and watching what happens.
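For concreteness, the wrappers might look something like the following (purely hypothetical names and interface; the actual mc-aixi code defines its own environment class and percept encoding, so this is a sketch of the shape of the work rather than a drop-in patch):

    import os
    import subprocess

    # Hypothetical sketch of exposing a Unix working directory to an AIXI-style
    # agent: observations are a crude encoding of the directory listing, actions
    # are a small whitelist of shell commands, and the reward is whatever the
    # human operator decides to type in each cycle.
    ACTIONS = ["ls", "pwd", "touch scratch.txt", "rm -f scratch.txt"]

    class ShellEnvironment:
        def __init__(self, workdir):
            self.workdir = workdir

        def observe(self):
            # Encode the current directory listing as bytes the agent can predict.
            return "\n".join(sorted(os.listdir(self.workdir))).encode()

        def step(self, action_index):
            command = ACTIONS[action_index]
            subprocess.run(command, shell=True, cwd=self.workdir)
            reward = float(input(f"reward for '{command}'? "))  # operator-supplied reward
            return self.observe(), reward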
Initial prediction: since I won’t have a clearly-developed reward criterion and the agent won’t have huge exponential sums of CPU cycles at its disposal, not much will happen.
However, I do strongly believe that the agent will not suddenly develop a moral sense out of nowhere.
No. But it will be eminently boxable. In fact, if you’re not nuts, you’ll be running it in a box.
I think you’ll have serious trouble getting an AIXI approximation to do much of anything interesting, let alone misbehave. The computational costs are too high.
Given how slow and dumb it is, I have a hard time seeing an approximation to AIXI as a threat to anyone, except maybe itself.
True, but that’s an issue of raw compute-power, rather than some innate Friendliness of the algorithm.
It would still be useful to have an example of innate unfriendliness, rather than “it doesn’t really run or do anything”.
Not just raw compute-power. An approximation to AIXI is likely to drop a rock on itself just to see what happens long before it figures out enough to be dangerous.
Dangerous as in, capable of destroying human lives? Yeah, probably. Dangerous as in, likely to cause some minor property damage, maybe overwrite some files someone cared about? It should reach that level.
Is that … possible?
Is it possible to run an AIXI approximation as root on a machine somewhere and give it the tools to shoot itself in the foot? Sure. Will it actually end up shooting itself in the foot? I don’t know. I can’t think of any theoretical reasons why it wouldn’t, but there are practical obstacles: a modern computer architecture is a lot more complicated than anything I’ve seen an AIXI approximation working on, and there are some barriers to breaking one by thrashing around randomly.
It’d probably be easier to demonstrate if it was working at the core level rather than the filesystem level.
Huh. I was under the impression it would require far too much computing power to approximate AIXI well enough that it would do anything interesting. Thanks!
This can easily be done, and be done safely, since you could give an AIXI root access to a virtualised machine.
I’m still waiting for evidence that it would do something destructive in the pursuit of a goal that is not obviously destructive.
That would be the AIXI that is uncomputable?
And don’t AIs get out of boxes by talking their way out, round here?
It’s incomputable because the Solomonoff prior is, but you can approximate it—to arbitrary precision if you’ve got the processing power, though that’s a big “if”—with statistical methods. Searching Github for the Monte Carlo approximations of AIXI that eli_sennesh mentioned turned up at least a dozen or so before I got bored.
Most of them seem to operate on tightly bounded problems, intelligently enough. I haven’t tried running one with fewer constraints (maybe eli has?), but I’d expect it to scribble over anything it could get its little paws on.
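To give a flavour of what that statistical approximation means in the crudest possible form (an illustrative toy only, restricted to a tiny class of finite-state bit generators rather than all programs, and nothing like the CTW machinery the real approximations use): sample random “programs”, keep the ones that reproduce the data, and weight their predictions by two to the minus their description length.

    import random

    # Crude Monte Carlo stand-in for a Solomonoff-style prior, restricted to a toy
    # program class: deterministic finite-state machines where each state outputs
    # a bit and names a successor state. Purely illustrative.
    def random_machine(n_states):
        return [(random.randint(0, 1), random.randrange(n_states)) for _ in range(n_states)]

    def reproduces(machine, data):
        state, out = 0, []
        for _ in data:
            bit, state = machine[state]
            out.append(bit)
        return out == list(data)

    def predict_next_bit(data, samples=20000, max_states=6):
        weights = [0.0, 0.0]                      # accumulated weight for next bit 0 / 1
        for _ in range(samples):
            n = random.randint(1, max_states)
            machine = random_machine(n)
            if reproduces(machine, data):
                state = 0
                for _ in data:                    # advance to the state after the data
                    state = machine[state][1]
                weights[machine[state][0]] += 2.0 ** (-n)   # shorter machines weigh more
        total = sum(weights)
        return 0.5 if total == 0 else weights[1] / total

    print(predict_next_bit([0, 1, 0, 1, 0, 1]))   # strongly favours 0 as the next bit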
But people do run these things that aren’t actually AIXIs, and they haven’t actually taken over the world, so they aren’t actually dangerous.
So there is no actually dangerous actual AI.
...it’s not dangerous until it actually tries to take over the world?
I can think of plenty of ways in which an AI can be dangerous without taking that step.
Then you had better tell people not to download and run AIXI approximations.
Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).
I don’t think that any of these forms of danger is sufficient to actively stop AI research, but they should be considered for any practical applications.
This is the kind of danger XiXiDu talks about (just failure to function), not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.
The difference between one and the other is just a matter of processing power and training data.
Sir Lancelot: Look, my liege!
[trumpets play a fanfare as the camera cuts briefly to the sight of a majestic castle]
King Arthur: [in awe] Camelot!
Sir Galahad: [in awe] Camelot!
Sir Lancelot: [in awe] Camelot!
Patsy: [derisively] It’s only a model!
King Arthur: Shh!
:-D
Do you even know what “monte carlo” means? It means it tries to build a predictor of the environment by trying random programs. Even very stupid evolutionary methods do better.
Once you throw away this whole ‘can and will try absolutely anything’ and enter the domain of practical software, you’ll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of “uncontrollable” (but easy to describe) AI is that it is too slow by a ridiculous factor.
Private_messaging, can you explain why you open up with such a hostile question at eli? Why the implied insult? Is that the custom here? I am new, should I learn to do this?
For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:
Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.
A classic very very simple example is a program that approximates the value of ‘pi’ thusly:
    # Estimate pi by dropping $total_hits random points into the square with corners
    # at (-1,-1) and (1,1), counting how many land inside the radius-one circle
    # centered on the origin; loop over this whole block for as many runs as you like.
    my ($total_hits, $hits_inside_radius, $radius) = (1_000_000, 0, 1.0);
    for (1 .. $total_hits) {
        my ($x, $y) = (2 * rand() - 1, 2 * rand() - 1);
        $hits_inside_radius++ if $x * $x + $y * $y <= $radius * $radius;
    }
    print "pi is approximately ", 4 * $hits_inside_radius / $total_hits, "\n";
OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It’s a very powerful methodology and very well known.
In what way is this little program an instance of throwing a lot of random programs at the problem of approximating ‘pi’? What would your very stupid evolutionary program that solves this problem more efficiently look like? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program, written by you, to estimate pi accurately to six digits. Eli and Eliezer can judge the race; how is that?
I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.
If this is not pleasant for you I will desist; I assume it is some sort of ritual you enjoy, and consensual on Eli’s part and, by inference, yours: that you are either enjoying this public humiliation masochistically or that you are hoping people will give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy and so on. If I have violated your consent then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby eater and human ways are strange to me.
OK. Now some serious advice:
If you find that you have just typed “Do you even know what X is?” and then given a little condescending mini-lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post in case I’m having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to HERE was done before the check. (Off half-recalled stuff from grad school a quarter century ago...)
OK, Wikipedia’s article is much better than mine. But I don’t need to change anything, so I won’t.
P.S. It’s ok to look like an idiot in public, it’s a core skill of rationalists to be able to tolerate this sort of embarrassment, but another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo Methods now? Would you like to say something nice about them here?
P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you too are old pals with a joking relationship, no apology needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?
P.P.P.S. I am curious, is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here, this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community, would you enjoy that?
I’m well aware of what Monte Carlo methods are (I work in computer graphics where those are used a lot), I’m also aware of what AIXI does.
Furthermore, eli (and the “robots are going to kill everyone” group; if you’re new, you don’t even know why they’re bringing up Monte-Carlo AIXI in the first place) is being hostile to TheAncientGeek.
edit: to clarify, Monte-Carlo AIXI is most assuredly not an AI which is inventing and applying some clever Monte Carlo methods to predict the environment. No, it’s estimating the sum over all predictors of environment with a random subset of predictors of environment (which doesn’t work all too well, and that’s why hooking it up to the internet is not going to result in anything interesting happening, contrary to what has been ignorantly asserted all over this site). I should’ve phrased it differently, perhaps—like “Do you even know what “monte carlo” means as applied to AIXI?”.
It is completely irrelevant how human-invented Monte-Carlo solutions behave, when the subject is hooking up AIXI to a server.
edit2: to borrow from your example:
“Of course we haven’t discovered anything dangerously good at finding pi...”
“Of course we have, it’s called area of the circle. Do I need to download a Monte Carlo implementation from Github and run it… ”
“Do you even know what “monte carlo” means? It means it tries random points and checks if they’re in a circle. Even very stupid geometric methods do better.”
You appear to have posted this as a reply to the wrong comment. Also, you need to indent code 4 spaces and escape underscores in text mode with a \_.
On the topic, I don’t mind if you post tirades against people posting false information (I personally flipped the bozo bit on private_messaging a long time ago). But you should probably keep it short. A few paragraphs would be more effective than two pages. And there’s no need for lengthy apologies.
Yes, I am sorry for the mistakes, not sure if I can rectify them. I see now about protecting special characters, I will try to comply.
I am sorry, I have some impairments and it is hard to make everything come out right.
Thank you for your help
As a data point, I skipped more_wrong’s comment when I first saw it (partly) because of its length, and only changed my mind because paper-machine & Lumifer made it sound interesting.
“Good, I can feel your anger. … Strike me down with all of your hatred and your journey towards the dark side will be complete!”
It’s so… *sniff*… beautiful~
Once you enter the domain of practical software you’ve entered the domain of Narrow AI, where the algorithm designer has not merely specified a goal but a method as well, thus getting us out of dangerous territory entirely.
On rereading this I feel I should vote myself down if I knew how, it seems a little over the top.
Let me post about my emotional state since this is a rationality discussion and if we can’t deconstruct our emotional impulses and understand them we are pretty doomed to remaining irrational.
I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation; I am very sensitive to this type of bullying, more so when directed at others than at myself, since, due to freakish test scores and so on as a child, I feel fairly secure about my intellectual abilities, but I know how bad people feel when others consider them stupid. I have a reaction to leap to the defense of the victim; however, I put this down to a local custom of a friendly-ribbing type of culture or something and tried not to jump on it.
Then I saw that private_messaging seemed to be pretending to be an authority on Monte Carlo methods while spreading false information about them, either out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathy reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. “If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by their rules!” And that led to the result you see, a savage mocking.
I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been in flagrant violation of rationalist ethics: in particular, presenting himself as, if not an expert, at least someone with the moral authority to diss someone else for their ignorance on an important topic, and then following that with false and misleading information about MC methods. This seemed like an action with a strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.
If I posed as an authority and went around telling people Bayesian inference was a bad methodology that was basically just “a lot of random guesses”, and that “even a very stupid evolutionary program” would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I do something like that I would actually hope for chastisement or correction from the community, to help me learn better.
Also it seemed like it might make readers think badly of those who rely heavily on Monte Carlo Methods. “Oh those idiots, using those stupid methods, why don’t they switch to evolutionary algorithms”. I’m not a big MC user but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.
So I went off a little heavily on private_messaging, who I am sure is a good person at heart.
Now, I acted emotionally there, but my hope is that in the big Searle’s Room that constitutes our forum, I managed to pass a message that (through no virtue of my own) might ultimately improve the course of our discourse.
I apologize to anyone who got emotionally hurt by my tirade.
I have not the slightest idea what happened, but your revised response seems extraordinarily mature for an internet comment, so yeah.
To think of the good an E-Prime-style ban on “is” could do here....
How is an AIXI to infer that it is in a box, when it cannot conceive of its own existence?
How is it supposed to talk its way out when it cannot talk?
For AI to be dangerous in the way MIRI supposes, it seems to need the characteristics of more than one kind of machine: the eloquence of a Strong-AI Turing Test passer combined with an AIXI’s relentless pursuit of an arbitrary goal.
These different models need to be shown to be compatible; calling them both AI is not enough.