Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.
Well, who can blame them?
Seriously, FYI (where perhaps the Y stands for “Yudkowsky’s”): that document (or a similar one) really rubbed me the wrong way the first time I read it. It just smacked of “only the cool kids can play with us”. I realize that’s probably because I don’t run into very many people who think they can easily solve FAI, whereas Eliezer runs into them constantly; but still.
It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added—“We will probably, but not definitely, end up working in Java”.
...I don’t know if that’s a bad joke or a hint that the writer isn’t being serious. Well, if it’s a joke, it’s bad and not funny. Now I’ll have nightmares of the best programmers Planet Earth could field failing to write an FAI because they used Java of all things.
This was written circa 2002 when Java was at least worthy of consideration compared to the other options out there.
Yup. The logic at the time went something like, “I want something that will be reasonably fast, scale to lots of processors, and run in a tight sandbox, that has been thoroughly debugged with enterprise-scale muscle behind it, and which above all is not C++; and in a few years (note: HAH!) when we start coding, Java will probably be it.” There were lots of better-designed languages out there, but they didn’t have the promise of enterprise-scale muscle behind their implementation of things like parallelism.
Also at that time, I was thinking in terms of a much larger eventual codebase, and was much more desperate to use something that wasn’t C++. Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.
Mostly in that era there weren’t any good choices, so far as I knew then. Ben Goertzel, who was trying to scale a large AI codebase, was working in a mix of C/C++ and a custom language running on top of C/C++ (I forget which), to which I think he had transitioned from either Java or something else, because nothing else was fast enough or handled parallelism correctly. Lisp, he said at that time, would have been way too slow.
I’d rather the AI have a very low probability of overwriting its supergoal by way of a buffer overflow.
Proving no buffer overflows would be nothing next to the other formal verification you’d be doing (I hope).
Exactly—which is why the sentence sounded so odd.
Well, yes, Yudkowsky-2002 is supposed to sound odd to a modern LW reader.
I fully agree that C++ is much, much worse than Java. The wonder is that people still use it for major new projects today. At least there are better options than Java available now (I don’t know the 2002 state of the art that well).
If you got together an “above-genius-level” programming team, they could design and implement their own language while they were waiting for your FAI theory. Probably they would do it anyway on their own initiative. Programmers build languages all the time—a majority of today’s popular languages started as a master programmer’s free-time hobby. (Tellingly, Java is among the few that didn’t.)
A custom language built and maintained by a star team would be at least as good as any existing general-purpose one, because you could borrow whatever designs you liked and because programming language design is a relatively well-explored area (including such things as compiler design). And you could fit the design to the FAI project’s requirements, whereas choosing a pre-existing language means finding one that happens to match your requirements.
Incidentally, all the good things about Java—including the parallelism support—are actually properties of the JVM, not of Java the language; they’re best used from other languages that compile to the JVM. If you had said “we’ll probably run on the JVM”, that would have sounded much better than “we’ll probably write in Java”. Then you’d only have to contend with the CLR and LLVM fans :-)
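To make the point concrete, here’s a minimal sketch (the object name is made up, and this is just an illustration, not anything SIAI-specific) of a non-Java JVM language, Scala, driving the JVM’s own java.util.concurrent thread pool directly:

    import java.util.concurrent.{Executors, TimeUnit}

    // The thread pool comes from the JVM's standard library (java.util.concurrent),
    // not from the Java language, so any JVM language can call it directly.
    object JvmNotJava {
      def main(args: Array[String]): Unit = {
        val pool = Executors.newFixedThreadPool(4)
        (1 to 8).foreach { i =>
          pool.submit(new Runnable {
            def run(): Unit = println(s"task $i on ${Thread.currentThread.getName}")
          })
        }
        pool.shutdown()
        pool.awaitTermination(10, TimeUnit.SECONDS)
      }
    }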
I don’t think it will mostly be a coding problem. I think there’ll be some algorithms, potentially quite complicated ones, that one will wish to implement at high speed, preferably with reproducible results (even in the face of multithreading and locks and such). And there will be a problem of reflecting on that code, and having the AI prove things about that code. But mostly, I suspect that most of the human-shaped content of the AI will not be low-level code.
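To illustrate what “reproducible results in the face of multithreading” might mean at the code level, here’s a minimal sketch (in Scala; the function name is made up): floating-point addition isn’t associative, so a naive parallel sum can vary from run to run, but if the chunk boundaries and the combine order are fixed, the answer no longer depends on thread scheduling.

    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration.Duration
    import ExecutionContext.Implicits.global

    // Hypothetical illustration: fixed chunk boundaries, fixed combine order.
    def deterministicParSum(xs: Vector[Double], chunks: Int = 8): Double = {
      val chunkSize = math.max(1, xs.length / chunks)
      // Each chunk is summed left-to-right on its own task.
      val partials = xs.grouped(chunkSize).toVector.map(c => Future(c.sum))
      // Future.sequence preserves the original chunk order, so the final
      // combine is identical on every run, whatever order the threads finish in.
      Await.result(Future.sequence(partials), Duration.Inf).sum
    }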
How’s the JVM on concurrency these days? My loose impression was that it wasn’t actually all that hot.
I think it’s pretty fair to say that no language or runtime is that great on concurrency today. Coming up with a better way to program for many-core machines is probably the major area of research in language design today and there doesn’t appear to be a consensus on the best approach yet.
I think a case could be made that the best problem a genius-level programmer could devote themselves to right now is how to effectively program for many-core architectures.
My impression is that the JVM is worse at concurrency than every other approach that’s been tried so far.
Haskell and other functional programming languages have many promising ideas but aren’t widely used in industry, AFAIK.
This presentation gives a good short overview of the current state of concurrency approaches.
Speaking of things that aren’t Java but run on the JVM, Scala is one such (really nice) language. It’s designed and implemented by one of the people behind the javac compiler, Martin Odersky. The combination of excellent support for concurrency and functional programming would make it my language of choice for anything that I would have used Java for previously, and it seems like it would be worth considering for AI programming as well.
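As a small, hedged illustration of that combination (the names here are made up; it’s just a sketch using Scala’s standard futures, not a recommendation of any particular library): two independent computations run concurrently and are composed with a for-comprehension as if they were ordinary values, with no explicit threads or locks.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object FutureSketch {
      def main(args: Array[String]): Unit = {
        // Both computations start concurrently on the default thread pool.
        val sumOfSquares = Future { (1L to 1000000L).map(x => x * x).sum }
        val plainSum     = Future { (1L to 1000000L).sum }

        // The for-comprehension composes the futures functionally.
        val combined = for (a <- sumOfSquares; b <- plainSum) yield a + b
        println(Await.result(combined, 10.seconds))
      }
    }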
I had the same thought—how incongruous! (Not that I’m necessarily particularly qualified to critique the choice, but it just sounded...inappropriate. Like describing a project to build a time machine and then solemnly announcing that the supplies would be purchased at Target.)
I assume, needless to say, that (at least) that part is no longer representative of Eliezer’s current thinking.
I can’t understand how it could ever have been part of his thinking. (Java was even worse years ago!)
Not relative to its competitors, surely. Many of them didn’t exist back then.
That’s true. But insofar as the requirements of the FAI project are objective and independent of PL development in the industry, they should be the main point of reference. Developing your own language is a viable alternative, and it was even more attractive years ago—that’s what I meant to imply.
It depends on whether you want to take advantage of resources like editors, IDEs, refactoring tools, lint tools—and a pool of developers.
Unless you have a very good reason to do so, inventing your own language is a large amount of work—and one of its main effects is to cut you off from the pool of other developers, making it harder to find other people to work on your project and restricting your choice of programming tools to ones you can roll for yourself.
Anecdotally, half the benefit of inventing your own language is cutting yourself off from the pool of other, inferior developers :-)
Remember that Eliezer’s assumption is that he’d be starting with a team of super-genius developers. They wouldn’t have a problem with rolling their own tools.
Well, it’s not that it’s impossible; it’s more that it drains energy from your project into building tools. If your project is enormous, that kind of expense might be justified. Or if you think you can make a language for your application domain that works much better than anything the world’s professional language designers have produced.
However, in most cases, these kinds of proposals are a recipe for disaster. You spend a lot of your project resources pointlessly reinventing the wheel in terms of lint, refactoring, editing, and code-generation technology—and you make it difficult for other developers to help you out. I think this sort of thing is only rarely a smart move.
It’s mentioned twice, so I doubt it’s a joke.
That’s if you want to be an FAI developer and on the Final Programming Team of the End of the World, not if you want to work for SIAI in any capacity whatsoever. If you’re writing to me, rather than to Anna, then yes, mentioning e.g. the International Math Olympiad will help to get my attention. (Though I’m certain the document does need updating—I haven’t looked at it myself in a long while.)
It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though. It mentions that if you want to help but aren’t a genius, sure, you can be a donor, or you can see if you get into a limited number of slots for non-genius programmers, but that’s it.
I’m also one of the people who’s been discouraged from the thought of being useful for SIAI by that document, though. (Fortunately, people have since given me the impression that I might be of some use after all. Submitted an application today.)
Anna, and in general the Vassarian lineage, are more effective cooperators than I am. The people I have the ability to cooperate with form a much more restricted set than those they can cooperate with.
I once had that impression too, almost certainly in part from SYWTBASAIP.
FWIW, I and almost everyone outside SIAI who I’ve discussed it with have had this misconception; in my case, SYWTBASAIP did support it.
SYWTBASAIP always makes me think of Reid Barton—which I imagine is probably quite a bit higher than EY meant to convey as a lower bound—so I know what you mean.