Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do. Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, and the basics of set theory and mathematical logic. I have “Naive Set Theory”, “Introduction to Mathematical Logic”, and “Gödel, Escher, Bach” sitting on my desk at the moment, and I am currently taking courses in theory of computation and intelligent systems (a combined AI/machine learning/data mining course). The areas I had planned to learn after the above are incompleteness/undecidability, model theory, and category theory. In terms of how my prospective advisor could affect things, he’s mostly interested in cognitive-science-based AI, with some side interest in theory of computation.
In that case, I think you’ll want to study mathematical logic, theory of computation, incompleteness/undecidability, and model theory, to improve your ability to contribute to the open problems that Eliezer thinks are most plausibly relevant to Friendly AI. Skimming our recent technical papers (definability of truth, robust cooperation, tiling agents) should also give you a sense of what you’d need to learn to contribute at the cutting edge.
A few years from now, I hope to have write-ups of a lot more open problems, including ones that don’t rely so heavily on mathematical logic.
Something closer to cognitive-science-based AI, which Paul Christiano and Andreas Stuhlmüller (and perhaps others) think is plausibly relevant to FAI, is concept learning. The idea is that this will be needed at some point for getting AIs to “do what I mean.” The September workshop participants spent some time working on this. You could email Stuhlmüller to ask for more details, preferably after reading the paper linked above.
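To make “concept learning” concrete, here is a toy Bayesian sketch in the style of Tenenbaum’s “number game”: keep a space of candidate concepts, score each by prior times likelihood of the observed examples (the size principle: smaller concepts explain positive examples better), and read off a posterior. The hypothesis space and numbers below are invented for illustration; this is not the model from the linked paper.

```haskell
-- Toy Bayesian concept learning ("number game" style).
-- A concept is a named set of integers with a prior weight.
data Hypothesis = Hypothesis
  { hName    :: String
  , hMembers :: [Int]    -- extension of the concept over 1..100
  , hPrior   :: Double
  }

-- An invented, illustrative hypothesis space.
hypotheses :: [Hypothesis]
hypotheses =
  [ Hypothesis "even"            [2,4..100]       0.5
  , Hypothesis "powers of 2"     [2,4,8,16,32,64] 0.3
  , Hypothesis "multiples of 10" [10,20..100]     0.2
  ]

-- Size principle: each positive example is drawn uniformly from the
-- concept's extension, so smaller consistent concepts score higher.
likelihood :: Hypothesis -> [Int] -> Double
likelihood h examples
  | all (`elem` hMembers h) examples =
      (1 / fromIntegral (length (hMembers h))) ^ length examples
  | otherwise = 0

-- Posterior over the hypothesis space, normalised.
posterior :: [Int] -> [(String, Double)]
posterior examples =
  let scores = [ (hName h, hPrior h * likelihood h examples)
               | h <- hypotheses ]
      z      = sum (map snd scores)
  in [ (n, s / z) | (n, s) <- scores ]

main :: IO ()
main = mapM_ print (posterior [2, 8, 64])
-- "powers of 2" dominates: it is the smallest hypothesis
-- consistent with all three examples.
```

The “do what I mean” connection is that an AI given a few examples of a concept should, like the model above, prefer the tightest natural generalisation rather than an arbitrary consistent one.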
Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I’ve (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I’m feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don’t fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love (except for the part where it’s a form of supervised learning, but that’s unlikely to be a problem).
I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?
Do you know which of the open problems MIRI is likely to attack first? I’d like to avoid duplication of effort, though I know with the unpredictability of mathematical insight that’s not always feasible.
UPDATE:
I just had a meeting with my supervisor, and he was pretty happy with all of the options I presented, so that won’t be a problem. An idea I had this morning, which I’m pretty excited about, is potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure.
> I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?
Three areas I would look into are distributed capability-based security systems (example: Amoeba), formally verified kernels (example: seL4), and formal verification of user programs (example: Singularity OS). Programming-language research isn’t really needed; Haskell is the language I would choose, for its practical and theoretical advantages, but there are other options too. Where the work would be needed is in integration: making the GHC compiler output proofs (Haskell is well suited to this, but there is not, to my knowledge, a complete framework for doing so), making the operating system / distributed environment runtime verify them, and, most important of all, choosing which invariants to enforce.
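As a small illustration of the capability discipline in Haskell: an abstract type whose constructor is not exported is unforgeable, so possession of a value of that type can stand in for authority. A minimal sketch of that idea; the module, policy, and function names are invented for illustration:

```haskell
-- Capability.hs: possession of a FileCap *is* the authority to read.
-- The constructor is not exported, so untrusted code cannot forge one.
module Capability (FileCap, grantRead, readViaCap) where

newtype FileCap = FileCap FilePath

-- Trusted entry point: consults a policy before minting a capability.
grantRead :: FilePath -> IO (Maybe FileCap)
grantRead path
  | path `elem` allowed = pure (Just (FileCap path))
  | otherwise           = pure Nothing
  where
    allowed = ["/var/app/data.txt"]  -- stand-in for a real policy

-- Untrusted code can exercise a capability it was handed, nothing more.
readViaCap :: FileCap -> IO String
readViaCap (FileCap path) = readFile path
```

Client code importing Capability can use a FileCap it was granted but cannot mint one; distributed systems like Amoeba get the same effect with sparse, cryptographically protected capability tokens instead of a type system.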
> better operating systems and/or better programming languages
I doubt this is worth pushing on now. If it’s useful, it’ll be useful when we’re closer to doing engineering rather than philosophy and math.
> Do you know which of the open problems MIRI is likely to attack first?
In the immediate future we’ll keep tackling problems addressed in our past workshops. Other than that, I’m not sure which problems we’ll tackle next. We’ll have to wait and see what comes of Eliezer’s other “open problems” write-ups, and which ideas workshop participants bring to the November and December workshops.
> potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure
The participants of our April workshop checked this, and after some time decided they could probably break Tarski, Gödel, and Löb with probabilistic reflection, but not the Halting Problem, despite the similarities in structure. You could ask (e.g.) Qiaochu if you want to know more.
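For reference, the reflection principle at issue, as I recall it from the definability-of-truth paper: there is a coherent probability assignment P, over a language containing its own probability symbol, satisfying, for every sentence φ and rationals a < b,

```latex
a < \mathbb{P}(\varphi) < b
  \;\Longrightarrow\;
  \mathbb{P}\!\left( a < \mathbb{P}(\ulcorner \varphi \urcorner) < b \right) = 1
```

If I remember right, the strict inequalities are essential (the closed-interval version is inconsistent), and the result is a definability theorem with no claim that P is computable, which is presumably why it can slip past Tarski, Gödel, and Löb while leaving the Halting Problem standing.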