I can address the other questions later on, but I am actually interested in looking into complexity limits for FAI problems. My initial reaction to Yudkowsky’s post about coherent extrapolated volition was that such a thing is probably not efficiently computable, and even if it is, it is probably not stable (in the control-theory sense, i.e., a tiny error in CEV yields a disastrously large error in the eventual outcome). It isn’t as if there is just one time at which we need a mathematically comprehensible description of volition. As computational resources grow, I imagine the problem of CEV will be faced many times in succession, at rapidly increasing scales, and I’m interested in how a reasonable CEV computation scales asymptotically with the computing capacity of projected future generations. Very naively, for example, suppose the number of processors N in some future AI system plays a major role in the mathematical structure of the description of my volition that I need to be prepared to hand to it to convince it to help me along (I know this is a shortsighted way of looking at it, but it illustrates the point). How does the calculation of CEV grow with N? If computing the CEV in a mathematically comprehensible way grows faster than my compute power, then even if I can produce the initial CEV, somewhere down the chain I won’t be able to. Similarly, if CEV is viewed as a set of control instructions, then above all it has to be stable. If mis-specifying CEV by a tiny percentage yields a dramatically bad outcome, then the whole problem of friendliness may be moot: the system may be intrinsically unstable.
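To make those two worries concrete, here is a rough formalization. The map F, the error term, and the cost functions C and P are my own notation, not anything taken from the CEV write-up:

```latex
% A hedged sketch of the two worries, in my own notation (nothing here
% comes from the CEV write-up itself).
%
% Let F map a volition specification v to the eventual outcome, and let
% \delta be a small specification error.

% Stability (in the control-theory sense) would require a
% Lipschitz-type bound with a modest constant K:
\[
  \lVert F(v + \delta) - F(v) \rVert \;\le\; K \,\lVert \delta \rVert .
\]

% My worry is that F instead amplifies errors the way a chaotic system
% does, with some positive exponent \lambda over a time horizon t:
\[
  \lVert F(v + \delta) - F(v) \rVert \;\sim\; e^{\lambda t}\,\lVert \delta \rVert ,
  \qquad \lambda > 0 .
\]

% Separately, write C(N) for the cost of recomputing a comprehensible
% CEV for a system with N processors, and P(N) for the compute I have
% available at that stage. The chain of re-specifications stays
% feasible only if
\[
  C(N) \;=\; O\bigl(P(N)\bigr) \quad \text{as } N \to \infty .
\]
```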
As far as “math teaching at a respected research university” goes, there are a few reasons. I have a strong aesthetic preference for mathematics, and for the light-bulb moment when students overcome mathematical difficulties, so the job feels very rewarding to me without needing to pay me much. I enjoy creating tools that can be used constructively to accomplish things, but I don’t enjoy being confined to a desk and staring at a computer screen. The most rewarding experience I have found along these lines is developing novel applied-mathematical tools that can then be leveraged by engineers and scientists who have less aversion to writing code. Moreover, I have found that I function much better in environments where there is a vigorous pace to publishing work. At slower places, I tend to chameleonize and become slower myself, but at vibrant, fast-paced places, I seem to fire on all cylinders. This is why a “respected research university” is much more appealing than a community college or a smaller state-level college.
I’m very disillusioned with the incentive scheme of academia as a whole. Applied mathematics with an emphasis on theoretical tools is one domain where a lot of the negative aspects have been kept at bay. Unfortunately, it’s also a field where, statistically, it is very hard to get a reasonably stable job. As far as areas of math go, I greatly enjoy theoretical computer science, probability theory, and the continuous math that’s useful for signal processing (complex analysis, Fourier series, functional analysis, machine learning, etc.).
I had not seen the previous post on career choice and will look into it. But the main reason for this thread is that, as far as getting a job and sustaining myself goes, I think I’m better off trying to hack my preferences so that I actually enjoy computer programming, instead of finding it loathsome as I do now. This is based on a non-trivial amount of interaction with people in the start-up community, in academia, and at government research labs.
In one of the previous discussions, I suggested taking a job as a database/web developer at a university department. I don’t think you actually need to hack yourself into enjoying computer programming to do this, because if you’re a fast programmer you can finish your assignments in a small fraction of the time usually allotted, which leaves you plenty of time for whatever else you want. So if you just want to get a job and sustain yourself, that seems like something you should consider.
But that advice doesn’t take into account your interest in FAI or your remark that “I have found that I function much better in environments where there is a vigorous pace to publishing work”. If you think you might have the potential to make progress in FAI-related research, you should test whether that’s actually the case, and make further decisions based on that.
For one thing, I am not a very fast programmer. I only know Python, MATLAB, and a tiny bit of C/C++. Most of the programming I do is rapid prototyping of scientific algorithms. The reason I hate that sort of thing is that I feel like I am just scanning the literature for any way to hack together an engineering solution that solves a problem in a glitzy way in the short term. Professors seem to need to do this because their careers rest on attracting attention to their work. Prototyping the state-of-the-art algorithms of your peers is an excellent way to do this, since you end up citing a peer’s research without needing to develop anything fundamentally new on your own. If you can envision a zany new data set, spend a small amount of money to collect it, and have grad students or Mechanical Turkers annotate it for you, then you can be reasonably assured of cranking out “state of the art” performance on that data set just by leveraging recent advances in machine learning. Add a little twist by throwing in an algorithm from some tangentially related field, and presto: you’ve got a main-event conference presentation that garners lots of media attention.
That cycle depresses me because it does not fundamentally lead to the generation of new knowledge or expertise. Machine learning research is a bit like a Chinese takeout menu. You pick a generic framework, a generic decision function class, some generic bootstrapping / cross-validation scheme, etc., pull a lever, and out pops some new “state of the art” surveillance tool, or face recognition tool, or social network data mining tool. None of this gives us more knowledge in a fundamental sense, but it does pander to short-term commercial prospects.
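To illustrate how generic the recipe is, here is a minimal sketch, assuming scikit-learn and a synthetic data set standing in for the “zany” one; every named component is just one menu item and could be swapped for another without changing the workflow:

```python
# A deliberately generic "takeout menu" pipeline: pick a preprocessor,
# pick a decision function class, pick a cross-validation scheme, and
# pull the lever. Any component can be swapped without rethinking anything.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the zany, freshly annotated data set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Generic framework + generic decision function class.
pipeline = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

# Generic validation scheme; out pops a "state of the art" number.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"'state of the art' accuracy: {scores.mean():.2f}")
```

Swap the classifier for whatever model is fashionable this year and neither the pipeline nor the resulting paper changes in any fundamental way.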
Also, after working as a radar analyst at a government lab for two years, I don’t think the suggestion of taking some kind of mindless programming day job just to fund a “research hobby” is actually viable for very many people. When I developed algorithms all day, it sapped my creativity, and it felt soul-crushing all day, every day. The literal requirement that I sit in front of a computer and type code killed all motivation. I was lucky if I was able just to read interesting books when I went home at night. The idea that I could do my work quickly and eke out little bits of time to “do research” seems pretty naive about what research actually demands. To be effective, you’ve got to explore, to read, to muck around at a whiteboard for two hours and be ready to pull your hair out over not quite getting the result you anticipated. I wouldn’t want to half-ass my passion and also half-ass my job; that would be the worst of both worlds.
As for FAI research, I feel that the rational thing to do is not to pursue it. Not because I am against it or uninterested, but because it is such a cloistered, closed-off field. As much as FAI researchers want to describe themselves as investing in long-term, high-risk ideas, they won’t make that kind of investment in motivated potential researchers. There is so little money in FAI research that pursuing it would be comparable to taking out a multi-hundred-thousand-dollar loan to self-fund a law degree from an obscure, rural university. Law degrees do not allow you to feed yourself unless you leave the field of law and work very hard to gain skills in a different area, or you go to the best law schools in the country and ride the prestige, usually still into non-law jobs.
This is why I think the self-hacking is necessary. If I work for a startup, a research lab, a government lab, etc., then I am only going to be paid to write computer code. And since tenure-track faculty jobs are disappearing so rapidly, even being at a prestigious university does not give you much of a chance of obtaining one. If you study science in grad school and you want to earn more than $30,000 per year, your primary job will most likely be writing computer code (or you can leave science entirely and do scummy things like corporate finance or consulting, but my aversion to those is so large that I can effectively ignore them as options).