Why do you really like “math theory and teaching at a respected research university”? Is it for the money, the status, contributing to scientific progress in a “meaningful way”, or benefiting society (in a utilitarian sense)? Do you intrinsically like doing research or teaching, and if so do you care about what area of math you research or teach? Which of these would you be most willing to give up if you had to?
(One reason I ask these questions is that I’m not sure scientific/technological progress is generally a good thing at this point. The default Singularity scenario is probably a bad one, and most scientific/technological progress just brings the Singularity closer without making a positive scenario more likely. It would be nicer to have a few more decades of pre-Singularity time to solve FAI-related philosophical problems, for example.)
Depending on your answers, there might be something else you should do instead of hacking yourself to like “writing computer code to build widgets for customers”. Also, have you seen the previous LW discussions on career choice?
The default Singularity scenario is probably a bad one, and most scientific/technological progress just brings the Singularity closer without making a positive scenario more likely.
How much of modern science actually brings us closer to a potential intelligence-explosion-type Singularity? If such an event is likely to occur at all, it can’t depend on too many different technologies being in place.
So what technologies could actively be a problem?
Well, one obvious one is faster computers. The nightmare scenario is that we discover some clever little trick we’re currently missing for running a smart AI, and the first one we turn on already thinks hundreds of times faster than we do.
The next potentially really bad set of technologies is nanotech. If the AI finds an easy way to get access to highly flexible nanotech based on methods we already have, then we’re more or less screwed. (This seems extremely unlikely to me; the vast majority of people working in nanotech keep emphasizing how difficult any sort of constructor bot would be.) The next issue is advanced mathematical algorithms. The really bad case here is that an AI looks at the arXiv and quickly spots a set of papers which, when put together, give something like a general SAT solver that handles 3-SAT instances with n clauses in Kn^2 steps for some really small constant K. This is bad.
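To give a rough feel for why that would be so bad, here is a purely illustrative Python sketch (the clause encoding, the toy instance, and the constant K are all made up for demonstration; nothing here is a real fast SAT algorithm). It puts the brute-force 2^n baseline next to the step counts a hypothetical Kn^2 solver would need:

```python
# Purely illustrative sketch: the clause encoding, the toy instance, and the
# constant K are made up for demonstration. Nothing here is a real fast SAT
# algorithm; the point is only the gap between 2^n and K*n^2.
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Try all 2^n_vars assignments. Each clause is a tuple of signed
    integers, e.g. (1, -2, 3) means (x1 OR NOT x2 OR x3)."""
    for assignment in product([False, True], repeat=n_vars):
        def literal_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal_true(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# Tiny example: (x1 or x2 or x3) and (not x1 or x2 or not x3)
print(brute_force_3sat([(1, 2, 3), (-1, 2, -3)], n_vars=3))

# Step-count comparison at modest instance sizes n (variables and clauses
# taken to be of the same order), with an arbitrary small constant K.
K = 10
for n in (20, 40, 80):
    print(f"n={n}: brute force ~ {2.0**n:.2e} steps, hypothetical ~ {K * n**2:,} steps")
```

Brute force at n = 80 is already around 10^24 steps and hopelessly out of reach, while the hypothetical solver would need only tens of thousands; that gap is why stumbling onto such an algorithm would change the picture overnight.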
Similar remarks apply to an AI that finds a really fast quantum algorithm to effectively solve some NP-hard problem. Seriously, one of the worst possible ideas you can have in the world is to run an AI on a functioning quantum computer or give it access to one. Please don’t do this. I’m someone who considers fooming AI unlikely and who believes that BQP is a proper subset of NP, and even so this possibility makes me want to run out and scream at people like Roger Penrose who specifically want to see whether we need a quantum computer for intelligence to work. Let’s not test this.
But outside these four possibilities, the remaining issues are all more exotic and less likely. For example, I’m not worried that an AI will, right out of the box, figure out a way to make small wormholes and take advantage of closed timelike curves, simply because if it has that sort of tech level then it has already won.
So the vast majority of scientific research seems to do very little to help an AI go foom.
But it does seem that continued scientific research helps us understand which sorts of AI threats are more likely. For example, if we end up proving some very strong version of P != NP, then the clever-algorithms attack becomes much less likely. If BQP is strictly smaller than NP in a strong sense, and room-temperature strong nanotech turns out not to be doable, then most of the nasty foom scenarios go away. Similarly, improving computer security directly reduces the chance that an AI will manage to get access to things on the internet that it shouldn’t (although, again, basic sanity says anything like a strong AI should not be turned on with internet access, so if it gets internet access it has possibly already won; this only reduces the chance of a problem in one specific scenario that isn’t terribly likely, but it does reduce it).
Furthermore, scientific progress helps us deal with other existential risks, as well as get more of a handle on which existential risks are actually a problem. Astronomy, astrophysics, and astrobiology all help us get a better handle on whether the Great Filter lies behind us or in front of us and what the main causes are. It wouldn’t surprise me, for example, if in 30 or 40 years we have telescopes good enough not only to see Earth-like planets but also to tell whether they have had massive nuclear wars (suggesting that might be a possible major filtration event) or whether a planet’s surface is somehow covered with something like diamond (suggesting that at some point, possibly in the very far past, a serious nanotech disaster occurred). A better space program also helps deal with astronomical existential risks like asteroids.
So, overall, it seems that most science is neutral with respect to a Singularity. Of the remainder, some might increase the chance of a near-term Singularity and some might decrease it. A lot of science, though, helps deal with other existential risks and associated problems.
So the vast majority of scientific research seems to do very little to help an AI go foom.
I guess it wasn’t clear, but I also consider a Hansonian/Malthusian upload-driven Singularity to be bad.
So, overall, it seems that most science is neutral with respect to a Singularity.
The mechanism I had in mind was that most scientific/technological progress (p4wnc608’s field of machine vision, for example) has the effect of increasing the demand for computing hardware and growing the overall economy, which allows continued research and investment into more powerful computers, bringing both types of Singularity closer.
I can address the other questions later on, but I am actually interested in looking at complexity limits for FAI problems. My initial reaction to Yudkowsky’s post about coherent extrapolated volition was that such a thing is probably not efficiently computable, and even if it is, it is probably not stable (in the control-theory sense, i.e. a tiny error in CEV yields a disastrously large error in the eventual outcome). It isn’t as though there is just one time we have to produce a mathematically comprehensible description of volition. As computational resources grow, I imagine the problem of CEV will be faced many times in a row on rapidly larger scales, and I’m interested in knowing how a reasonable CEV computation scales asymptotically in the size of the projected future generation’s computing capabilities. Very naively, for example, suppose the number of processors N in some future AI system plays a major role in the mathematical structure of the description of my volition that I need to be prepared to hand to it to convince it to help me along (I know this is a shortsighted way of looking at it, but it illustrates the point). How does the calculation of CEV grow with N? If computing CEV in a mathematically comprehensible way grows faster than my computing power, then even if I can create the initial CEV, somewhere down the chain I won’t be able to. Similarly, if CEV is viewed as a set of control instructions, then above all it has to be stable. If mis-specifying CEV by a tiny percentage yields a dramatically bad outcome, then the whole problem of friendliness may itself be moot. It may be intrinsically unstable.
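To make that worry slightly more concrete, here is one very rough way to state it (the symbols C, R, O, and K are just illustrative notation of my own, not anything from the CEV material). Let C(N) be the cost of producing a mathematically comprehensible specification of volition for a system with N processors, and let R(N) be the computing resources we actually control at that stage. The chain of re-specifications stays feasible only if

$$ C(N) \le R(N) \quad \text{at every scale } N \text{ along the growth path,} $$

and it breaks past some threshold whenever C grows asymptotically faster than R. Likewise, writing O(v) for the outcome produced by handing over a volition specification v, stability in the control-theory sense would require a bound along the lines of

$$ \lVert O(v + \delta v) - O(v) \rVert \le K \, \lVert \delta v \rVert $$

for some modest constant K; if no such bound exists, a tiny mis-specification can produce an arbitrarily bad outcome, which is exactly the instability I am worried about.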
As far as “math teaching at a respected research university” goes, there are a few reasons. I have a strong aesthetic preference for both mathematics and the human light-bulb-going-off effect when students overcome mathematical difficulties, so the job feels very rewarding to me without needing to offer much in the way of money. I enjoy creating tools that can be used constructively to accomplish things, but I don’t enjoy being confined to a desk and needing to focus on a computer screen. The most rewarding experience I have found along these lines is developing novel applied mathematical tools that can then be leveraged by engineers and scientists who have less aversion to code writing. Moreover, I have found that I function much better in environments where there is a vigorous pace to publishing work. At slower places I tend to chameleonize and become slower myself, but at vibrant, fast-paced places I seem to fire on all cylinders, so to speak. This is why a “respected research university” is much more appealing than a community college or a smaller state-level college.
I’m very disillusioned with the incentive scheme of academia as a whole. Applied mathematics with an emphasis on theoretical tools is one domain where a lot of the negative aspects have been kept at bay. Unfortunately, it’s also a field where, statistically, it is very hard to get a reasonably stable job. As far as areas of math go, I greatly enjoy theoretical computer science, probability theory, and the continuous math that’s useful for signal processing (complex analysis, Fourier series, functional analysis, machine learning, etc.).
I had not seen the previous discussions on career choice and will look into them. But the main reason for this thread was that I think that, as far as getting a job and sustaining myself goes, I’m better off trying to hack my preferences so that I actually enjoy computer programming, instead of finding it loathsome as I do now. This is based on a non-trivial amount of interaction with people in the start-up community, in academia, and at government research labs.
In one of the previous discussions, I suggested taking a job as a database/web developer at a university department. I think you don’t actually need to hack yourself to enjoy computer programming to do this, because if you’re a fast programmer you can finish your assignments in a small fraction of the time that’s usually assigned, which leaves you plenty of time to do whatever else you want. So if you just want to get a job and sustain yourself, that seems like something you should consider.
But that advice doesn’t take into account your interest in FAI and “I have found that I function much better in environments where there is a vigorous pace to publishing work”. If you think you might have the potential to make progress in FAI-related research, you should check out whether that’s actually the case, and make further decisions based on that.
For one thing, I am not a very fast programmer. I only know Python, Matlab, and a tiny bit of C/C++. Most of the programming I do is rapid prototyping of scientific algorithms. The reason I hate that sort of thing is that I feel like I am just scanning the literature for any way to hack together an engineering solution that solves a problem in a glitzy way in the short term. Professors seem to need to do this because their careers rest on being able to attract attention to their work. Prototyping the state-of-the-art algorithms of your peers is an excellent way to do this, since you end up citing a peer’s research without needing to develop anything fundamentally new on your own. If you can envision a zany new data set, spend a small amount of money to collect the zany data, and have grad students or Mechanical Turkers annotate it for you, then you can be reasonably assured of cranking out “state of the art” performance on this zany data set just by leveraging any recent advances in machine learning. Add a little twist by throwing in an algorithm from some tangentially related field, and presto, you’ve got a main-event conference presentation that garners lots of media attention.
That cycle depresses me because it does not fundamentally lead to the generation of new knowledge or expertise. Machine learning research is a bit like a Chinese takeout menu. You pick a generic framework, a generic decision-function class, some generic bootstrapping/cross-validation scheme, etc., pull a lever, and out pops some new “state of the art” surveillance tool, or face recognition tool, or social network data mining tool. None of this gives us more knowledge in a fundamental sense, but it does pander to short-term commercial prospects.
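To make the “takeout menu” point concrete, here is a hedged sketch in Python using scikit-learn; the dataset, the preprocessing, the classifier, and the cross-validation scheme are all arbitrary stand-ins rather than anyone’s actual research code, but swapping any of them out is exactly the lever-pulling I have in mind:

```python
# Generic "takeout menu" pipeline: pick a benchmark dataset, a generic
# decision-function class, and a generic validation scheme, pull the lever,
# and out pops a number to report. All choices here are arbitrary stand-ins.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)             # generic benchmark data
model = make_pipeline(StandardScaler(), SVC())  # generic decision function
scores = cross_val_score(model, X, y, cv=5)     # generic cross-validation
print(f'"state of the art" accuracy: {scores.mean():.3f} +/- {scores.std():.3f}')
```

Substitute a zany newly annotated data set and a classifier borrowed from a tangentially related field, and the same few lines produce the headline number for the conference talk.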
Also, after working as a radar analyst at a government lab for two years, I don’t think that the suggestion of taking some kind of mindless programming day job just to fund my “research hobby” is actually viable for very many people. When I developed algorithms all day, it sapped my creativity, and it felt soul-crushingly terrible all day, every day. The literal requirement that I sit in front of a computer and type code just killed all motivation. I was very lucky if I was able just to read interesting books when I went home at night. The idea that I could do my work quickly and eke out little bits of time to “do research” seems pretty naive about what research actually requires. To be effective, you’ve got to explore, to read, to muck around at a whiteboard for two hours and be ready to pull your hair out over not quite getting the result you anticipated. I wouldn’t want to half-ass my passion and also half-ass my job. That would be the worst of both worlds.
As for FAI research, I feel that the rational thing to do is not to pursue it. Not because I am against it or uninterested, but because it is such a cloistered and closed-off field. As much as FAI researchers want to describe themselves as investing in long-term, high-risk ideas, they won’t do that for motivated potential researchers. There’s so little money in FAI research that pursuing it would be comparable to taking out a multi-hundred-thousand-dollar loan to self-fund a graduate degree in law from an obscure, rural university. Law degrees do not allow you to feed yourself unless you leave the field of law and work very hard to gain skills in a different area, or you go to the best law schools in the country and ride the prestige, usually still into non-law jobs.
This is why I think the self-hacking is necessary. If I work for a startup company, a research lab, a government research outfit, etc., then I am only going to be paid to write computer code. Since tenure-track faculty jobs are diminishing so rapidly, even being at a prestigious university does not give you much of a chance of obtaining one. If you study science in grad school and you want to earn more than $30,000 per year, your primary job will most likely be writing computer code (or you can leave science entirely and do scummy things like corporate finance or consulting, but my aversion to those is so large that I can effectively ignore them as options).