(Wherein I seek advice on what may be a fairly important decision.)
Within the next week, I’ll most likely be offered a summer job where the primary project will be porting a space weather modeling group’s simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don’t take the job, the group’s efforts to take advantage of GPU computing will likely be delayed by another year or two. This would be a valuable educational opportunity for me in terms of learning about scientific computing and gaining general programming/design skills; as I hope to start contributing to FAI research within 5-10 years, this has potentially big instrumental value.
In “Why We Need Friendly AI”, Eliezer discussed Moore’s Law as a source of existential risk:

Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.
Due to the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate models of space weather, successful completion of this summer project will probably result in a nontrivial increase in demand for GPUs. It seems that the next best use of my time this summer would be to work full time on the expression-simplification abilities of a computer algebra system.
Given all this information and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group, or not? (To avoid anchoring on other people’s opinions, I’m hoping to get input from at least a couple of LW readers before mentioning the tentative conclusion I’ve reached.)
ETA: I finally got an e-mail response from the research group’s point of contact and she said all their student slots have been taken up for this summer, so that basically takes care of the decision problem. But I might be faced with a similar choice next summer, so I’d still like to hear thoughts on this.
The amount you could slow down Moore’s Law by any strategy is minuscule compared to the amount you can contribute to FAI progress if you choose. It’s like feeling guilty over not recycling a paper cup, when you’re planning to become a lobbyist for an environmentalist group later.
Uninformed opinion: space weather modelling doesn’t seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you’re worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.
I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore’s Law. As long as the one you’re working for doesn’t seem to have a truly revolutionary advantage over the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move.
(Full disclosure: I’m an SIAI Visiting Fellow so they’re paying my upkeep right now.)
Personally trying to slow Moore’s Law down is the kind of foolishness that Eliezer seems to inspire in young people...

Do you mean that he actively seeks to encourage young people to try and slow Moore’s Law, or that this is an unintentional consequence of his writings on AI risk topics?
I’m pretty sure that Roko means the second. If this idea got mentioned to Eliezer I’m pretty sure he’d point out the minimal impact that any single human can have on this, even before one gets to whether or not it is a good idea.
If you get an opportunity like that, take it. It’s one thing to gain emotional comfort from believing fantasies about computers with magical powers, but when fantasy is being used as a reason to close off a real-life opportunity, something is badly wrong.