You’re providing no evidence that superintelligence is likely in the next 30 years other than a Yudkowsky tweet. I expect that 30 years from now we will not have superintelligence (of the sort that can build the stack to run itself on, grow at a fast rate, take over the solar system, etc.).
There has been enough discussion about timelines that it doesn’t make sense to provide evidence about it in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence appears in either direction, but I don’t think it is fair to expect a post like this to also include evidence about timelines.
The post is phrased as “do you think it’s a good idea to have kids given timelines?”. I’ve said why I’m not convinced timelines should be relevant to having kids. I think that if people are getting their views by copying Eliezer Yudkowsky, or by copying people who copy his views (and I’m not sure whether OP is doing this), then they should get better epistemology.
I didn’t provide any evidence because I didn’t make any claim (about timelines or otherwise). I’m trying to form my views by asking on LessWrong, and the response I get is something like “You have no right to ask this”.
I quoted Yudkowsky because he asks a related question (whether or not you agree with his assessment).
“I’m not convinced timelines should be relevant to having kids”
Thanks, this looks more like an answer.
The post’s starting point is “how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.)”. You don’t need concrete high-p-of-doom timelines for that, or even to expect AGI at all; AGI is not necessary for “potential international conflict”, for example.
Oh, I thought this was mainly about x-risk, especially given the Yudkowsky reference. On the other points, I don’t think the picture changes much either. If you predict the economy will have lots of AI in the future, then you can give your child an advantage by training them in relevant skills. Also, many jobs, such as service jobs, are likely to still be around; there are lots of things AI has trouble with, or which humans generally prefer humans to do. AI would also increase material productivity, which would be expected to decrease the cost of living. See Yudkowsky’s post on AI unemployment.
Regarding international conflict, I haven’t seen a convincing model laid out for how AI would make international conflict worse. Drone warfare is a possibility, but it would tend to concentrate military power in technically advanced countries such as Taiwan, the UK, the USA, and Israel. I don’t know where OP lives, but I don’t see how it would make things worse for USA/UK children. Drones would also be expected to have a better civilian casualty ratio than other methods like conventional explosives, nukes, or bio-weapons.
For example, US-China conflict is fueled in part by AI race dynamics.
Thanks. What are the things that AI will, in 10, 20, or 30 years, still have “trouble with”, and what are the “relevant skills” to train your kids in?
Relevant skills for an AI economy would include mathematics, programming, ML, web development, etc.
It’s hard to extrapolate out that far, but AI still has a lot of trouble with robotics (e.g. we don’t have good dish-washing household robots), so there will probably be construction jobs, for example, for a while. AI is helpful for programming, but using AI to program relies on a lot of human support; I doubt programming will be entirely automated in 30 years. AI tends to have trouble with contextualized, embodied/embedded problems; it’s better at decontextualized, schoolwork-like problems. For example, if you’re doing sales, you need to manage a set of relationships whose data is gathered across many contexts and mostly not recorded, and AI will have more trouble parsing that context into something a transformer can operate on and respond well to. Self-driving is an example of an embedded, though low-context, problem, and progress on it has been slower than expected, although with all the data coming from electric cars it’s possible to train a transformer to imitate human drivers using that data.
Jessica, do you have a post or something that distills/summarizes your current views on this?
From 2018, the AI timelines section of mediangroup.org/research:

We assembled a list of major technical insights in the history of progress in AI and metadata on the discoverer(s) of each insight. Based on this dataset, we developed an interactive model that calculates the time it would take to reach the culmination of all AI research, based on a guess at what percentage of AI discoveries have been made.

Feasibility of Training an AGI using Deep Reinforcement Learning: A Very Rough Estimate

Several months ago, we were presented with a scenario for how artificial general intelligence (AGI) may be achieved in the near future. We found the approach surprising, so we attempted to produce a rough model to investigate its feasibility. The document presents the model and its conclusions. The usual cliches about the folly of trying to predict the future go without saying, and this shouldn’t be treated as a rigorous estimate, but hopefully it can give a loose, rough sense of some of the relevant quantities involved. The notebook and the data used for it can be found in the Median Group numbers GitHub repo if the reader is interested in using different quantities or changing the structure of the model.

(note: the second has a hard-to-estimate “real life vs AlphaGo” difficulty parameter that the result is somewhat dependent on, although this parameter can be adjusted in the model)
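As a rough illustration of the kind of calculation the first (insights-based) item performs, here is a minimal sketch. It is not Median Group’s actual model or data (those are in their interactive page and GitHub repo); it assumes a constant historical discovery rate and a user-supplied guess at the fraction of needed insights already found, and the numbers in the example are placeholders.

```python
# Toy sketch of an insights-based timeline estimate.
# Illustrative only: not Median Group's actual model or dataset
# (see mediangroup.org/research for the real interactive model).

def remaining_years(insights_so_far: int,
                    years_elapsed: float,
                    fraction_discovered: float) -> float:
    """Years until the remaining insights arrive, assuming a constant rate.

    insights_so_far     -- number of major insights catalogued to date
    years_elapsed       -- years over which those insights accumulated
    fraction_discovered -- guess at the fraction of all needed insights
                           that have already been made (0 < fraction <= 1)
    """
    rate = insights_so_far / years_elapsed              # insights per year
    total_needed = insights_so_far / fraction_discovered
    remaining = total_needed - insights_so_far
    return remaining / rate

# Example with placeholder numbers: ~180 insights over ~60 years, guessing
# that half of the needed insights have been found, gives ~60 more years.
print(round(remaining_years(180, 60, 0.5)))  # -> 60
```

The actual interactive model lets the reader vary the guessed percentage and other quantities rather than fixing them as this sketch does.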
I recommend the articles (not by me) “Why I am not an AI doomer” and “Diminishing Returns in Machine Learning”.
So your timelines are the same as in 2018?
Thanks for the article recommendations.