There has been enough discussion about timelines that it doesn’t make sense to provide evidence about them in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence in either direction appears, but I don’t think it is fair to expect a post like this to also include evidence about timelines.
The post is phrased as “do you think it’s a good idea to have kids given timelines?”. I’ve said why I’m not convinced timelines should be relevant to having kids. I think if people are getting their views by copying Eliezer Yudkowsky, or by copying people who copy his views (which I’m not sure OP is doing), then they should get better epistemology.
I didn’t provide any evidence because I didn’t make any claim (about timelines or otherwise). I’m trying to form my views by asking on LessWrong, and the response I get amounts to “You have no right to ask this”.
I quoted Yudkowsky because he asks a related question (whether you agree with his assessment or not).
“I’m not convinced timelines should be relevant to having kids”
Thanks, this looks more like an answer.
The post’s starting point is “how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.)”. You don’t need concrete high-p-of-doom timelines for that, or even to expect AGI at all. They are not necessary for “potential international conflict”, for example.
Oh, I thought this was mainly about x-risk, especially given the Yudkowsky reference. On the other points, I don’t think the picture changes much either. If you predict the economy will have lots of AI in the future, then you can give your child an advantage by training them in relevant skills. Also, many jobs, such as service jobs, are likely to stick around: there are lots of things AI has trouble with, or that people generally prefer humans to do. AI would also increase material productivity, which would be expected to decrease the cost of living. See Yudkowsky’s post on AI unemployment.
Regarding international conflict, I haven’t seen a convincing model laid out for how AI would make international conflict worse. Drone warfare is a possibility, but it would tend to concentrate military power in technologically advanced countries such as Taiwan, the UK, the USA, and Israel. I don’t know where OP lives, but I don’t see how it would make things worse for USA/UK children. Drones would also be expected to have a better civilian casualty ratio than other methods like conventional explosives, nukes, or bio-weapons.
For example, US-China conflict is fueled in part by the AI race dynamics.
Thanks. What are the things that AI will, in 10, 20 or 30 years, have “trouble with”, and what are the “relevant skills” to train your kids in?
Relevant skills for an AI economy would include mathematics, programming, ML, web development, etc.
It’s hard to extrapolate out that far, but AI still has a lot of trouble with robotics (e.g. we don’t have good dish-washing household robots), so there will probably be e.g. construction jobs for a while. AI is helpful for programming, but using AI to program still relies on a lot of human support; I doubt programming will be entirely automated in 30 years. AI tends to have trouble with contextualized, embodied/embedded problems; it’s better at decontextualized, schoolwork-like problems. For example, if you’re doing sales, you need to manage a set of relationships whose data is gathered across many contexts and mostly isn’t recorded, and AI will have more trouble parsing that context into something a transformer can operate on and respond well to. Self-driving is an example of an embedded, though low-context, problem, and progress on it has been slower than expected, although with all the driving data collected from electric cars it may be possible to train a transformer to imitate human drivers.