In this piece, we want to paint a picture of the possible benefits of AI, without ignoring the risks or shying away from radical visions.
Thanks for this piece! In my opinion you are still shying away from discussing radical (although quite plausible) visions. I expect the median good outcome from superintelligence involves everyone being mind uploaded / living in simulations experiencing things that are hard to imagine currently.
Even short of that, in the first year after a singularity, I would want to:
Use brain computer interfaces to play videogames / simulations that feel 100% real to all senses, but which are not constrained by physics.
Go to Hogwarts (in a 100% realistic simulation) and learn magic and make real (AI) friends with Ron and Hermione.
Visit ancient Greece or view all the most important events of history based on superhuman AI archeology and historical reconstruction.
Take medication that makes you always feel wide awake, focused etc. with no side effects.
Engineer your body / use cybernetics to make yourself never have to eat, sleep, wash, etc. and be able to jump very high, run very fast, climb up walls, etc.
Use AI as the best teacher ever to learn maths, physics and every subject and language and musical instruments to super-expert level.
Visit other planets. Geoengineer them to have crazy landscapes and climates.
Play God and oversee the evolution of life on other planets.
Design buildings in new architectural styles and have AI build them.
Genetically modify cats to play catch.
Listen to new types of music, perfectly designed to sound good to you.
Design the biggest roller coaster ever and have AI build it.
Modify your brain to have better short term memory, eidetic memory, be able to calculate any arithmetic super fast, be super charismatic.
Bring back dinosaurs and create new creatures.
Ask AI for way better ideas for this list.
I expect UBI, curing aging, etc. to be solved within a few days of a friendly intelligence explosion.
Although I think we will also plausibly see a new type of scarcity. There is a limited amount of compute you can create using the materials and energy in the universe. And if in fact most humans are mind-uploaded / brains in vats living in simulations, we will have to divide this compute among ourselves in order to run the simulations. If you have twice as much compute, you can simulate your brain twice as fast (or run two of you in parallel?), and thus experience twice as much subjective time, and so live twice as long until the heat death of the universe.
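The proportionality here can be made explicit with a toy calculation (the function, and all the numbers in it, are illustrative assumptions, not claims from the post):

```python
# Toy model of the compute-scarcity point above: the subjective time an
# upload experiences scales linearly with its share of total compute.
# All quantities are in arbitrary illustrative units.

def subjective_years(compute_share: float,
                     total_compute_years: float,
                     cost_per_subjective_year: float) -> float:
    """Subjective years experienced before the allocated compute runs out.

    compute_share: fraction of universal compute allocated to this upload.
    total_compute_years: total compute available until heat death.
    cost_per_subjective_year: compute needed per subjective year of one brain.
    """
    return compute_share * total_compute_years / cost_per_subjective_year

# Doubling your compute share doubles your subjective lifespan
# (whether you run one brain twice as fast or two copies in parallel).
base = subjective_years(0.01, 1e12, 1.0)
doubled = subjective_years(0.02, 1e12, 1.0)
assert doubled == 2 * base
```

The point of the sketch is just that, under this model, subjective lifespan is zero-sum in compute share: there is no allocation under which everyone gets more.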
On a meta level, I think there’s a difference in “model style” between your comment, some of which seems to treat future advances as a grab-bag of desirable things, and our post, which tries to talk more about the general “gears” that might drive the future world and its goodness. There will be a real shift in how progress happens when humans are no longer in the loop, as we argue in this section. Coordination costs going down will be important for the entire economy, as we argue here (though we don’t discuss things as galaxy-brained as e.g. Wei Dai’s related post). The question of whether humans are happy self-actualising without unbounded adversity cuts across every specific cool thing that we might get to do in the glorious transhumanist utopia.
Thinking about the general gears here matters. First, because they’re, well, general (e.g. if humans were not happy self-actualising without unbounded adversity, suddenly the entire glorious transhumanist utopia seems less promising). Second, because I expect that incentives, feedback loops, resources, etc. will continue mattering. The world today is much wealthier and better off than before industrialisation, but the incentives / economics / politics / structures of the industrial world let you predict its effects better than if you just modelled it as “everything gets better” (even though that actually is a very good three-word summary). Of course, all the things that directly make industrialisation good really are a grab-bag list of desirable things (antibiotics! birth control! LessWrong!). But there’s structure behind that which is good to understand (mechanisation! economies of scale! science!). A lot of our post is meant to have the vibe of “here are some structural considerations, with near-future examples”, and less “here is the list of concrete things we’ll end up with”. Honestly, a lot of the reason we didn’t do the latter more is because it’s hard.
Your last paragraph, though, is very much in this more gears-level-y style, and a good point. It reminds me of Eliezer Yudkowsky’s recent mini-essay on scarcity.
Regarding:
In my opinion you are still shying away from discussing radical (although quite plausible) visions. I expect the median good outcome from superintelligence involves everyone being mind uploaded / living in simulations experiencing things that are hard to imagine currently. [emphasis added]
I agree there’s a high chance things end up very wild. I think there’s a lot of uncertainty about the timelines on which that would happen: I think Dyson spheres are >10% likely by 2040, but I wouldn’t put them at >90% likely by 2100, even conditioning on no radical stagnation scenario (which I’d say is >10% likely on its own). (I mention Dyson spheres because they seem more like a raw Kardashev-scale progress metric, whereas mind uploads seem more contingent on tech details, choices, and economics for whether they happen.)
I do think there’s value in discussing the intermediate steps between today and the more radical things. I generally expect progress to be not-ridiculously-unsmooth, so even if the intermediate steps are speedrun fairly quickly in calendar time, I expect us to go through a lot of them.
I think a lot of the things we discuss, like lowered coordination costs, AI being used to improve AI, and humans self-actualising, will continue to be important dynamics even into the very radical futures.
Re your specific list items:
Listen to new types of music, perfectly designed to sound good to you.
Design the biggest roller coaster ever and have AI build it.
Visit ancient Greece or view all the most important events of history based on superhuman AI archeology and historical reconstruction.
Bring back dinosaurs and create new creatures.
Genetically modify cats to play catch.
Design buildings in new architectural styles and have AI build them.
Use brain computer interfaces to play videogames / simulations that feel 100% real to all senses, but which are not constrained by physics.
Go to Hogwarts (in a 100% realistic simulation) and learn magic and make real (AI) friends with Ron and Hermione.
These examples all seem to be about entertainment or aesthetics. Entertainment and aesthetics are important to get right, and interesting. I wouldn’t be moved by any description of a future that centred around entertainment, though, and if the world is otherwise fine, I’m fairly sure there will be good entertainment.
To me, the one with the most important-seeming implications is the last one, because it bears on what social relationships exist and whether they are mostly human-human, AI-human, or AI-AI. We discuss why changes there are maybe risky in this section.
We discuss this, though very briefly, in this section.
Take medication that makes you always feel wide awake, focused etc. with no side effects.
Engineer your body / use cybernetics to make yourself never have to eat, sleep, wash, etc. and be able to jump very high, run very fast, climb up walls, etc.
Use AI as the best teacher ever to learn maths, physics and every subject and language and musical instruments to super-expert level.
Modify your brain to have better short term memory, eidetic memory, be able to calculate any arithmetic super fast, be super charismatic.
I think these are interesting and important! I think there isn’t yet a concrete story for why AI in particular enables these, apart from the general principle that sufficiently good AI will accelerate all technology. I think there’s unfortunately a chance that direct benefits to human biology lag other AI effects by a lot, because they might face big hurdles due to regulation and/or getting the real-world data the AI needs. (Though also, humans are willing to pay a lot for health, and rationally should pay a lot for cognitive benefits, so high demand might make up for this).
I think the general theme of having the AIs help us make more use of AIs is important! We talk about it in general terms in the “AI is the ultimate meta-technology” section.