I believe that true intrinsic motivation for learning is either very rare or requires a long, well-executed process of learning with positive feedback, so that the brain literally rewires itself to self-sustain motivation for cognitive activity (see Di Domenico & Ryan, 2017).
A lot of what I found reading over this study suggests that this is already the case, not just in humans but in other mammals as well. Or take Dörner's PSI-Theory (which I'm a proponent of). According to Dörner, uncertainty reduction and competence are the most important human drives, which must be satisfied on a regular basis; learning is one method of reducing uncertainty.
One might argue that in the “utopian” scenario you outlined, this need is constantly being satisfied, since we all welcome our AI overlords and therefore have no uncertainty. In that case, the competence drive would help us out.
Simplified, we can say that everything humans do has the end goal of satisfying their competence drive, and that satisfying any other drive (e.g. by eating, sleeping, working/earning money, social interaction, uncertainty reduction) is only a sub-goal of that. With all physiological needs taken care of by the AI overlords, the focus for satisfying the competence drive would shift more towards the "higher" drives (affiliation and uncertainty reduction) and direct displays of competence (e.g. through competition).
In the "Realistic futures of motivation and education" section, you mention some of the things humans could do in a utopian post-AGI scenario to satisfy their competence drive, with the frustration path being reserved for those unfortunate souls who cannot find any other way to do so. Those people already exist today, and it's possible that their number will increase post-AGI, but I don't think they will be the majority.
Just think about it in terms of mate-selection strategies. If providing resources disappears as a criterion for mate-selection because of AGI abundance, we will have to look for other criteria and that will naturally lead people to engage in one or several of the other activities you mentioned, including learning, as a means of increasing their sexual market value.
The question is what percentage of people will go the learning route. I expect this percentage to decrease relative to the present level because, presently, learning difficult disciplines and skills is still required for earning a stable living. Today, you cannot confidently go into body-building, or Counter-Strike gaming, or chess, because only a tiny minority of people earn money from that activity alone. For example, as far as I remember, only a few hundred top chess players earn enough prize money to sustain themselves: the others also need to work as chess teachers, or do something else besides, to get by. Same with fitness and body-building: only a minority of body-builders earn enough prize money; the others need to work as personal trainers, do fitness blogging on the side, etc. Same story for surfing, too.
When the economic factor goes away, I suspect that even more people will go into fitness, body-building, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning.
As I also noted below in the comments, the fact that few people will choose to try to learn SoTA science is not necessarily "bad". It just isn't compatible with Altman's emphasis. Remember that he brought up excellent AI tutoring in response to a question like "Why do you build this AI? What do we need it for?" I think there are many honest answers that would be more truthful than "Because AI will teach our kids very well and they will exercise their endless creativity". But maybe the public is less prepared for those more truthful answers; they are still too far outside the Overton window.
"When the economic factor goes away, I suspect that even more people will go into fitness, body-building, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning."
These activities aren't mutually exclusive, you know. Even if you make mastering eSports or surfing your main goal in life, you'll still engage in other activities in your "spare time", and for a lot of people, that will include gaining basic scientific knowledge. Sure, that will be "armchair science" for most of these people, but that's already the case today.
Those who study a scientific field in its entirety and become PhDs or professors today rarely do so out of financial interest. For example, a mathematics professor could earn much more money by working in the private sector. As such, I would expect the number of people in the world with the proficiency to become a mathematics professor to actually grow in a utopian post-AGI scenario. The same goes for other scientific fields as well.
Today, most academics are somewhere in between: for example, a doctor who has enough medical knowledge to practice as a surgeon, but not enough to teach medicine or write academic papers. These are likely the ones most influenced by extrinsic rewards, so let's take a closer look at what happens to those in-betweeners in your scenario.
With AGI surgeons, the demand for human surgeons would dramatically decrease, so there would be no financial incentive to become a better surgeon, or to practice surgery at all. Some of the existing surgeons would likely follow the academic path and still increase their medical knowledge out of intrinsic motivation. The remaining surgeons would turn their interest to, as you said, surfing, poker, eSports, etc., or other studies.
I think the most likely outcome for academia will be a strengthening of the interdisciplinary sciences. Right now, academics can expect the highest salary by studying a scientific discipline in depth and becoming an in-betweener. When that incentive structure disappears because there is little need for in-betweeners post-AGI, they will either study science more broadly, or focus on other activities and study armchair science in their free time.
In both cases, AI tutoring can have practical applications, so Altman wasn’t lying. Anyway, I think he is referring to current practical AI use cases, which do include AI tutoring, and not a post-AGI future. So overall, I don’t think that he is somehow trying to suppress an inconvenient truth that is outside of the Overton window, but it’s definitely worthwhile to think about AGI implications from this angle.
"When the economic factor goes away, I suspect that even more people will go into fitness, body-building, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning."
This strikes me as similar to the death of the darkroom. Yeah, computers do it better, cheaper, etc. However, almost no one who has ever worked seriously in a darkroom producing photography is happy that this basically doesn't exist anymore. The experience itself teaches a lot of skills in a very kinaesthetic and intuitive way (with saturation curves that are pretty forgiving, to boot).
But more than this, the simple pleasures of math, computer programming, and engineering skills are very worthwhile in themselves. However, in John Stuart Mill-style utilitarianism, you have to do a lot of work before you get to enjoy those pleasures. Will the tingle of the lightbulb coming on when learning PDEs just die out in the next 20 years, the way the darkroom has over the past 20 years? Meanwhile, maybe darkrooms will make a big comeback?
I guess people will always want to experience pleasures. Isn’t learning complex topics a uniquely human pleasure?