AGI is the sweetest, most interesting, most exciting challenge in the world.
We usually concede this point, but I don’t even think it’s true. Of course, even if I’m right, maybe we shouldn’t push in this direction in dialogue, because it would set the bad precedent of not putting ethics above coolness (and sooner or later something cool will be unethical). But let me discuss it anyway.
Of course building AGI is very exciting, and it incentivizes some good problem-solving. But doing it through Deep Learning, the way OpenAI does, has an indelible undercurrent of “we don’t exactly know what’s going on, we’re just stirring the linear algebra pile”. That can already be an incredibly interesting engineering problem, and it’s not like you don’t need a lot of knowledge and good intuitions to make these hacky things work. But I’m sure the aesthetic predispositions of many (especially the more mathematically oriented) line up way better with “actually understanding the thing”. From this perspective, Alignment, Deep Learning Theory, Decision Theory, understanding value formation, etc. feel like fundamentally more interesting intellectual challenges. I share this feeling, and I think many other people do. A lot of people have been spoiled by the niceness of math, and/or can’t stand the scientific shallowness of ML developments.
At the end of the day, it has to work. And you need scale. Someone has to pay for larger-scale research.
The best way to achieve the understanding you seek is to first expand the AI industry to hundreds of thousands of people, a trillion dollars in annual R&D, and a world where everything has a TPU in it.
Note that, historically, things like a detailed understanding of aerodynamics were achieved many decades after exploitation. The original aviation pioneers were long dead by the time humans built computers large enough to model aerodynamics. Humans had already built millions of aircraft and optimized all the way to the SR-71.