Maybe I got carried away with the whole “everything overhang” idea.
While I do think fast vs slow takeoff is an important variable that determines how safe a singularity is, it’s far from the only thing that matters.
If you were looking at our world today and asking "what obvious inefficiencies will an AGI exploit?", there is probably a lot of lower-hanging fruit (nuclear power, genetic engineering, zoning) that you would point to before getting to "we're not building chip fabs as fast as physically possible".
My actual views are probably closest to d/acc: there are a wide variety of directions we can choose when researching new technology, and we ought to focus on the ones that make the world safer.
I do think that creating new obvious inefficiencies is a bad idea. For example, if we were to sustain a cap of 10**26 FLOPs on training runs for a decade or longer, hardware and algorithms would keep improving underneath the cap, making it really easy for a rogue actor/AI to suddenly build a much more powerful AI than anyone else in the world has.
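To see why a sustained cap creates an overhang, here's a back-of-envelope sketch. The numbers are mine, not from anywhere authoritative: I assume hardware price-performance doubles roughly every 2 years and pick an illustrative dollars-per-FLOP figure. The point is just the shape of the curve, not the specific values.

```python
# Illustrative assumption: the cap stays fixed while hardware gets cheaper,
# so a cap-sized training run becomes dramatically easier to afford.
CAP_FLOPS = 1e26
DOUBLING_YEARS = 2.0          # assumed hardware price-performance doubling time
COST_PER_FLOP_YEAR0 = 1e-17   # assumed $/FLOP at the start (illustrative only)

def cap_run_cost(years_elapsed: float) -> float:
    """Dollar cost of a cap-sized (1e26 FLOP) training run after `years_elapsed`."""
    cost_per_flop = COST_PER_FLOP_YEAR0 / 2 ** (years_elapsed / DOUBLING_YEARS)
    return CAP_FLOPS * cost_per_flop

for year in (0, 5, 10):
    print(f"year {year:2d}: cap-sized run costs ~${cap_run_cost(year):,.0f}")
```

Under these (made-up) assumptions the cost of a cap-sized run falls about 32x over the decade, from ~$1B to ~$30M, which is the sense in which a frozen cap hands a latecomer a cheap jump to the frontier.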
As to the specific case of Sam/$7T, I think it's largely aspirational, and to the extent that it happens, it was going to happen anyway. If I were given a specific counterfactual, like: TSMC is going to build 100 new fabs in the next 10 years, is it better that they be built in the USA or Taiwan? Then I would prefer they be built in the USA. If, on the other hand, the counterfactual was: the USA is going to invest $7T in AI in the next 10 years, would you prefer it all be spent on semiconductor fabs, or half on semiconductor fabs and half on researching controllable AI algorithms? Then I would prefer the latter.
Basically, my views are “don’t be an idiot”, but it’s possible to be an idiot both by arbitrarily banning things and by focusing on a single line of research to the exclusion of all others.
idk.