I think the bigger implication is that it would potentially create lots of room for innovation and progress and prosperity via means other than advancing AI capabilities, assuming the potential gains aren’t totally squandered and strangled by regulation.
If the discovery is real and the short to medium-term economic impact is massive, that might give people who are currently desperate to turn the Dial of Progress forward by any means an outlet other than pushing AI capabilities forward as fast as possible, and also just generally give people more slack and time to think and act marginally more sanely.
The question I’m most interested in right now is, conditional on this being a real scientific breakthrough in materials science and superconductivity, what are the biggest barriers and bottlenecks (regulatory, technical, economic, inputs) to actually making and scaling productive economic use of the new tech?
If the discovery is real and the short to medium-term economic impact is massive, that might give people who are currently desperate to turn the Dial of Progress forward by any means an outlet other than pushing AI capabilities forward as fast as possible, and also just generally give people more slack and time to think and act marginally more sanely.
Can you go into more detail about this? As far as I’m aware, portable mass-producible fMRI machines, alone, would shorten AGI timelines far more than the effect of a big economic transformation diverting attention away from AI (e.g. by contributing valuable layers to foundation models).
Well, one question I’m interested in and don’t know the answer to is, given that the discovery is real, how easy is it to actually get to cheap portable fMRI machines, actually mass produced and not just mass-producible in theory?
Also, people can already get a lot of fMRI data if they want to, I think? It’s not that expensive or inconvenient. So I’m skeptical that even a 10x or 100x increase in scale / quality / availability of fMRI data will have a particularly big or unique impact on AI or alignment research. Maybe you can build some kind of super-CFAR with them, and that leads to a bunch of alignment progress? But that seems kinda indirect, and something you could also do in some form if everyone is suddenly rich and prosperous and has lots of slack generally.
Oh, right, I should have mentioned that this is on the scale of a 10,000-100,000x increase in fMRI machines, such as one inside the notch of every smartphone, which is something that a ton of people have wanted to invest in for a very long time. The idea of a super-CFAR is less about extrapolating the 2010s CFAR upwards, and more about how CFAR’s entire existence was defined by the absence of fMRI saturation, making the fMRI-saturation scenario pretty far out-of-distribution from any historical precedent. I definitely agree that the effects of fMRI saturation would be contingent on how quickly LK shortens the timeline for miniaturizing fMRI machines, and you’d need even more time to get usable results out of a super-CFAR (or several).
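The claimed scale-up can be sanity-checked with rough numbers. Both figures below are my own assumptions (not from the thread), but they suggest the upper end of that range is roughly "one per smartphone":

```python
# Sanity check on the claimed 10,000-100,000x scale-up in fMRI machines.
# Both figures are rough illustrative assumptions, not measured data.
current_mri_scanners = 50_000    # assumed global installed base of MRI scanners
smartphones = 5_000_000_000      # assumed smartphones in active use worldwide

scale_up = smartphones / current_mri_scanners
print(f"Scale-up factor: {scale_up:,.0f}x")  # → Scale-up factor: 100,000x
```

So "one inside every smartphone" lands right at the top of the stated 10,000-100,000x range, under these assumptions.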
Also, I now see your point with things like slack and prosperity and other macro-scale societal/civilizational upheavals being larger factors (not to mention siphoning substantial investment dollars away from AI which currently doesn’t have many better alternatives).
The question I’m most interested in right now is, conditional on this being a real scientific breakthrough in materials science and superconductivity, what are the biggest barriers and bottlenecks (regulatory, technical, economic, inputs) to actually making and scaling productive economic use of the new tech?
Well, for starters, if it were only as difficult as graphene to manufacture in quantity, ambient-condition superconductors would not see use yet. You would need better robots to mass-manufacture them, current robots are too expensive, and you’re right back to needing a fairly powerful AGI before you can use it.
Your next problem: OK, you can save 6% or more on long-distance power transmission. But it costs an enormous amount of human labor to replace all your wires. See the above case. If mere humans have to do it, it could take 50 years.
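To put rough numbers on what that 6% is worth, here is a back-of-envelope sketch. The generation, loss, and price figures are my own assumptions for illustration, not from the thread:

```python
# Back-of-envelope: annual value of eliminating resistive transmission losses
# in a US-scale grid. All figures are rough illustrative assumptions.
annual_generation_twh = 4000   # assumed annual electricity generation, TWh
loss_fraction = 0.06           # assumed resistive T&D loss share (the "6%")
price_per_mwh = 50             # assumed average wholesale price, $/MWh

saved_twh = annual_generation_twh * loss_fraction
saved_dollars = saved_twh * 1e6 * price_per_mwh  # convert TWh -> MWh

print(f"Energy saved: {saved_twh:.0f} TWh/yr")          # → 240 TWh/yr
print(f"Rough value:  ${saved_dollars / 1e9:.0f}B/yr")  # → $12B/yr
```

A saving on the order of $12B/yr is real money, but spread over a multi-decade rewiring effort it is not obviously worth the labor cost, which is the point above.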
There’s the possibility of new forms of compute elements, such as new forms of transistor. The crippling problem here is that all technology is easiest to evolve from a pre-existing lineage, and it is very difficult to start fresh.
For example, I am sure you have read over the years how graphene or diamond might prove a superior substrate to silicon. Why don’t we see them used for our computer chips? The simplest reason is that you’d be starting over. The first ICs on such a process would have densities similar to those of the 1970s. The catch-up would go much faster than it did the first time, but it would still take years, probably decades, and meanwhile silicon keeps improving. See how OLEDs still have not replaced LCD-based displays despite being outright superior on most metrics.
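The catch-up argument can be made concrete with a toy race model. All parameters here are illustrative assumptions: the new substrate starts roughly a million-fold behind in transistor density but doubles every year, while mature silicon doubles only every 2.5 years:

```python
import math

# Toy race model: a fresh substrate catching up to incumbent silicon.
# All parameters are illustrative assumptions, not measured values.
initial_gap = 1e6   # assumed density ratio: silicon ahead ~10^6x (1970s vs. now)
t_new = 1.0         # assumed doubling time of the new process, years
t_silicon = 2.5     # assumed doubling time of mature silicon, years

# The gap closes at the difference in doubling rates (doublings per year).
relative_rate = 1 / t_new - 1 / t_silicon
years_to_parity = math.log2(initial_gap) / relative_rate

print(f"Years to density parity: {years_to_parity:.0f}")  # → 33
```

Even with the new process improving 2.5x faster than silicon, parity takes on the order of three decades under these assumptions, which is the "years, probably decades" claim in quantitative form.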
The same would apply to fundamentally superior superconductor-based ICs. At a minimum you’re starting over. Worst case, lithography processes may not work and you may need nanotechnology to efficiently construct these structures, if they are in fact superconducting in ambient conditions. To unlock nanotechnology you need to do a lot of experiments, and you need a lot of compute, and if you don’t want it to take 50 years you need some way to process all the data and choose the next experiment, and we’re right back to wanting ASI.
Finally, I might point out that while I sympathize with your desire (to not see everyone die from runaway superintelligence), it’s simply orthogonal. There are very few possible breakthroughs that would suddenly make AGI/ASI not worth investing in heavily. Breakthroughs like this one, which would potentially make AGI/ASI slightly cheaper to build and robots even better, actually create more potential ROI from investments in AGI. Honestly, I can’t think of any, except some science-fiction device that allows someone to receive data from our future and, with that data, avoid the futures where we all die.