My current LK99 questions
So this morning I thought to myself, “Okay, now I will actually try to study the LK99 question, instead of betting based on nontechnical priors and market sentiment reckoning.” (My initial entry into the affray had been driven by people online presenting as confidently YES when the prediction markets were not confidently YES.) And then I thought to myself, “This LK99 issue seems complicated enough that it’d be worth doing an actual Bayesian calculation on it”—a rare thought; I don’t think I’ve done an actual explicit numerical Bayesian update in at least a year.
In the process of trying to set up an explicit calculation, I realized I felt very unsure about some critically important quantities, to the point where it no longer seemed worth trying to do the calculation with numbers. This is the System Working As Intended.
On July 30th, Danielle Fong said of this temperature-current-voltage graph,
‘Normally as current increases, voltage drop across a material increases. in a *superconductor*, voltage stays nearly constant, 0. that appears to be what’s happening here—up to a critical current. with higher currents available at lower temperatures deeply in the “fraud or superconduct” territory, imo. like you don’t get this by accident—you either faked it, or really found something.’
The graph Fong is talking about only appears in the initial paper put forth by Young-Wan Kwon, allegedly without authorization. A different graph, though similar, appears in Fig. 6 on p. 12 of the 6-author LK-endorsed paper rushed out in response.
Is it currently widely held by expert opinion, that this diagram has no obvious or likely explanation except “superconductivity” or “fraud”? If the authors discovered something weird that wasn’t a superconductor, or if they just hopefully measured over and over until they started getting some sort of measurement error, is there any known, any obvious way they could have gotten the same graph?
One person alleges an online rumor that poorly connected electrical leads can produce the same graph. Is that a conventional view?
Alternatively: If this material is a superconductor, have we seen what we expected to see? Is the diminishing current capacity with increased temperature usual? How does this alleged direct measurement of superconductivity square up with the current-story-as-I-understood-it that the material is only being very poorly synthesized, probably only in granules or gaps, and hence only detectable by looking for magnetic resistance / pinning?
This is my number-one question. Call it question 1-NO, because it’s the NO side of that number-one question: “How does the NO story explain this graph, and how prior-improbable or prior-likely was that story?”
Though I’d also like to know the 1-YES details: whether this looks like a high-prior-probability superconductivity graph; or a graph that requires a new kind of superconductivity, but one that’s theoretically straightforward given a central story; or if it looks like unspecified weird superconductivity, with there being no known theory that predicts a graph looking roughly like this.
What’s up with all the partial levitation videos? Possibilities I’m currently tracking:
2-NO-A: There’s something called “diamagnetism” which exists in other materials. The videos by LK and attempted replicators show the putative superconductor being repelled from the magnet, but not being locked in space relative to the magnet. Superconductors are supposed to exhibit Meissner pinning, and the failure of the material to be pinned to the magnet indicates that this isn’t a superconductor. (Sabine Hossenfelder seems to talk this way here. “I lost hope when I saw this video; this doesn’t look like the Meissner effect to me.”)
2-YES-Z: This is actually some totally expected thing called a “toroidal Meissner effect”, which is exactly what we should expect to see given the one-dimensional nature of the superconducting arrangement of quantum wells, and supposedly looks just like some previous video for carbon nanotubes. This was said by a Russian catgirl.
2-YES-Y: There is so much diamagnetism on display here that you wouldn’t see it without some superconductivity.
2-NO-B: There’s an unusual/unprecedented amount of diamagnetism here, but in a way that fits pretty well with “they found some magnetically weird material after screening for magnetic weirdness” better than “superconductivity”.
How much of a surprise was Sinead’s flat-band calculation? On the NO story, how likely or unlikely was the logical observation, “a result like this is calculated, for an actually-non-superconducting material, that was prescreened by the filters we already know about”?
Possibilities that are obvious and/or that I’ve seen claimed online:
3-NO-A: LK were deliberately seeking out materials like this, and the materials they screened were selected to all be the sort that would have a calculated flat band; therefore, arguendo, the result is not too surprising on NO.
3-NO-B: Most materials in the potential-superconductor class would have a calculation like this; it’s not a hard result to get by making weird unrealistic assumptions.
3-NO-C: LK found a weird material that isn’t a superconductor, and a weird material like this is much more likely to have a result like Sinead’s calculation be true about that material or some plausibly mistaken neighboring postulated structure.
3-NO-D: We understand so little about the origins of superconductivity that a calculation like this is not even probabilistic evidence about whether something is actually a superconductor. You could probably find a calculation like that for anything magnetically or electrically weird at all.
3-YES-Z: This is a fact we didn’t know from LK’s paper and LK didn’t do any prior selection against it; Sinead’s result is a previously unsuspected-given-NO calculation, that would be very unlikely as a calculation result unless the LK-postulated material was actually a superconductor. (This is how Andercot initially presented Sinead’s result, and there was a large prediction market jump immediately after; based on the subsequent crash, it looks like this was not the conventional reception after some review.)
3-YES-Y: LK already knew this fact, already did some calculation reflecting it, and this is how they found the material in the first place.
3-YES-X: LK didn’t know the Sinead-calculation, but the already-known weirdnesses of the material, and a very high false positive rate of calculations like these, mean the calculation wasn’t all that informative about YES and we don’t say “game won”.
Q4: Do any other results from the 6-person or journal-submitted LK papers stand out as having the property, “This is either superconductivity or fraud?” Like the stuff with thermal capacity or whatevs? Are there any claims—not replicated observations, just claims—that somebody wants to point to and say: “This seems quite improbable to get as an honest mistake, or as the result of other weirdness from this sort of material, absent superconductivity”?
Q5: When the early results hit, many online skeptics were scoffing about improbable or inconsistent results or points which supposedly showed the authors didn’t know physics. Are any standout nits like that still being picked with the 6-person or journal-submitted LK papers?
And finally, while it’s probably not that important, I would like to ask “What are we pretty sure we know and how do we think we know it?” with respect to the dramatic social story.
Supposedly, Lee and Kim have been researching LK-99 since 1999 as the last wish of their dying professor, albeit with a lot of breaks because they couldn’t get any funding and had to go on living their lives.
Q6: Is there any external verification, if we don’t trust LK’s word alone, that they’ve actually been working on this project for that long, and were on the tail of materials in this category from the start?
You may notice that I’m not trying to point to anything as a “prior”, as many layfolk incorrectly imagine to be the essence of Bayesianism. I don’t particularly think that pointing to any particular starting point and calling it a prior would be helpful in figuring things out, at this point.
What Bayesianism really has to say is more of a coherence constraint on how your beliefs should look; a reminder to ask questions like: “What’s the best explanation for Fig. 6 given NO, and how a-priori-probable would that explanation have sounded before we saw the graph?”
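In odds form (a standard identity; the product form assumes the evidence items are conditionally independent given YES or NO, which is exactly why correlated observations need to be chunked into combined stories first):

$$
\frac{P(\mathrm{YES} \mid E_1,\ldots,E_n)}{P(\mathrm{NO} \mid E_1,\ldots,E_n)}
= \frac{P(\mathrm{YES})}{P(\mathrm{NO})}
\times \prod_{i=1}^{n} \frac{P(E_i \mid \mathrm{YES})}{P(E_i \mid \mathrm{NO})}
$$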
Bayesianism, or rather probability-theoretic coherence, inspires me to say that what I’d hope to see laid out by experts or collaborating Internet citizens, is a list of...
(1) Every observation, claim, fact, diagram from a paper, etcetera, which is supposed to be confusing or surprising given some possible states of the world, or bear in any way as evidence upon our probable beliefs;
(2) An account of the best ways to explain each such point, from both a YES or NO standpoint, as somebody who honestly believed YES or NO would give their own best account of it.
...in enough detail that you could notice if any parts of the best combined YES story or NO story stood out as a priori improbable, themselves surprising, or incoherent between points.
Possibly if we had that whole list, it would still seem worth the trouble of trying to yank out numbers, and attempt an explicit calculation of how probable the best YES story and the best NO story seemed. Or possibly it would seem like, as so many people on the Internet claim—and the more forthright ones have bet—that it’s time to call it, in one direction or another.
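For concreteness, here is a minimal sketch of what such an explicit calculation could look like once the best YES and NO stories were laid out. Every number below is an illustrative placeholder, not an estimate:

```python
# Odds-form Bayesian update across independent-ish evidence items.
# All likelihoods are PLACEHOLDERS to show the mechanics, not actual
# estimates about LK-99.
prior_odds = 0.05  # P(YES) / P(NO) before any LK-99-specific evidence

# (P(evidence | best YES story), P(evidence | best NO story))
evidence = {
    "Fig. 6 I-V graph":          (0.50, 0.05),
    "partial levitation videos": (0.30, 0.20),
    "flat-band calculation":     (0.40, 0.30),
}

posterior_odds = prior_odds
for item, (p_yes, p_no) in evidence.items():
    posterior_odds *= p_yes / p_no

p = posterior_odds / (1 + posterior_odds)
print(f"P(YES) = {p:.2f}")  # 0.50 with these made-up numbers
```

The hard part, of course, is the per-item likelihoods; those are exactly the numbers that felt too unsure to write down above.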
Me: PhD in condensed matter experiment, brief read-through of the 3-person paper a few days ago, went and checked out the 6-person paper just now, read some other links as needed.
EDIT: If I’m reading their figure 4 correctly, I missed how impossible their magnetic susceptibility data was if not superconducting. My bad—I’ve sprinkled in some more edits as necessary for questions 1, 2, and 4.
Q1
Electrical leads can explain almost arbitrary phenomena. They measured resistivity with a four-point probe, where you flow a current between two outer wires and then check the voltage between two inner wires. If the inner wires for some reason don’t allow current to pass at small voltage (e.g. you accidentally made a Schottky diode, a real thing that sometimes happens), that can cause a spurious dip in resistivity.
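To make the geometry concrete, a minimal sketch of the bar-sample four-point formula with invented dimensions; the point is that ρ is inferred entirely from the inner-pair voltage, so a misbehaving inner contact corrupts ρ directly:

```python
# Four-point resistivity on a bar-shaped sample:
#   rho = (V / I) * (A / L)
# I flows between the outer probes; V is read across the inner pair.
# All dimensions below are invented for illustration.
I = 0.25      # A, current through outer probes
V = 2e-3      # V, voltage across inner probes
L = 0.5e-2    # m, spacing between inner probes
A = 1e-6      # m^2, cross-section (1 mm x 1 mm)

rho = (V / I) * (A / L)
print(f"rho = {rho:.1e} ohm*m")  # 1.6e-06 ohm*m here
# If a diode-like inner contact clamps V toward zero at small bias,
# the inferred rho drops toward zero with no superconductivity involved.
```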
The data isn’t particularly clean, and there are several ways it differs from what you’d expect. Here’s what a nice clean I-V curve looks like—symmetrical, continuous, flat almost to the limit of measurement below Tc, all that good stuff. Their I-V data is messier in every way. It’s not completely implausible, but if it’s real, why didn’t they take some better-looking data?
Yes, critical current changing with temperature is normal. In fact, if this is a superconductor, we can learn interesting things about it from the slope of critical current as a function of temperature, near the critical temperature (does it look like √(Tc − T)?).
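If replicators publish Ic(T) points, checking that functional form takes a few lines. A sketch with fabricated data, assuming the near-Tc form just mentioned:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fabricated Ic(T) data for illustration only.
T  = np.array([300.0, 320.0, 340.0, 360.0, 370.0])   # K
Ic = np.array([0.90, 0.77, 0.61, 0.40, 0.23])        # A

def ic_model(T, A, Tc):
    # Near-Tc form from the comment above: Ic = A * sqrt(Tc - T)
    return A * np.sqrt(np.clip(Tc - T, 0.0, None))

(A_fit, Tc_fit), _ = curve_fit(ic_model, T, Ic, p0=[0.1, 380.0])
print(f"A = {A_fit:.3f}, Tc = {Tc_fit:.1f} K")       # ~0.10, ~375 K here
```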
The resistivity and levitation might be possible if only a tiny fraction of the material is superconducting, so long as there are 2D superconducting planes (a pattern that seems likely in a high-temperature superconductor) that can percolate through the polycrystalline material. However, I don’t see how this would work with the apatite structure (also the Griffin DFT paper says the band structure is 3D, and the Cu-Pb chains of claimed importance are 1D), so I think it’s more likely you would indeed have to have a high fraction of superconductor.
EDIT: I think their magnetic susceptibility data for sample 2, if correct, implies that the sample is at least 20% superconductor.
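For readers wondering where a bound like that can come from, a rough sketch: a perfect diamagnet has SI volume susceptibility χ = −1, so the magnitude of the measured volume susceptibility crudely lower-bounds the superconducting fraction (ignoring demagnetization factors and unit-conversion pitfalls, both real concerns here; the input below is a placeholder chosen to illustrate the 20% figure, not the paper’s value):

```python
# Crude lower bound on superconducting volume fraction from DC
# susceptibility. A perfect diamagnet (full flux expulsion) has
# SI volume susceptibility chi = -1.
chi_measured = -0.20   # PLACEHOLDER SI volume susceptibility
chi_perfect  = -1.0

fraction = chi_measured / chi_perfect
print(f"superconducting fraction >= {fraction:.0%}")   # >= 20%
```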
Q2
The video shows a surprising amount of diamagnetism, but it doesn’t really look like the Meissner effect, and isn’t so strong that it’s impossible to explain without it (especially since most of the weight of the sample is resting on the magnet). Locking in place isn’t strictly necessary, but especially in an impure material we should see a lot of pinning that prevents it from easily rotating. Russian catgirls are often untrustworthy.
EDIT: Actually, if I’m reading this right, figure 4a actually is pretty impossible without superconductivity. Score one for YES-Y. Although the data looks very ugly (where’s the above-Tc region with no diamagnetism?).
The diamagnetism is still evidence that it’s a superconductor! It’s just even better evidence that it’s a non-superconducting strong diamagnet. The moderate difference they show between field-cooled and zero-field-cooled magnetization curves is likewise evidence either that it’s a superconductor, or evidence that it’s an ordered diamagnetic material.
Q3
Somewhere between YES-X and NO-C. First, DFT calculations are a good starting point but always require a grain of salt. Second, I think calling this a “flat band” is overhyping it—the density of states enhancement that makes flat bands so hype-worthy isn’t there as far as I can tell. Third, the hints of charge and spin waves in the material bear further study (if this is a superconductor they almost certainly are doing something interesting) but aren’t all that surprising given that you’ve jammed a bunch of heavy atoms together in a nontrivial crystal structure.
Q4
If getting it to conduct current at 0 resistance is as easy as they make it sound, they’ve probably replicated it a hundred times in three different ways. However, if it’s tricky to get it hooked up to show superconductivity (you have to put the leads on just right, in some hard-to-understand way, and usually it doesn’t look superconducting), then wishful thinking has a lot more room to operate.
EDIT: The extreme diamagnetism measurement for sample 2 could just be a calibration error on a sensitive measurement, requiring neither fraud nor superconductivity.
Q5
No idea. They clearly know physics. They’re not maximally clear about everything, and I think they sweep data issues under the rug, but not in a way that makes me more suspicious conditional on the data.
(PhD in condensed matter simulation) I agree with everything you wrote where I know enough (for readers: I don’t know anything about lead contacts and several other experimentally tricky points, so my agreement should not count for too much).
I’ll just add on the simulation side (Q3): this is what you would expect to see in a room-T superconductor, unless it relies on a completely new mechanism. But it is also something you see in a lot of materials that superconduct at 20 K or so, even in some where the superconducting phase is completely suppressed by magnetism or structural distortions or some other phase transition. In addition, DFT+U is a quick-and-dirty approach for this kind of problem, as fits the speed at which the preprint was put out. So from the simulation side: Bayesian evidence in favor, but very weak.
Is it possible that the polycrystalline structure is what determines superconductivity, and so this is a purity issue?
Could we perhaps find suitable alternative combinations of elements that are more inclined to form these ordered polycrystalline arrangements (superlattice)?
For example, finding alloys that have an atom A that attracts atom B more than it attracts atom A, and an atom B that attracts atom A more than it attracts atom B, where these particular elements are also good candidates for materials likely to exhibit superconductivity, and are heavy elements, so they’re likely to be more stable at room temperature and have higher Tc?
Or is this a dead-end way of trying to find a room temp superconductor?
Yeah, things are more complicated—atoms aren’t interchangeable, they have complicated effects on what the electrons are doing. If you want to understand, I can only recommend a series of textbooks (e.g. Marder’s Condensed matter physics, Phillips’ Advanced solid state physics, Tinkham’s Introduction to superconductivity).
I did a condensed matter experiment PhD, but high Tc is not my field and I haven’t spent much time thinking about this. [Edit: I didn’t see Charlie Steiner’s comment until I had written most of this comment and started editing for clarity. I think you can treat this as mostly independent.] Still, some thoughts on Q1, maybe starting with some useful references:
Bednorz and Müller, “Possible High Tc Superconductivity in the Ba—La—Cu—O System” (1986) was the inciting paper for the subsequent discoveries of high-Tc superconductors, notably the roughly simultaneous Wu, Ashburn, Torng, et al. (1987) and Cava et al. (1987) establishing superconductivity in YBCO with Tc > 90K. The first paper is more cautious in its claims on clearer evidence than the present papers. The latter two show dispositive measurements.
I might also recommend Ekin’s Experimental Techniques for Low-Temperature Measurements for background on how superconductors are usually measured (regardless of temperature). It discusses contacts, sample holders, instrumentation, procedures, and analysis for electrical transport measurements in superconductors, with a focus on critical current measurements. (I don’t think skimming it to find a graph or statement that you can apply to the present case will be very helpful, though.)
It’s well understood that a jump in the I-V curve does not imply superconductivity. Joule heating at high currents and thermal expansion, for example, can cause abrupt changes in contact resistance. I’m not sure exactly what it would take to reproduce that graph, but contact physics is gnarly enough that there’s probably a way, together with other experimental complications.
In the roughest sense, yes. The critical current density for a superconductor decreases as temperature increases and as magnetic field increases. Quantitatively, maybe. There are evidently other things going on, at least. (In another sense, the papers aren’t really what I’d expect a lab that thought they had a new superconductor to present. But I think that can be explained between reproducibility issues, the interpersonal issues rushing publication, and the fact that they’re somewhat outside the community that usually thinks about superconducting physics.)
In an impure sample you would see high residual resistance below Tc (I think the authors do, but I’m not confident at a glance particularly given paper quality problems) and broad transitions due to a spread of transition temperatures over superconducting domains (it seems to me that the authors see very sharp transitions, although data showing the width in temperature from the April paper is omitted from the arXiv papers). The worse these are, the more mundane explanations are viable (roughly speaking), which is part of why observing the Meissner effect is important. But this is a good question. To some extent people are “vibing” rather than getting a story straight.
Don’t the authors claim to have measured 0 resistivity (modulo measurement noise)?
From the six-author paper: “In the first region below red-arrow C (near 60 °C), equivalent to region F in the inset of Fig. 5, the resistivity with noise signals can be regarded as zero.” But by “noise signals” they don’t mean measurement noise (and their region C doesn’t look measurement-noise limited, unless their measurement apparatus is orders of magnitude less sensitive than it should be) but rather sample physics—later in that paragraph: “The presence of noise in the zero-resistivity region is often attributed to phonon vibrations at higher temperature.”
The other papers do seem to make that claim, but for example the April paper shows the same data but offset 0.02 Ohm-cm on the y-axis (that is, the April version of the plot [Fig. 6a] goes to zero just below “Tc”, but the six-author arXiv version [Fig. 5] doesn’t). So whatever’s going on there, it doesn’t look like they hooked up their probes and saw only the noise floor of their instrument.
Curated.
Although the LK-99 excitement has cooled off, this post stands as an excellent demonstration of why and how Bayesian reasoning is helpful: when faced with surprising or confusing phenomena, understanding how to partition your model of reality such that new evidence would provide the largest updates is quite valuable. Even if the questions you construct are themselves confused or based on invalid premises, they’re often confused in a much more legible way, such that domain experts can do a much better job of pointing to that and saying something like “actually, there’s a third alternative”, or “A wouldn’t imply B in any situation, so this provides no evidence”.
Me—Ph.D. in solid state materials chemistry. Been out of the game for a while. Less understanding of physics than some other commenters but have a different perspective that might be useful.
My first thought is that they have a minority phase; the samples are likely ~99% LK99 and ~1% unknown phase with weird properties. You can see it in the video; part of the specimen is levitating but a corner of it isn’t.
The first thing I would do is try to make versions with different amounts of hydrogen. Hydrogen is ubiquitous, diffuses readily into and out of most materials, and is invisible to most materials analysis techniques, but it can have a profound effect on a material’s properties. If you get different properties by annealing the sample under high-pressure hydrogen*, you’re on the right track.
The second thing I would do is try to make a bunch of variants with slightly different compositions to identify the minority phase.
*For safety’s sake you would typically use forming gas, a non-flammable mixture of hydrogen with nitrogen (or sometimes argon). Ammonia is also sometimes used but is more dangerous.
My background: educated amateur. I can design simple to not-quite-simple analog circuits and have taken ordinary but fiddly material property measurements with electronics test equipment and gotten industrially-useful results.
I’m not seeing it. With a bad enough setup, poor technique can do almost anything. I’m not seeing the authors as that awful, though. I don’t think they’re immune from mistakes, but I give low odds on the arbitrarily-awful end of mistakes.
You can model electrical mistakes as some mix of resistors and switches. Fiddly loose contacts are switches, actuated by forces. Those can be magnetic, thermal expansion, unknown gremlins, etc. So “critical magnetic field” could be “magnetic field adequate to move the thing”. Ditto temperature. But managing both problems at the same time in a way that looks like a plausible superconductor critical curve is… weird. The gremlins could be anything, but gremlins highly correlated with interesting properties demand explanation.
Materials with grains can have conducting and not-conducting regions. Those would likely have different thermal expansion behaviors. Complex oxides with grain boundaries are ripe for diode-like behavior. So you could have a fairly complex circuit with fairly complex temperature dependence.
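A toy version of that resistors-and-switches picture, with all numbers invented: one loose contact that stays seated until the force on it, taken here as proportional to current, unseats it:

```python
# Toy "resistors and switches" gremlin: a loose contact that stays
# seated (near-zero series resistance) until the force on it, here
# proportional to current, unseats it. All numbers invented.
def measured_voltage(I, R_contact=0.5, I_unseat=0.25):
    if abs(I) < I_unseat:
        return 0.0                 # contact seated: reads like zero resistance
    return I * R_contact           # contact unseated: abrupt voltage jump

for I in (0.10, 0.20, 0.30, 0.40):
    print(f"I = {I:.2f} A -> V = {measured_voltage(I):.3f} V")
```

One such switch fakes one sharp “critical current”; the implausible part is getting a family of such thresholds to track temperature the way a real critical curve does, which is where the two questions below come in.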
I think this piece basically comes down to two things:
Can you get this level of complex behavior out of a simple model? One curve I’d believe, but the multiple curves with the relationship between temperature and critical current don’t seem right. The level of mistake to produce this seems complicated, with very low base rate.
Did they manage to demonstrate resistivity low enough to rule out simple conduction in the zero-voltage regime? (For example, lower resistivity than copper by an order of magnitude.) The papers are remarkably short on details to this effect. They claim yes, but details are hard to come by. (Copper has resistivity ~ 1.7e-6 ohm*cm, they claim < 10^-10 in the 3-author paper for the thin-film sample, but details are in short supply.) Four point probe technique to measure the resistivity of copper in a bulk sample is remarkably challenging. You measure the resistivity of copper with thin films or long thin wires if you want good data. I’d love to see more here.
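A quick worked estimate (invented sample dimensions) of why a bulk four-point measurement struggles to distinguish copper from zero:

```python
# Expected four-point voltage if the sample merely had copper's
# resistivity. Dimensions are invented; the current matches the
# papers' hundreds-of-mA range.
rho = 1.7e-6   # ohm*cm, copper
L   = 0.5      # cm, inner-probe spacing
A   = 0.01     # cm^2, cross-section (1 mm x 1 mm)
I   = 0.25     # A

V = rho * L / A * I
print(f"V = {V*1e6:.0f} uV")   # ~21 uV: easy to bury in contact noise
# Resolving their claimed < 1e-10 ohm*cm on this geometry would mean
# reading ~1 nV, far below what a routine setup can do.
```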
If the noise floor doesn’t rule out copper, you can get the curves with adequately well chosen thermal and magnetic switches from loose contacts. But there are enough graphs that those errors have to be remarkably precisely targeted, if the graphs aren’t fraud.
Another thing I’d love to see on this front: multiple graphs of the same sort from the same sample (take it apart and put it back together), from different locations on the sample, from multiple samples. Bad measurement setups don’t repeat cleanly.
My question for the NO side: what does the schematic of the bad measurement look like? Where do you put the diodes? How do you manage the sharp transition out of the zero-resistance regime without arbitrarily-fine-tuned switches?
The field-cooled vs zero-field-cooled magnetization graph (1d in the 3-author paper, 4a in the 6-author paper). I’m far less confident in this than the above; I understand the physics much less well. I mostly mention it because it seems under-discussed from what I’ve seen on twitter and such. This is an extremely specific form of thermal/magnetic hysteresis that I don’t know of an alternate explanation for. I suspect this says more about my ignorance than anything else, but I’m surprised I haven’t seen a proposed explanation from the NO camp.
Reading this post was one of the triggers for me mostly exiting the Manifold market (at a loss) - the trading is getting more serious, and I’m out of my depth.
It is still fun to spectate, though.
I have some very basic questions about the figure on display. This is the first time I’ve looked at this matter.
Is this the DC current-voltage curve of a sample of the material in question, without other shenanigans?
Is the “M” phase ohmic, i.e. current is proportional to voltage? It seems so to me by squinting at the graph, because it’s 2-3 mV and 150-250 mA, but I’m not confident because it’s in log scale without a grid.
Is the “N” phase ohmic too?
Are then the I, J, K, L phases superconducting?
Is also M superconducting? If it’s a very short piece of metal, it seems realistic to me to get a resistance of 2 mV/150 mA = 13 mΩ (left M tip) without superconducting properties. So my impression is M = conductor, N = poor conductor, but I’m in doubt it could be M superconductor, N conductor.
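A quick numerical check of that reading, using endpoint values eyeballed from the log-scale plot (so treat them as very approximate):

```python
# Is the "M" segment ohmic? Compare V/I at its two ends.
# Values are eyeballed from a log-scale plot, so very approximate.
V1, I1 = 2e-3, 0.150    # left end:  ~2 mV at ~150 mA
V2, I2 = 3e-3, 0.250    # right end: ~3 mV at ~250 mA

print(f"R_left  = {V1/I1*1e3:.1f} mOhm")   # ~13.3 mOhm
print(f"R_right = {V2/I2*1e3:.1f} mOhm")   # ~12.0 mOhm
# Nearly constant V/I across the segment is what an ordinary ohmic
# conductor would give: milliohms, not zero.
```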
A purported video of a fully levitating sample from a replication effort; sorry, I do not have any information beyond this twitter link. But if it is not somehow faked or misrepresented, it seems a pretty clear demonstration of flux-pinned Meissner effect with no visible evidence of cooling. [Edit] Slightly more detail on the video: “Laboratory of Chemical Technology”.
This video is widely believed to be a CGI fake.
Reinforcing this point: I and many acquaintances have looked into the origins of this video over the past two weeks and came up with no substantial proof of its validity.
Had to look up what LK-99 is. Now I wonder, was this the inspiration for the supercriminal motive in “aviation is the most dangerous routine activity”?
Such comments should be under heavy spoilers.
Oh my gosh, you’re absolutely right! My apologies! But now that I’m trying to add them, they’re an option that isn’t showing up in the editor! Do you know how I can add them?
(And if I spoiled that for you, I’m seriously really really really sorry. I hate spoilers, and I’m always riding people about being too loose with them. I can’t believe I did that. Was not thinking. I’m sorry.)
Fortunately, no! But I would have been really upset if I hadn’t read it earlier.
Spoilers can be put in markdown by prefixing the line with “>!”, like this:
>! your spoiler text here
Thanks. Unfortunately that didn’t work when I tried it. Edit: Googled it. “>!” in front worked.
Yeah, I don’t blame you! I’m really glad I didn’t spoil it for you, and sorry again for being careless.
Almost certainly, given the timing, how LK-99 and the story-superconductor were published ahead of schedule, and the focus on prediction markets.
For those of you who don’t know, room-temperature superconductors would be a really big deal for all kinds of things, including AI alignment.
Among many other applications, there is also a decent possibility of building affordable helmet-sized fMRI machines (the currently-giant machines that 3d-map brain activity via blood oxygenation), enough for the living rooms of hundreds of alignment researchers (or even every smartphone), and getting large sample sizes of hard data about what it physically looks like when humans succeed and fail at thinking about alignment. As in, hundreds of thousands of hours of hundreds of people thinking about AI alignment, billions of 3d frames per year, including the hours and minutes leading up to every breakthrough that happened while someone was wearing the device (perhaps there will be a ding whenever a researcher is on the right track).
That’s the best thing I can currently think of. I have no idea what will be discovered by the thousands of actual ML engineers and neurologists who actually get to look at that kind of data (and see what they can do with it).
I think the bigger implication is that it would potentially create lots of room for innovation and progress and prosperity via means other than advancing AI capabilities, assuming the potential gains aren’t totally squandered and strangled by regulation.
If the discovery is real and the short to medium-term economic impact is massive, that might give people who are currently desperate to turn the Dial of Progress forward by any means an outlet other than pushing AI capabilities forward as fast as possible, and also just generally give people more slack and time to think and act marginally more sanely.
The question I’m most interested in right now is, conditioned on this being a real scientific breakthrough in materials science and superconductivity, what are the biggest barriers and bottlenecks (regulatory, technical, economic, inputs) to actually making and scaling productive economic use of the new tech?
Can you go into more detail about this? As far as I’m aware, portable, mass-producible fMRI machines alone would shorten AGI timelines far more than the effect of a big economic transformation diverting attention away from AI (e.g. by contributing valuable layers to foundation models).
Well, one question I’m interested in and don’t know the answer to is, given that the discovery is real, how easy is it to actually get to cheap portable fMRI machines, actually mass produced and not just mass produce-able in theory?
Also, people can already get a lot of fMRI data if they want to, I think? It’s not that expensive or inconvenient. So I’m skeptical that even a 10x or 100x increase in scale / quality / availability of fMRI data will have a particularly big or unique impact on AI or alignment research. Maybe you can build some kind of super-CFAR with them, and that leads to a bunch of alignment progress? But that seems kinda indirect, and something you could also do in some form if everyone is suddenly rich and prosperous and has lots of slack generally.
Oh, right, I should have mentioned that this is on the scale of a 10,000-100,000x increase in fMRI machines, such as one inside the notch of every smartphone, which is something that a ton of people have wanted to invest in for a very long time. The idea of a super-CFAR is less about extrapolating the 2010s CFAR upwards, and more about how CFAR’s entire existence was totally defined by the absence of fMRI saturation, making the fMRI-saturation scenario pretty far out-of-distribution from any historical precedent. I agree that the effects of fMRI saturation would be contingent on how quickly LK-99 shortens the timeline for miniaturization of fMRI machines, and you’d need even more time to get usable results out of a super-CFAR(s).
Also, I now see your point with things like slack and prosperity and other macro-scale societal/civilizational upheavals being larger factors (not to mention siphoning substantial investment dollars away from AI which currently doesn’t have many better alternatives).
Well, for starters: even if it were only as difficult as graphene to manufacture in quantity, ambient-condition superconductors would not see use for a long while yet. You would need better robots to mass-manufacture them, and current robots are too expensive, and you’re right back to needing a fairly powerful level of AGI or you can’t use it.
Your next problem: okay, you can save 6% or more on long-distance power transmission. But it costs an enormous amount of human labor to replace all your wires; see the above case. If mere humans have to do it, it could take 50 years.
There’s the possibility of new forms of compute elements, such as new forms of transistor. The crippling problem here is the way all technology is easiest to evolve from a pre-existing lineage, and it is very difficult to start fresh.
For example, I am sure you have read over the years how graphene or diamond might prove a superior substrate to silicon. Why don’t we see them used for our computer chips? The simplest reason is that you’d be starting over. The first ICs on such a process would be at densities similar to the 1970s. The ‘catch-up’ would go much faster than it did the first time, but it would still take years, probably decades, and meanwhile silicon is still improving. See how OLEDs still have not replaced LCD-based displays despite being outright superior in most metrics.
The same would apply to fundamentally superior superconductor-based ICs. At a minimum you’re starting over. Worst case, lithography processes may not work and you may need nanotechnology to efficiently construct these structures, if they are in fact superconducting in ambient conditions. To unlock nanotechnology you need to do a lot of experiments, and you need a lot of compute, and if you don’t want it to take 50 years you need some way to process all the data and choose the next experiment, and we’re right back to wanting ASI.
Finally, I might point out that while I sympathize with your desire—to not see everyone die from runaway superintelligence—it’s simply orthogonal. There are very few possible breakthroughs that would suddenly make AGI/ASI not worth investing in heavily. Breakthroughs like this one, which would potentially make AGI/ASI slightly cheaper to build and robots even better, actually create more potential ROI from investments in AGI. I can’t really think of any exceptions, to be honest, except some science-fiction device that allows someone to receive data from our future and, with that data, avoid futures where we all die.
I don’t believe there is that much you can do with MRI data to develop treatments on relevant timescales? Like, we’ll probably have the compute advancement long before we have the cognitive enhancement?
Can you explain further? A lot of comments here have already gone into the weeds about how large amounts of fMRI data can contribute heavily to cognitive enhancement.
I see none. Wait, you mean this one?
At minimum, large amounts of fMRI data make it easier to conduct longitudinal investigations of what accelerates or reduces the rate of brain-mass decline with age after ~20 (e.g. would plasmalogens help? would taurine help? what are the associated metabolomics? what does an ANOVA of white matter hyperintensities against each of the metabolites in iollo show? a mass-parallel study of all of this is important [cf marton m from LBF2]). This would help improve the clarity with which experienced people think, help people better vet the accuracy/helpfulness/informativeness of AI models over their lifetimes, and reduce fluid-intelligence decline with age—relevant for helping humans keep up with machines, especially in a world where the average age [esp. the age of people who have stayed in alignment for longer] is increasing to the point where that decline becomes relevant.
Humans have phenomenally poor memory (it worsens with age), and this causes MANY testimonies to be wrong, and many people to say things that aren’t true (and for alignment to happen we NEED people to be as truthful as possible, and especially not inaccurate due to dumb things like brain decline from excess blood glucose due to not combining acarbose/taurine with the shitty ultraprocessed food they do eat...).
RELEVANT:
https://www.frontiersin.org/articles/10.3389/fnagi.2022.895535/full
https://qualiacomputing.com/2022/10/27/on-rhythms-of-the-brain-jhanas-local-field-potentials-and-electromagnetic-theories-of-consciousness/
https://www.sciencedirect.com/science/article/pii/S0035378721006974
https://www.frontiersin.org/articles/10.3389/fnhum.2023.1123014/full
https://advancedconsciousness.org/protocol-003b-preparation-materials/
BTW all these threads are worth discussing on augmentationlab.org (and its discord!)
https://foresight.org/summary/owen-phillips-brain-aging-is-the-key-to-longevity/
More exciting IMO isn’t so much the big data aspect, but just the opportunity for “big individual data”: people getting to watch their own brain state for many hours. E.g. learning when you’re rationalizing, when you’re avoiding something, when you’re deluded, when you’re tired, when you’re really thinking about something else, etc.
Yes, this is exactly the innovation I was thinking about. With superconductors that fit in hats, you can also combine that self-observation with big data, predictive analytics, and thousands of neurologists/ML engineers/psychologists to identify trends and formulate standard strategies, to help people get themselves on the right track. You can basically open-source the research, Auto-GPT-style.
A billion 3d frames per year per 300 people will make a lot of internal phenomena stick out like a sore thumb, especially the internal phenomena that typically lead up to/away from peak alignment thoughtflow. Just have a “ding” sound when someone’s mind is going in the right direction, and a “dong” sound for the wrong directions.
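For what it’s worth, that headline number roughly checks out under modest assumptions; both inputs below are guesses, not sourced:

```python
# Sanity check of the "billion 3d frames per year per 300 people" figure.
# Assumes ~1 volume per second (a typical modern fMRI TR) and ~1,000
# hours of wear per person per year; both numbers are rough guesses.
people         = 300
hours_per_year = 1_000
frames_per_sec = 1.0

frames = people * hours_per_year * 3_600 * frames_per_sec
print(f"{frames:.2e} frames/year")   # ~1.1e9: about a billion
```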
functional Machine Intelligence Research Imaging
I’d definitely like to try that. The right UX would be a number that goes up as you get closer to the target headspace, with milestone numbers along the way, which each give you a reward. It should possibly be coupled with a puzzle game or a set of creative exercises or something. (Games are good because they can provide reward. If a person isn’t already productive it may be because they didn’t find practicing engineering deeply rewarding so this part of it might be important.)
It seems extremely unlikely that these things could be seen in fMRI data.
GPT4 confirms for me that the Meissner effect does not require flux pinning: “Yes, indeed, you’re correct. Flux pinning, also known as quantum locking or quantum levitation, is a slightly different phenomenon from the pure Meissner effect and can play a crucial role in the interaction between a magnet and a superconductor.
In the Meissner effect, a superconductor will expel all magnetic fields, creating a repulsive effect. However, in type-II superconductors, there are exceptions where some magnetic flux can penetrate the material in the form of tiny magnetic vortices. These vortices can become “pinned” in place due to imperfections in the superconductor’s structure.
This flux pinning is the basis of quantum locking, where the superconductor is ‘locked’ in space relative to the magnetic field. This can create the illusion of levitation in any orientation, depending on how the flux was pinned. For instance, a superconductor could be pinned in place above a magnet, below a magnet, or at an angle.
So, yes, it is indeed important to consider flux pinning when discussing the behavior of superconductors in a magnetic field. Thanks for pointing out this nuance!”
I think Sabine is just not used to seeing small pieces of superconductor floating over large magnets. Every Meissner effect video that I can find shows the reverse: small magnets floating on top of pieces of cooled superconductor. This makes sense because it is hard to cool something that is floating in the air.
I would have considered fact-checking to be one of the tasks GPT is least suited to, given its tendency to say made-up things just as confidently as true things. (And also because the questions it’s most likely to answer correctly will usually be ones we can easily look up by ourselves.)
edit: whichever very-high-karma user just gave this a strong disagreement vote, can you explain why? (Just as you voted, I was editing in the sentence ‘Am I missing something about GPT-4?’)