In an impure sample you would see high residual resistance below Tc
Don’t the authors claim to have measured 0 resistivity (modulo measurement noise)?
In the MIRI dialogues from 2021/2022, I thought you said you would update to a 40% chance of AGI by 2040 if AI got an IMO gold medal by 2025? Did I misunderstand, or have you shifted your thinking (if so, how)?
What do you think are the strongest arguments in that list, and why are they weaker than a vague “oh maybe we’ll figure it out”?
It seems like something has to be going wrong if the model assigns higher odds to TAI already being here (~12%) than to TAI being developed between now and 2027 (~11%)? Relatedly, I’m confused by the disclaimer that “we are not updating on the fact that TAI has obviously not yet arrived”. Shouldn’t that fact be baked into the distributions for each parameter (particularly the number of FLOPs needed to reach TAI)?
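To illustrate the kind of “baking in” I have in mind, here’s a toy Monte Carlo sketch; every number in it is a made-up placeholder, not a parameter from the actual report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: log10(FLOPs needed for TAI) is normally distributed, and
# available compute grows by a fixed number of orders of magnitude per
# year from a 2022 baseline. All numbers are hypothetical placeholders.
log_flops_needed = rng.normal(31, 3, size=100_000)
log_flops_2022 = 26.0     # hypothetical baseline
oom_per_year = 0.5        # hypothetical growth rate

# Year in which available compute first crosses each sampled threshold.
tai_year = 2022 + (log_flops_needed - log_flops_2022) / oom_per_year

print("P(TAI already here):", (tai_year <= 2022).mean())
print("P(TAI in 2022-2027):", ((tai_year > 2022) & (tai_year <= 2027)).mean())

# Updating on "TAI has obviously not yet arrived" just means discarding
# the samples where the model says TAI already happened and renormalizing:
survived = tai_year[tai_year > 2022]
print("P(TAI in 2022-2027 | not yet here):", (survived <= 2027).mean())
```

In a model like this, any probability mass on “TAI is already here” is mass the disclaimer is refusing to redistribute, which seems like why the two numbers can end up looking so strange side by side.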
Well... Eliezer does think we’re doomed, so this doesn’t necessarily contradict his worldview.
Minor curiosity: What was the context behind Asimov predicting in 1990 that permanent space cities would be built within 10 years? It seems like a much wilder leap than any of his other predictions.
Would be very curious to hear thoughts from the people that voted “disagree” on this post
Maybe you could measure how effectively people pass e.g. a multiple-choice version of an Intellectual Turing Test (on how well they can emulate the viewpoint of people concerned about AI safety) after hearing the proposed explanations.
[Edit: To be explicit, this would help further John’s goals (as I understand them) because it ideally tests whether the AI safety viewpoint is being communicated in such a way that people can understand and operate the underlying mental models. This is better than testing how persuasive the arguments are because it is (a) more in line with general principles of epistemic virtue and (b) more likely to persuade people iff the specific mental models underlying AI safety concern are correct.
One potential issue would be people bouncing off the arguments early and never getting around to building their own mental models, so maybe you could test for succinct/high-level arguments that successfully persuade target audiences to take a deeper dive into the specifics? That seems like a much less concerning persuasion target to optimize, since the worst case is people being wrongly persuaded to “waste” time thinking about the same stuff the LW community has been spending a ton of time thinking about for the last ~20 years]
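A minimal sketch of how the pre/post measurement could be scored (the question format and scoring rule here are purely hypothetical, just to make the proposal concrete):

```python
# Hypothetical scoring for the multiple-choice ITT idea above: each item
# asks which answer an informed AI-safety proponent would give, and the
# score is the fraction of items answered accordingly.
def itt_score(answers: list[str], key: list[str]) -> float:
    assert len(answers) == len(key)
    return sum(a == k for a, k in zip(answers, key)) / len(key)

# The effect of an explanation is the average post-exposure score minus
# the average pre-exposure score (ideally compared against a control
# group that heard no explanation).
def explanation_effect(pre_scores: list[float], post_scores: list[float]) -> float:
    return sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)
```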
I strongly prefer the “dying with dignity” mentality for 3 basic reasons:
1. As other people have mentioned, “playing to your outs” is too easy to misinterpret as conditioning on comfortable improbabilities, no matter how carefully you try to draw the distinctions.
2. Relatedly, focusing on “playing to your outs” (especially if you do so for emotional reasons) may make it harder to stay grounded in accurate models of reality (models that may mostly output “we will die soon”).
3. Operating under the mindset that death is likely, while AGI is still some ways off and easy to ignore, seems like it ought to make it easier to stay emotionally resilient and ready to exploit miracle opportunities if/when AGI is looming and impossible to ignore.
Of these, the 3rd feels the most important to me, partly because I’ve seen it discussed least. It seems like if Eliezer’s basic model is right, a significant portion of the good outcomes require some kind of miracle occurring at crunch time, which will presumably be easier to obtain if key players are emotionally prepared and not suddenly freaking out for the first time (on an emotional/subconscious level). I know basically nothing about psychology, but isn’t it a bad sign if you retreat to “oh, death with dignity is unmotivating, let’s just focus on our outs” when AGI is less salient?
Wait why are your predictions for Brazil so far from the market? As of right now, there are 180,000 shares of Bolsonaro on the orderbook under 50c on FTX (avg price of 44c if you buy them all).
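For concreteness, the rough arithmetic implied by those numbers (ignoring fees and capital lockup):

```python
# Orderbook snapshot from the comment above.
shares = 180_000
avg_price = 0.44

cost = shares * avg_price        # dollars to sweep every share under 50c
payout_if_yes = shares * 1.00    # each share pays $1 if Bolsonaro wins

print(f"cost to buy the book: ${cost:,.0f}")                      # $79,200
print(f"profit if Bolsonaro wins: ${payout_if_yes - cost:,.0f}")  # $100,800
# The position is +EV iff your P(Bolsonaro wins) exceeds the 0.44
# average fill price.
```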
Yeah, it’s definitely against Polymarket’s terms of service, but not against US law (otherwise they wouldn’t be complying with the prohibition on offering their services to US customers).
FWIW it is totally legal for Americans to trade on polymarket via a VPN or similar; it’s just not legal for polymarket itself to offer services to people with US IP addresses
Is there currently a supply shortage of vaccines?
Yep, I wanted to experiment with a central example of a comment that should be in the “downvote/agree” quadrant, since that seemed like the least likely to occur naturally. It’s nice to see the voting system is working as intended.
Yudkowsky is so awesome!!
I haven’t done much research on this, but from a naive perspective, spending 4 billion dollars to move up vaccine access by a few months sounds incredibly unlikely to be a good idea? Is the idea that it is more effective than standard global health interventions in terms of QALYs or a similar metric, or that there’s some other benefit that is incommensurable with other global health interventions? (This feels like asking the wrong question but maybe it will at least help me understand your perspective)
Wait, how do you get to a 17-25% chance of a crisis situation if there’s only a 2.5% chance of omicron causing severe disease in vaccinated/previously infected people? Isn’t that the vast majority of people in the US?
My uninformed impression is that an “adjuvant” is just something that stimulates increased immune response, which has historically been an additional chemical (such as an aluminum salt) added to vaccines. The mRNA vaccines do not contain these chemicals, but some people confusingly refer to the lipid nanoparticles that surround and protect the mRNA as adjuvants (because they also help increase immune response). I haven’t seen any evidence that these lipid nanoparticles are the kind of adjuvants that might mitigate OAS, but that doesn’t mean much because I’m a total nonexpert.
hopefully fixed?
What do they have against AI? Seems like the impact on regular people has been pretty minimal. Also, if GPT-4-level technology were allowed to fully mature and diffuse to a wide audience without increasing in base capability, it seems like the impact on everyone would be hugely beneficial.