A novel theory of victory is human extinction.
I do not personally agree with it, but it is supported by people like Hans Moravec and Richard Sutton, who believe AI to be our “mind children” and that humans should “bow out when we can no longer contribute”.
William the Kiwi
I recommend the follow up work happens.
This would depend on whether algorithmic progress can continue indefinitely. If it can, then yes, the full Butlerian Jihad would be required. If it can't, whether due to physical limitations or enforcement, then only computers above a certain scale would need to be controlled or destroyed.
There is an AI x-risk documentary currently being filmed, An Inconvenient Doom. https://www.documentary-campus.com/training/masterschool/2024/inconvenient-doom It covers some aspects of AI safety, but doesn't focus on it exclusively.
I also agree that point 5 is the main crux.
In the description of point 5, the OP says "Proving this assertion is beyond the scope of this post". I presume the proof of the assertion is made elsewhere. Can someone post a link to it?
I’m thirty-something. This was about 7 years ago. From the inhibitors? Nah. From the lab: probably.
We still smell plenty of things in a university chemistry lab, but I wouldn’t bother with that kind of test for an unknown compound. Just go straight to NMR and mass spec, maybe IR depending on what you guess you are looking for.
As a general rule don’t go sniffing strongly, start with carefully wafting. Or maybe don’t, if you truly have no idea what it is.
Most of us aren’t dead. Just busy somewhere else.
I used to work in a chemistry research lab. For part of that I made acetylcholinesterase inhibitors for potential treatment of Alzheimer's. These are neurotoxins. As a general rule I didn't handle more than 10 lethal doses at once; however, on one occasion I inhaled a small amount of the aerosolized powder, started salivating, and pissed my pants a little.

As for tasting things, we made an effort to not let that happen. However, as mentioned above, some sweeteners are very potent: a few micrograms spilt on your hands, followed by washing, could leave many hundreds of nanograms behind. I could see how someone would notice this if they ate lunch afterwards.
While tasting isn't common, smelling is. Many new chemicals would be carefully smelt, as this often gave a quick indication of whether something novel had happened. Some chemical reactions can be tracked via smell. While not very precise, it is much faster than running an NMR.
I think this distinction between “control” and “alignment” is important and not talked about enough.
Meta question: is the above picture too big?
Ok, so the support score is influenced non-linearly by donor score. Is there a particular donor who donated to the 22 highest-ranked projects but did not donate to those ranked 23rd or lower?
I have graphed donor score vs rank for the top GiveWiki donors. Does this include all donors in the calculation or are there hidden donors?
We see a massive drop in score from the 22nd to the 23rd project. Can you explain why this is occurring?
Thank you for writing this, Igor. It helps highlight a few of the biases that commonly influence people's decision-making around x-risk. I don't think people talk about this enough.
I was contemplating writing a similar post to this around psychology, but I think you have done a better job than I was going to. Your description of 5 hypothetical people communicates the idea more smoothly than what I was planning. Well done. The fact that I feel a little upset that I didn't write something like this sooner, combined with the fact that the other comment has talked about motivated reasoning, produces an irony that is not lost on me.
I agree with your sentiment that most of this is influenced by motivated reasoning.
I would add that “Joep” in the Denial story is motivated by cognitive dissonance, or rather the attempt to reduce cognitive dissonance by discarding one of the two ideas “x-risk is real and gives me anxiety” and “I don’t want to feel anxiety”.
In the People Don’t Have Images story, “Dario” is likely influenced by the availability heuristic, where he is attempting to estimate the likelihood of a future event based on how easily he can recall similar past events.
I would agree that people lie way more than they realise. Many of these lies are self-deception.
We can’t shut it all down.
Why do you personally think this is correct? Is it that humanity does not know how to shut it down? Or is incapable? Or unwilling?
This post makes a range of assumptions, and looks at what is possible rather than what is feasible. You are correct that this post is attempting to approximate the computational power of a Dyson sphere and compare it to an approximation of the computational power of all humans alive. Since posting, the author has been made aware that there are multiple ways to break the Landauer Limit. I agree that these calculations may be off by an order of magnitude, but even so, that doesn't break the conclusion that "the limit of computation, and therefore intelligence, is far above all humans combined".
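As a rough illustration of the kind of back-of-envelope comparison being discussed, here is a sketch. All of the figures are assumed round numbers (solar luminosity for a full Dyson sphere, ~1e16 ops/s per human brain, ~8 billion humans, 300 K operating temperature), not values taken from the post:

```python
import math

# Back-of-envelope sketch; every figure below is an assumed round number.
K_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 300.0                             # assumed operating temperature, K
E_PER_BIT = K_B * T * math.log(2)     # Landauer limit: minimum energy per bit erasure

SUN_LUMINOSITY = 3.828e26             # W, power a full Dyson sphere could capture
ops_dyson = SUN_LUMINOSITY / E_PER_BIT  # bit erasures per second at the limit

OPS_PER_BRAIN = 1e16                  # assumed ops/s per human brain
POPULATION = 8e9                      # assumed number of humans
ops_humans = OPS_PER_BRAIN * POPULATION

print(f"Dyson sphere limit: {ops_dyson:.2e} ops/s")
print(f"All humans:         {ops_humans:.2e} ops/s")
print(f"Ratio:              {ops_dyson / ops_humans:.2e}")
```

Even if every assumed figure is off by an order of magnitude or two, the ratio comes out around twenty orders of magnitude, which is why the conclusion survives that kind of error.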
Yeah, you could, but you would be waiting a while. Your reply and 2 others have made me aware that this post's limit is too low.
[EDIT: spelling]
Is humanity expanding beyond Earth a requirement or a goal in your world view?