I don’t see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn’t that what Jaynes is pointing at?
Good post, I hope to read more from you
Yeah, sorry about that. I didn’t put much effort into my last comment.
Defining intelligence is tricky, but to paraphrase EY, it’s probably wise not to get too specific since we don’t fully understand intelligence yet. In the past, people didn’t really know what fire was. Some would just point to it and say, “Hey, it’s that shiny thing that burns you.” Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don’t think the discussion about AGI and doom scenarios gets much benefit from a super precise definition of intelligence. A broad definition that most people agree on should be enough, like “Intelligence is the capacity to create models of the world and use them to think.”
But I do think we should aim for a clearer definition of AGI (yes, I realize ‘intelligence’ is part of the acronym). What I mean is, we could keep a vaguer definition of intelligence, but AGI should be better defined. I’ve noticed different uses of ‘AGI’ here on Less Wrong. One definition is a machine that can reason about a wide variety of problems (some of which may be new to it) and learn new things. Under this definition, GPT-4 is pretty much an AGI. Another common definition on this forum is that an AGI is a machine capable of wiping out all humans. I believe we need to separate these two definitions, as that’s really where the core of the crux lies.
What is an AGI? I have seen a lot of “no true Scotsman” around this one.
I guess the crux here for most people is the timescale. I actually agree that things can eventually get very bad if there is no progress in alignment etc., but the situation is totally different if we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don’t have that much time because AGI will kill us all as soon as it appears.
The standard argument you will probably hear is that an AGI will be capable of killing everyone because it can think so much faster than humans. I haven’t yet seen doomers seriously engage with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.
Is there any source you would recommend for learning more about the specific Mormon practices you are referring to?
The Babbage example is the perfect one. Thank you, I will use it
This would clearly put my point in a different place from the doomers
I would also place myself in the upper right quadrant, close to the doomers, but I am not one of them.
The reason is that the exact meaning of “tractable for an SI” is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge; it is also creating the economic conditions that make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very, very few factories (and for some specific technologies the number of factories is n=1). Will we get there eventually? Yes. But not at the speed that EY fears.
I think you summarised pretty well my position in this paragraph:
“I think another common view on LW is that many things are probably possible in principle, but would require potentially large amounts of time, data, resources, etc. to accomplish, which might make some tasks intractable, if not impossible, even for a superintelligence. ”
So I do think that EY believes in “magic” (even more after reading his tweet), but some people might not like the term and I understand that.
In my case, using the word magic does not refer only to breaking the laws of physics. Magic might also refer to someone who holds such a simplified model of the world that they think you can build, in a matter of days, all those factories, machines, and working nanotechnology (on the first try), then successfully deploy them everywhere, killing everyone; that we will get to that point in a matter of days; AND that there won’t be any other SI that could work to prevent those scenarios. I don’t think I am misrepresenting EY’s point of view here; correct me otherwise.
If someone believed that a good group of engineers working for one week on a spacecraft model could successfully land it, 30 years later, on an asteroid close to Proxima Centauri, would you call it magical thinking? I would. There is nothing beyond the realm of physics here! But it assumes so many things and it is so stupidly optimistic that I would simply dismiss it as nonsense.
I agree with this take, but do those plans exist, even in theory?
This is fantastic. Is there anything remotely like this available for Discord?
I don’t see how that implies that everyone dies.
It’s like saying weapons are dangerous, imagine what would happen if they fall into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.
Thanks for this post. It’s refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all
There are some other assumptions that go into Eliezer’s model that are required for doom. I can think of one very clearly, which is:
5. The transition to that god-AGI will be so quick that other entities won’t have time to also reach superhuman capabilities. There are no “intermediate” AGIs that can be used to work on alignment-related problems or even as a defence against unaligned AGIs.
I wish you recover soon with all my heart
I believe I have found a perfect example where the “Medical Model is Wrong,” and I am currently working on a post about it. However, I am swamped with other tasks, so I wonder if I will ever finish it.
In my case, I am highly confident that my model is correct, while the majority of the medical community is wrong. Using your bullet points:
1. Personal: I have personally experienced this disease and know that the standard treatments do not work.
2. Anecdotal: I am aware of numerous cases where the conventional treatment has failed. In fact, I am not aware of any cases where it has been successful.
3. Research papers: I came across a research paper from 2022 that shares the same opinion as mine.
4. Academics: Working in academia, I am well aware of its limitations. In this specific case, there is a considerable amount of inertia and a lack of communication between different subfields, as accurately described in the book “Inadequate Equilibria” by EY.
5. Medical: Most doctors hold the same opinion because they are influenced by their education. Therefore, if 10 doctors provide the same response, it should not be considered as 10 independent opinions.
6. Countercultural experts: No idea here.
7. Communities: I have not explored this extensively, but completing the post I am talking about might be the beginning.
8. Someone claims to have completely made the condition disappear using arbitrary methods. I am not personally aware of any such cases, but I suspect that it is feasible and could potentially be relatively simple.
9. Models: I have a precise mechanistic model of the disease and why the treatments fail to cure it. I work professionally in a field closely related to this disease.
In summary, my confidence comes from: 1. being an expert in a closely related field and understanding what other people are missing and, above all, why they are missing it; 2. having a mechanistic model; and 3. finding publications that express similar opinions.
Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do make a strong connection between them, but not everyone. I’m in the camp that we are 2 or 3 years away from AGI (it’s hard to see why GPT-4 does not qualify as that), but I don’t think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.
Has he tried interacting with GPT-4 personally? I can’t think of a better way. It convinced even Bryan Caplan, who had bet publicly against it.
This should be curated. Just reading this list is a good exercise for those people who attribute a very high probability to a single possible scenario.