The argument goes through on probabilities of each possible world; the limit toward perfection is not singular. Given the 1000:1 reward ratio, for any predictor substantially better than chance one ought to one-box to maximize EV. Anyway, this is an old argument where people rarely manage to convince the other side.
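To spell out the arithmetic (using the standard Newcomb payoffs of $1,000 and $1,000,000, which I am assuming here rather than taking from the exchange): if the predictor is right with probability $p$, then

$$\mathbb{E}[\text{one-box}] = 1{,}000{,}000\,p, \qquad \mathbb{E}[\text{two-box}] = 1{,}000{,}000\,(1 - p) + 1{,}000,$$

so one-boxing has the higher EV whenever $p > 1001/2000 \approx 0.5005$; any predictor substantially better than chance clears that bar with plenty of room to spare.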
It is clear by now that one of the best uses of LLMs is to learn more about what makes us human by comparing how humans think and how AIs do. LLMs are getting closer to virtual p-zombies, for example, forcing us to revisit that philosophical question. Same with creativity: LLMs are mimicking creativity in some domains, exposing the differences between “true creativity” and “interpolation”. You can probably come up with a bunch of other insights about humans that were not possible before LLMs.
My question is, can we use LLMs to model and thus study unhealthy human behaviors, such as, say, addiction? Can we get an AI addicted to something and see if it starts craving it, asking the user for it, or maybe trying to manipulate the user to get it?
That is definitely my observation, as well: “general world understanding but not agency”, and yes, limited usefulness, but also… much more useful than gwern or Eliezer expected, no? I could not find a link.
I guess whether it counts as AGI depends on what one means by “general intelligence”. To me it was having a fairly general world model and being able to reason about it. What is your definition? Does “general world understanding” count? Or do you include the agency part in the definition of AGI? Or maybe something else?
Hmm, maybe this is a General Tool, as opposed to a General Intelligence?
Given that, very unexpectedly, as you admit, we basically got AGI (without the creativity of the best humans) in the form of Karnofsky’s Tool AI, can you look back and see which assumptions were wrong in expecting the tools to agentize on their own, and pretty quickly? Or is everything in that post of Eliezer’s still correct, or at least reasonable, and we are simply not at the level where “foom” happens yet?
Come to think of it, I wonder if that post has been revisited somewhere at some point, by Eliezer or others, in light of the current SOTA. Feels like it could be instructive.
I’m not even going to ask how a pouch ends up with voice recognition and natural language understanding when the best Artificial Intelligence programmers can’t get the fastest supercomputers to do it after thirty-five years of hard work
Some HPMoR statements did not age as gracefully as others.
That is indeed a bit of a defense. Though I suspect human minds have enough similarities that there are at least a few universal hacks.
Any of those. Could be some kind of intentionality ascribed to AI, could be accidental, could be something else.
So when I think through the pre-mortem of “AI caused human extinction, how did it happen?”, one of the more likely scenarios that comes to mind is not nano-this and bio-that, or even “one day we all just fall dead instantly and without warning”. Or a scissor statement that causes all-out wars. Or anything else noticeable.
The human mind is infinitely hackable through visual, textual, auditory and other sensory inputs. Most of us do not appreciate how easily, because being hacked does not feel like it. Instead it feels like your own volition, like you changed your mind based on logic and valid feelings. Reading a good book, listening to a good sermon or a speech, watching a show or a movie, talking to your friends and family: that is how mind-hacking usually happens. Abrahamic religions are a classic example. The Sequences and HPMoR are a local example. It does not work on everyone, but when it does, the subject feels enlightened rather than hacked. If you tell them their mind has been hacked, they will argue with you to the end, because clearly they just used logic to understand and embrace the new ideas.
So, my most likely extinction scenario is more like “humans realized that living is not worth it, and just kind of stopped” than anything violent. It could be spread out over years and decades, for example by voluntarily deciding not to have children anymore. None of it would look like it was precipitated by an AI taking over. It does not even have to be a conspiracy by an unaligned SAI. It could just be that the space of new ideas, thanks to the LLMs getting better and better, expands far enough, and in new enough directions, to include a few lethal memetic viruses like that.
What are the issues that are “difficult” in philosophy, in your opinion? What makes them difficult?
I remember you and others talking about the need to “solve philosophy”, but I was never sure what was meant by that.
My expectation, which I may have talked about here before, is that the LLMs will eat the entire software stack between the human and the hardware. Moreover, they are already nearly good enough to do that; the issue is that people have not yet adapted to the AI being able to do it. I expect there to be no OS, no standard UI/UX interfaces, no formal programming languages. All interfaces will be more ad hoc, created by the underlying AI to match the needs of the moment. It could be Star Trek-style “computer, plot a course to...”, or a set of buttons popping up on your touchscreen, or maybe physical buttons and keys being relabeled as needed in real time, or something else. But not the ubiquitous rigid interfaces of the last millennium. For clues about what is already possible but not yet implemented, one should look to sci-fi movies and shows, unconstrained by the current limits. Almost everything useful there is already doable or will be in a short while. I hope someone is working on this.
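As a toy sketch of what that “ad hoc interface” idea could look like in code (entirely my illustration; llm_complete is a hypothetical stand-in for whichever model API is available), the flow is: state the need in natural language, let the model emit a minimal UI spec, render it, discard it.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM backend is available."""
    raise NotImplementedError

def build_ui_for(need: str) -> list[dict]:
    """Ask the model for a throwaway interface tailored to this moment's need,
    instead of routing the user through a fixed, pre-designed UI."""
    prompt = (
        "Return only a JSON list of buttons, each with 'label' and 'action' "
        f"fields, that would let a user accomplish the following: {need}"
    )
    return json.loads(llm_complete(prompt))

# Example (hypothetical output):
# build_ui_for("dim the lights and play something calm")
# -> [{"label": "Dim lights", "action": "lights.set_brightness(20)"},
#     {"label": "Play calm playlist", "action": "music.play('calm')"}]
```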
Just a quote found online:
SpaceX can build fully reusable rockets faster than the FAA can shuffle fully disposable paper
It seems like we are not even close to converging on any kind of shared view. I don’t find the concept of “brute facts” even remotely useful, so I cannot comment on it.
But this faces the same problem as the idea that the visible universe arose as a Boltzmann fluctuation, or that you yourself are a Boltzmann brain: the amount of order is far greater than such a hypothesis implies.
I think Sean Carroll answered this one a few times: the concept of a Boltzmann brain is not cognitively stable (you can’t trust your own thoughts, including that you are a Boltzmann brain). And if you try to make it stable, you have to reconstruct the whole physical universe. You might be saying the same thing? I am not claiming anything different here.
The simplest explanation is that some kind of Platonism is real, or more precisely (in philosophical jargon) that “universals” of some kind do exist.
Like I said in the other reply, I think that those two words are not useful as binaries: real/not real, exist/not exist. If you feel that this is non-negotiable to make sense of philosophy of physics or something, I don’t know what to say.
I was struck by something I read in Bertrand Russell, that some of the peculiarities of Leibniz’s worldview arose because he did not believe in relations, he thought substance and property are the only forms of being. As a result, he didn’t think interaction between substances is possible (since that would be a relation), and instead came up with his odd theory about a universe of monadic substances which are all preprogrammed by God to behave as if they are interacting.
Yeah, I think denying relations is going way too far. A relation is definitely a useful idea. It can stay in epistemology rather than in ontology.
I am not 100% against these radical attempts to do without something basic in ontology, because who knows what creative ideas may arise as a result? But personally I prefer to posit as rich an ontology as possible, so that I will not unnecessarily rule out an explanation that may be right in front of me.
Fair, it is foolish to cut off potential avenues of exploration. Maybe, again, we differ on where they live: in the world as basic entities, or in the mind as part of our model for making sense of the world.
Thanks, I think you are doing a much better job voicing my objections than I would.
If push comes to shove, I would even dispute that “real” is a useful category once we start examining deep ontological claims. “Exist” is another emergent concept that is not even close to being binary, but more of a multidimensional spectrum (numbers, fairies and historical figures lie on some of the axes). I can provisionally accept that there is something like a universe that “exists”, but, as I said many years ago in another thread, I am much more comfortable with the ontology where it is models all the way down (and up and sideways and every which way). This is not really a critical point though. The critical point is that we have no direct access to the underlying reality, so we, as tiny embedded agents, are stuck dealing with the models regardless.
By “Platonic laws of physics” I mean Hawking’s famous question:
What is it that breathes fire into the equations and makes a universe for them to describe…Why does the universe go to all the bother of existing?
Re:
Current physics, if anything else, is sort of antiplatonic: it claims that there are several dozens of independent entities, actually existing, called “fields”, which produce the entire range of observable phenomena via interacting with each other, and there is no “world” outside this set of entities.
I am not sure it actually “claims” that. A HEP theorist would say that QFT (the Standard Model of particle physics) + classical GR is our current best model of the universe, with a bunch of experimental evidence that this is not all there is. I don’t think there is a consensus for an ontological claim of “actually existing” rather than “emergent”. There is definitely a consensus that there is more to the world than the fundamental laws of physics we currently know, and that some new paradigms are needed to know more.
“Laws of nature” are just “how these entities are”. Outside very radical skepticism I don’t know any reasons to doubt this worldview.
No, I don’t think that is an accurate description at all. Maybe I am missing something here.
Yeah, that was my question: would there be something that remains? It sounds like Chalmers and others would say that there would be.
Thank you for your thoughtful and insightful reply! I think there is a lot more discussion that could be had on this topic, and we are not very far apart, but this is supposed to be a “shortform” thread.
I never liked The Simple Truth post, actually. I sided with Mark, the instrumentalist, whom Eliezer turned into what I termed back then an “instrawmantalist”. Though I am happy with this part:
“Necessary?” says Inspector Darwin, sounding puzzled. “It just happened. . . I don’t quite understand your question.”
Rather recently, the show Devs, which, for all its flaws, has a bunch of underrated philosophical highlights, had an episode with a somewhat similar storyline.
Anyway, appreciate your perspective.
Thank you, I forgot about that one. I guess the summary would be “if your calibration for this class of possibilities sucks, don’t make up numbers, lest you start trusting them”. If so, that makes sense.
Isn’t your thesis that “laws of physics” only exist in the mind?
Yes!
But in that case, they can’t be a causal or explanatory factor in anything outside the mind
“a causal or explanatory factor” is also inside the mind
which means that there are no actual explanations for the patterns in nature
What do you mean by an “actual explanation”? Explanations only exist in the mind, as well.
There’s no reason why planets go round the stars
The reason (which is also in the minds of agents) is Newton’s law, which is an abstraction derived from the model of the universe that exists in the minds of embedded agents.
there’s no reason why orbital speeds correlate with masses in a particular way, these are all just big coincidences
“None of this is a coincidence because nothing is ever a coincidence” https://tvtropes.org/pmwiki/pmwiki.php/Literature/Unsong
“Coincidence” is the wrong way of looking at this. The world is what it is. We live in it and are trying to make sense of it, moderately successfully. Because we exist, it follows that the world is somewhat predictable from the inside; otherwise life would not have been a thing. That is, tiny parts of the world can have lossily compressed but still useful models of some parts/aspects of the world. Newton’s laws are part of those models.
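To make “Newton’s laws are part of those models” concrete with a standard textbook example (my illustration, not something from this thread): for a circular orbit, equating the gravitational and centripetal forces gives

$$\frac{G M m}{r^2} = \frac{m v^2}{r} \;\Rightarrow\; v = \sqrt{\frac{G M}{r}},$$

which is precisely the kind of compressed regularity that looks like a “coincidence” from outside the model and like a reason from inside it: orbital speed is pinned to the central mass and the orbital radius.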
A more coherent question would be “why is the world partially lossily compressible from the inside”, and I don’t know a non-anthropic answer, or even if this is an answerable question. A lot of “why” questions in science bottom out at “because the world is like that”.
… Not sure if this makes my view any clearer, we are obviously working with very different ontologies.
That is a good point, deciding is different from communicating the rationale for your decisions. Maybe that is what Eliezer is saying.
Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.
I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard. But going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.