Thank you, I forgot about that one. I guess the summary would be “if your calibration for this class of possibilities sucks, don’t make up numbers, lest you start trusting them”. If so, that makes sense.
Isn’t your thesis that “laws of physics” only exist in the mind?
Yes!
But in that case, they can’t be a causal or explanatory factor in anything outside the mind
“a causal or explanatory factor” is also inside the mind
which means that there are no actual explanations for the patterns in nature
What do you mean by an “actual explanation”? Explanations only exist in the mind, as well.
There’s no reason why planets go round the stars
The reason (which is also in the minds of agents) is Newton’s law, which is an abstraction derived from the model of the universe that exists in the minds of embedded agents.
there’s no reason why orbital speeds correlate with masses in a particular way, these are all just big coincidences
“None of this is a coincidence because nothing is ever a coincidence” https://tvtropes.org/pmwiki/pmwiki.php/Literature/Unsong
“Coincidence” is the wrong way of looking at this. The world is what it is. We live in it and are trying to make sense of it, moderately successfully. Because we exist, it follows that the world is somewhat predictable from the inside; otherwise life would not have been a thing. That is, tiny parts of the world can have lossily compressed but still useful models of some parts/aspects of the world. Newton’s laws are part of those models.
A more coherent question would be “why is the world partially lossily compressible from the inside”, and I don’t know a non-anthropic answer, or even if this is an answerable question. A lot of “why” questions in science bottom out at “because the world is like that”.
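As a toy illustration of what I mean by a lossily compressed but useful model (a sketch only; the planet values are approximate textbook numbers): a single fitted exponent summarizes a whole table of orbital observations.

```python
# Toy illustration: a "law" as lossy compression. Approximate textbook
# values for six planets (semi-major axis in AU, orbital period in years).
import numpy as np

a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])   # AU
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457]) # years

# Fit T = a**k by linear regression in log-log space.
k, _ = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent: {k:.3f}")       # ~1.5, i.e. Kepler's third law
print(np.max(np.abs(a**k - T) / T))      # small relative error across planets
```

One number, plus the model form, reproduces the whole table to within a small error: lossy, but useful.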
… Not sure if this makes my view any clearer; we are obviously working with very different ontologies.
That is a good point, deciding is different from communicating the rationale for your decisions. Maybe that is what Eliezer is saying.
I think you are missing the point, and taking cheap shots.
So, is he saying that he is calibrated well enough to have a meaningful “action-conditional” p(doom), but most people are not? And that they should not engage in “fake Bayesianism”? But then, according to the prevailing wisdom, how would one decide how to act if they cannot put a number on each potential action?
I notice my confusion when Eliezer speaks out against the idea of expressing p(doom) as a number: https://x.com/ESYudkowsky/status/1823529034174882234
I mean, I don’t like it either, but I thought his whole point about the Bayesian approach was to express odds and calculate expected values.
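For concreteness, this is the kind of calculation I understood the Bayesian approach to prescribe (a minimal sketch; the action names, probabilities, and utilities are all made up for illustration):

```python
# Minimal sketch of the textbook Bayesian decision procedure:
# pick the action maximizing expected utility. All numbers are made up.
p_doom = {"pause": 0.10, "race": 0.60}   # hypothetical action-conditional p(doom)
utility = {"doom": -1000.0, "ok": 100.0}

def expected_utility(action: str) -> float:
    p = p_doom[action]
    return p * utility["doom"] + (1 - p) * utility["ok"]

best = max(p_doom, key=expected_utility)
print({a: expected_utility(a) for a in p_doom}, "->", best)
```

If you refuse to put numbers on the actions, it is not obvious what replaces this procedure.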
Hmm, I am probably missing something. I thought if a human honestly reports a feeling, we kind of trust them that they felt it? So if an AI reports a feeling, and then there is a conduit where the distillate of that feeling is transmitted to a human, who reports the same feeling, it would go some ways toward accepting that the AI had qualia? I think you are saying that this does not address Chalmers’ point.
I am not sure why you are including the mind here; maybe we are talking at cross purposes. I am not making statements about the world, only about the emergence of the laws of physics as written in textbooks, which exist as abstractions across human minds. If you are Laplace’s demon, you can see the whole world, and if you wanted to zoom into the level of “planets going around the sun”, you could, but there is no reason for you to. This whole idea of “facts” is a human thing. We, as embedded agents, are emergent patterns that use this concept. I can see how it is natural to think of facts, planets or numbers as ontologically primitive or something, not as emergent, but this is not the view I hold.
Well, what happens if we do this and we find out that these representations are totally different? Or, moreover, that the AI’s representation of “red” does not seem to align (either in meaning or in structure) with any human-extracted concept or perception?
I would say that it is a fantastic step forward in our understanding, resolving empirically a question we did not know an answer to.
How do we then try to figure out the essence of artificial consciousness, given that comparisons with what we (at that point would) understand best, i.e., human qualia, would no longer output something we can interpret?
That would be a great stepping stone for further research.
I think it is extremely likely that minds with fundamentally different structures perceive the world in fundamentally different ways, so I think the situation in the paragraph above is not only possible, but in fact overwhelmingly likely, conditional on us managing to develop the type of qualia-identifying tech you are talking about.
I’d love to see this prediction tested, wouldn’t you?
The testing seems easy: one person feels the quale, the other reports the feeling, they compare. What am I missing?
Thanks for the link! I thought it was a different, related but harder problem than the one described in https://iep.utm.edu/hard-problem-of-conciousness. I assume we could also try to extract what an AI “feels” when it speaks of the redness of red, and compare it with a similar redness extract from the human mind. Maybe even try to cross-inject them. Or would there still be more to answer?
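One hedged sketch of what “compare the extracts” could look like in practice is representational similarity analysis: rather than comparing the raw representations, check whether the two systems agree on which stimuli resemble which. Everything below is stand-in random data, just to show the shape of the computation:

```python
# Sketch: representational similarity analysis between an AI's and a
# human's responses to the same stimuli. All data here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
stimuli = 20                                   # e.g. 20 color patches shown to both
ai_reps = rng.normal(size=(stimuli, 512))      # hypothetical AI activations
human_reps = rng.normal(size=(stimuli, 128))   # hypothetical neural recordings

def similarity_matrix(x):
    # Pairwise correlations between stimulus representations.
    x = x - x.mean(axis=1, keepdims=True)
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

# Compare geometry, not raw vectors: correlate the two similarity matrices.
iu = np.triu_indices(stimuli, k=1)
rsa = np.corrcoef(similarity_matrix(ai_reps)[iu],
                  similarity_matrix(human_reps)[iu])[0, 1]
print(f"representational alignment: {rsa:.2f}")  # ~0 here: unrelated random data
```

High alignment would mean the two systems carve up “red and its neighbors” the same way; it would not, by itself, settle whether either system feels anything.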
How to make a dent in the “hard problem of consciousness” experimentally: suppose we understand the brain well enough to figure out what makes one experience specific qualia, then stimulate the neurons in a way that makes the person experience them. Maybe even link two people with a “qualia transducer” such that when one person experiences “what it’s like”, the other person can feel it, too.
If this works, what would remain from the “hard problem”?
Chalmers:
To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?
If you can distill, store and reproduce this experience on demand, what remains? Or, at least, what would/does Chalmers say about it?
There is an emergent reason, one that lives in the minds of the agents. The universe just is. In other words, if you are a hypothetical Laplace’s demon, you don’t need the notion of a reason; you see it all at once: past, present, and future.
I think I articulated this view here before, but it is worth repeating. It seems rather obvious to me that there are no “Platonic” laws of physics, and there is no Platonic math existing in some ideal realm. The world just is, and everything else is emergent. There are reasonably durable patterns in it, which can sometimes be usefully described as embedded agents. If we squint hard, and know what to look for, we might be able to find a “mini-universe” inside such an agent, which is a poor-fidelity model of the whole universe, or, more likely, of a tiny part of it. These patterns we call agents appear to be fairly common and multi-level, and if we try to generalize the models they use across them, we find that something like “laws of physics” is a concise description. In that sense the laws of physics exist in the universe, but only as an abstraction over embedded agents of a certain level of complexity.
It is not clear whether any randomly generated world would necessarily get emergent patterns like that, but the one we live in does, at least to a degree. It is entirely possible that there is a limit to how accurate a model a tiny embedded agent can contain. For example, if most of the universe is truly random, we would never be able to understand those parts, and they would look like miracles to us, just something that pops up without any observable cause. Another possibility is that we might find some patterns that are regular but defy analysis. These would look to us like “magic”: something we know how to call into being, but that defies any rational explanation.
We certainly hope that the universe we live in contains neither miracles nor magic, but that is, in the end, an open empirical question. Neither would require any kind of divine power or dualism; they might just be features of our world.
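As a toy gesture at what “partially compressible” means (a sketch, not an argument): structured data admits a much shorter description, while truly random data, the would-be “miracle” parts, does not.

```python
# Toy illustration of "partially compressible from the inside": regular
# patterns compress well, truly random ones do not.
import os
import zlib

regular = bytes(i % 7 for i in range(10_000))  # a repeating pattern
noise = os.urandom(10_000)                     # incompressible randomness

for name, data in [("regular", regular), ("random", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```

A world that is a mix of both would be exactly one where embedded agents can model some parts and must shrug at the rest.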
Hence the one tweak I mentioned.
Ancient Greek Hell is doing fruitless labor over and over, never completing it.
Christian Hell is boiling oil, fire and brimstone.
The Good Place Hell is knowing you are not deserving and being scared of being found out.
Lucifer Hell is being stuck reliving the day you did something truly terrible over and over.
Actual Hell does not exist. But Heaven does and everyone goes there. The only difference is that the sinners feel terrible about what they did while alive, and feel extreme guilt for eternity, with no recourse. That’s the only brain tweak God does.
No one else tortures you, you can sing hymns all infinity long, but something is eating you inside and you can’t do anything about it. Sinners would be like everyone else most of the time, just subdued, and once in a while they would start screaming and try to harm or kill themselves, to no avail. “Sorry, no pain for you except the one that is eating you from inside. And no reprieve, either.”
As Patrick McKenzie has been saying for almost 20 years, “you can probably stand to charge more”.
Yeah, I think this is exactly what I meant. There will still be boutique usage for hand-crafted computer programs, just like there is now for penpals writing prettily decorated letters to each other. Granted, fax is still a thing in old-fashioned bureaucracies like Germany’s, so maybe there will be a requirement for “no LLM” code as well, but it appears much harder to enforce.
I think your point on infinite and cheap UI/UX customizations is well taken. The LLM will fit seamlessly one level below that. There will be no “LLM interface”, just interface.
Consider moral constructivism.
Thank you for your thoughtful and insightful reply! I think there is a lot more discussion that could be had on this topic, and we are not very far apart, but this is supposed to be a “shortform” thread.
I never liked The Simple Truth post, actually. I sided with Mark, the instrumentalist, whom Eliezer turned into what I termed back then an “instrawmantalist”. Though I am happy with the part
Rather recently, the show Devs, which, for all its flaws, has a bunch of underrated philosophical highlights, had an episode with a somewhat similar storyline.
Anyway, appreciate your perspective.