There is one problem with this. It is not entirely clear whether an ordinary living person would talk about consciousness if he were raised accordingly for his whole life (given no literature that mentions consciousness, never hearing anyone talk about qualia, and so on).
Causal interactions? The answer is rather trivial. In order for the separate meaningless Planck moments of the brain’s existence to be able to combine into “granules of qualia” that have integrity in time, they must be connected by something. It is usually assumed that there are causal relationships behind this, which can be likened to computational processes.
But many transhumanists, it seems to me, display a kind of doublethink here. They agree that two adjacent computational cycles of the brain’s work can combine into one sensation, but they refuse to assume the existence of more extended configurations of the same kind.
Why? Well, there may be two motives here: one correct and one not quite correct.
On the one hand, we have here a special case of the anthropic principle. It can be argued that the anthropic principle inevitably forces the individual Planck moments of my brain’s existence to merge into my current sense of self. But the anthropic principle will not necessarily force my current sense of self to merge with my future sense of self in the same way.
On the other hand, many transhumanists want to believe in the ease of implementing projects like “mind uploading.” If the life of consciousness does not represent a single track, then “mind uploading” will be much easier to implement. Therefore, many people like to believe in a kind of Buddhism, where I-now exists, but I-chronoblock does not exist.
Which of the motives drives you more is up to you to decide.
I don’t quite understand how actual infinity differs from potential infinity in this context. In the theory of relativity, time is considered one of the dimensions of spacetime. How can space be considered a “potential infinity”? It subjectively looks that way to a forward-traveling observer. But usually we use the paradigm of objective reality, where everything is assumed to exist equally, past and future included, if we recall relativity again. Are we supposed to have a special case here, where we need to switch to the paradigm of subjective reality?
I am familiar with the idea that “the information that enables us to act best is true”, but it seems to me to be just a beautiful phrase, because in most cases, in order to develop a model that enables us to act best, we still have to be guided by “truth” in the old, ordinary sense. That is, we obtain some initial “atoms of truth” through experience, but later we have to take care of their logical consistency. And we are not quite right to call some high-level construction “truth”, even if it works well, if it does not logically agree with the “atoms” we used to create it.
This case is free from that problem, since practical verification in this area is impossible. But still, the feeling of a certain hypocrisy before oneself does not disappear. To admit, at least at the edge of consciousness, the possibility that “the Universe is not X” and at the same time use only “the Universe is X” in calculations—there is some kind of contradiction in this. It is either an act of doublethink (for an agnostic) or an act of politeness (for an ultrafinitist).
The difference between repeating patterns could manifest itself if there were interactions between them. But here reality rather speaks in favor of ultrafinitism. If the hierarchy of complication of structures with interactions could continue to infinity (even at the cost of slowing down the interactions), then theoretically we could find ourselves at any level of the hierarchy. Then we most likely would not see the “bottom of the hierarchy” (Planck’s limit). However, we see it. Therefore, either the hierarchy is finite, or something prevents interactions between levels, or some factor prevents the emergence of too high-level observers.
However, only the first option directly speaks in favor of ultrafinitism. The second and third options—like your reasoning—are valid only in the paradigm of subjective reality.
“Heat death” is also the end of time only in the paradigm of subjective reality. Moreover, only for an anthropocentrically minded observer, from whose point of view one state of “white noise” is no different from another.
Before Einstein, in the era of Newtonian ideas about time, it was believed that the magnitude of the past that had already taken place could be infinitely large. St. Augustine disagreed with this, but he had rather religious reasons.
We can leave theology aside. It is not so important. I am more concerned with the questions of finitism and infinitism in relation to the paradox of sets.
Finitism is logically consistent. However, it seems to me that it suffers from the same problem as the ontological proof of the existence of God: it is an attempt to make a global prediction about the nature of the Universe based on a small thought experiment. Predictions like “Time cannot be infinite” and “Space cannot be infinite” follow directly from finitism. It turns out that we make these predictions based on our mathematical problems with the paradox of sets. At the same time, the paradox of sets itself resembles the paradox “I am telling a lie now”, and, it seems, its solution should be sought somewhere in the same area. Thinking off the cuff, it naively seems to me that the very concept of an “ordinary set” is composed in such a way as to lead to paradoxes. That is a problem with the concept of an “ordinary set”, not a problem of the existence or non-existence of physical infinity.
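For concreteness, the set paradox in question is Russell’s paradox, and its self-referential structure can be written out in one line (a minimal sketch in standard set-builder notation):

```latex
% Russell's paradox: let R be the set of all sets that are
% not members of themselves.
R \;=\; \{\, x \mid x \notin x \,\}
% Asking whether R is a member of itself yields a contradiction:
R \in R \;\Longleftrightarrow\; R \notin R
```

The contradiction comes from the self-referential definition, just as in the liar sentence, not from any assumption about physical infinity.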
Oh, okay. I don’t really understand this topic. But as far as I know, not all mathematicians are finitists. So it seems that the proofs of finitism are not flawless.
On the other hand, how is the problem of the set paradox solved in cosmological infinitism? Something like “The Infinite Universe may exist, but it is forbidden to talk about it as an object”? Because any attempt to do so will bring you back to the set paradox, if you take it seriously. “Talk about any particular part of the Universe as much as you like, but don’t even think about the Universe as a whole”? This risks forming a somewhat patchwork worldview. “It may exist, but you cannot think about it intelligently and rationally.” One is reminded of Zeno’s attempts to prove that one cannot think about motion without contradictions.
“An infinite universe can exist.”
“A greatest infinity cannot exist.”
I think there is some kind of logical contradiction here. If the Universe exists and is infinite, then it must correspond to the concept of “the greatest infinity.” True, Bertrand Russell once expressed doubt that one can reason correctly about the “Universe as a whole.” I don’t know; it seems strange to me. As if we recognized the existence of individual things, but not of all things as a whole. It looks like an arbitrary crutch, a private ad hoc solution, conditioned by the weakness of our brain.
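For reference, the claim that “a greatest infinity cannot exist” is Cantor’s theorem; a minimal sketch of the statement and of why it collides with a “set of all things” (the symbol U is mine, introduced for illustration):

```latex
% Cantor's theorem: every set S is strictly smaller than its
% power set P(S); hence there is no greatest cardinality.
|S| \;<\; |\mathcal{P}(S)|
% Applied to a hypothetical set U of all existing things:
% P(U) would have to be strictly larger than U, so U cannot
% already contain everything. This is the collision described
% in the comment above.
```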
As for God or Gods: hypothetically, if their value systems coincided and they interacted mentally according to a common agreed protocol, these problems should not be very important.
It seems to me that this is an attempt to sit on two chairs at once.
On the one hand, you assume that there are some discrete moments of our experience. But how long could such a moment be? It is unlikely to equal the Planck time. This means you assume that different chronoquanta of the brain’s existence are connected into one “moment of experience”. You postulate the existence of “granules of qualia” that have internal integrity and temporal extension.
On the other hand, you assume that these “granules of qualia” are separated from each other and are not connected into a single whole.
Why?
The first and second are weakly connected to each other.
If you believe that there is a mysterious “temporal mental glue” that connects the Planck moments of the brain’s existence into “granules of qualia” a split second long, then it is logical to assume that these “granules of qualia” are in turn connected by the same glue into a single stream of consciousness.
No?
Sorry, I feel a little like a bitter cynic and a religious fundamentalist. It seems to me that behind this kind of reasoning there often lies an unconscious desire to maintain faith in the possibility of “mind uploading” or similar operations. If our life is not a single stream, then mind uploading would be much easier to implement. That is why many modern intellectuals prefer such theories.
You can say that the connection of “granules of qualia” into a single stream of the observer’s existence does not make evolutionary sense. This is true. But the connection of individual Planck moments of the brain’s existence into “granules of qualia” also does not make evolutionary sense. If you assume that the first is somehow an illusion, then you can assume the same about the second.
There are several different aspects to this that I have different attitudes towards.
The multi-agent theory of consciousness is plausible. In fact, it is almost tautological. Any complex object can be considered “multi-agent”. “Agent” is not necessarily identical to “consciousness”. Otherwise, you know, you get homunculus recursion.
But there is another side to the issue.
The idea “You should force your brain to create new characters. You should mentally talk to these new characters. This will help you solve your psychological problems.”
There are not really many logical connections between the first and second.
People do often feel better doing this. But people also feel good when they read sci-fi and fantasy. People also feel good when they smoke weed.
Personally, I find that this approach stimulates paranoia in me.
It seems to me that the modern intellectual part of humanity is abnormally keen on potentially dangerous psychological practices: meditation, lucid dreaming, séances, channeling, Hellinger constellations.
What is the danger of these practices? Well, I have no serious, proven grounds for suspicion. Just a hypothetical wave of the hand: “If there really existed some sinister non-human forces that have always secretly manipulated humanity through a system of secret signs, earlier through religions and prophetic dreams, later through abductions and channeling, then the modern fascination of intellectuals with certain things would fit perfectly into this conspiracy scenario.”
Good evening. Sorry to bring up this old thread. Your discussion was very interesting. Specifically regarding this comment, one thing confuses me. Isn’t “the memory of an omniscient God” in this thought experiment the same as “the set of all existing objects in all existing worlds”? If your reasoning about the set paradox proves that “the memory of an omniscient God” cannot exist, doesn’t that prove that “an infinite universe” cannot exist either? Or is there a difference between the two? (Incidentally, I would like to point out that the universe and even the multiverse can be finite. Then an omniscient monotheistic God would not necessarily have infinite complexity. But for some reason many people forget this.)
Pascal’s Mugging.
The problem is that the probability “if I don’t pay this person five dollars, there will be a zillion sufferings in the world” existed before this person told you about it.
This probability has always existed.
Just as the probability “if I pay this person five dollars, there will be a zillion sufferings in the world” has always existed.
Just as the probability “if I raise my right hand, the universe will disappear” has always existed.
Just as the probability “if I don’t raise my right hand, the universe will disappear” has always existed.
You can justify absolutely any action in this way.
This is how obsessive-compulsive disorders work.
What equally strongly supports any strategy actually supports no strategy.
These probabilities cancel each other out. And the fact that we know the possible pragmatic reason for the words of the person who asks us for five dollars makes the probability of his words being true lower than the opposite probability.
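The cancellation can be made explicit with a small expected-utility calculation; the symbols are mine, introduced purely for illustration: Z is the threatened astronomical disutility, p the prior probability that refusing causes it, q the prior probability that paying causes it.

```latex
\mathrm{EU}(\mathrm{pay}) \;=\; -5 - qZ
\qquad
\mathrm{EU}(\mathrm{refuse}) \;=\; -pZ
% Difference between the two strategies:
\mathrm{EU}(\mathrm{refuse}) - \mathrm{EU}(\mathrm{pay}) \;=\; 5 + (q - p)Z
% With symmetric priors (p = q) the astronomical terms cancel
% exactly, and refusing wins by five dollars. If the mugger's
% pragmatic motive makes his claim less likely than its opposite
% (p < q), refusing wins by even more.
```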
This is a logical vicious circle. Morality itself is the handmaiden of humans (and similar creatures in fantasy and SF). Morality has value only insofar as we find it important to care about human and quasi-human interests. This does not answer the question “Why do we care about human and quasi-human interests?”
One could try to find an answer in the prisoner’s dilemma. In the logic of Kant’s categorical imperative. Cooperation of rational agents and the like. Then I should sympathize with any system that cares about my interests, even if that system is otherwise like the Paperclipmaker and completely devoid of “unproductive” self-reflection. Great. There is some cynical common sense in this, but I feel a little disappointed.
The holy problem of qualia may actually be close to the question at hand here.
What do you mean when you ask yourself: “Does my neighbor have qualia?”
Do you mean: “Does my neighbor have the same experiences?” No. You know for sure that the answer is “No.” Your brains and minds are not connected. What’s going on in your neighbor’s head will never be your experiences. It doesn’t matter whether it’s (ontologically) magical blue fire or complex neural squiggles. Your experiences and your neighbor’s brain processes are different things anyway.
What do you mean when you ask yourself: “Are my neighbor’s brain processes similar to my experiences?” What degree of similarity or resemblance do you mean?
Some people think that this is purely a value question. It is an arbitrary decision by a piece of the Universe about which other pieces of the Universe it will empathize with.
Yes, some people try to solve this question through Advaita. One can try to view the Universe as a single mind suffering from dissociative disorder. I know that if my brain and my neighbor’s brain are connected in a certain way, then I will feel his suffering as my suffering. But I also know that if my brain and an atomic bomb are connected in a certain way, then I will feel the thermonuclear explosion as an orgasm. Should I empathize with atomic bombs?
We can try to look at the problem a little differently. The main difference between my sensation of pain and my neighbor’s sensation of pain is the individual neural encoding. But I do not sense the neural encoding of my sensations. Or I do not sense that I sense it. If you make a million copies of me, whose memories and sensations are translated into different neural encodings (while maintaining informational identity), then none of them will be able to say with certainty what neural encoding it currently has. Perhaps, when analyzing the question “what is suffering”, we should discard the aspect of individual neural encoding. That is, suffering is any material process that would become suffering for me if it were translated into my neural encoding within the framework of certain translation technologies.
But the devil is in the details. Again, “certain translation technologies” could make me perceive the explosion of an atomic bomb as an orgasm. On the other hand, an atomic bomb is something that I could not be, even hypothetically (unlike the thought experiment with a million copies). But, from yet another angle, I cannot be my neighbor either (we have different memories).
This is a very difficult and subtle question indeed. I do not want to appear to be an advocate of egoism and loneliness (I have personal reasons not to be). But this, in my opinion, is an aspect of the question that cannot be ignored.
The results of these tests have a much simpler explanation. Let’s say we played a prank on all of humanity. We slipped each person a jar of caustic bitter quinine under the guise of delicious squash caviar. A week later, we conduct a mass social survey: “How much do such pranks irritate you?” It is natural to expect that the people who tend to eat any food quickly, without immediately paying attention to its smell and taste, will show the strongest hatred for such things. This will not mean that they are quinine lovers. But it will mean that they mistakenly managed to eat some quinine before their body detected the substitution. Therefore, they became especially angry and became “quininephobes”.
You seem like a very honest and friendly person, as do most of the people in this thread. I would just say, “What difference does it make whether it’s a bug or a feature? Maybe the admins themselves haven’t agreed on this. Maybe some admins think it’s a bug, and some admins think it’s a feature. It’s a gray area. But in any case, I’d rather not draw the admins’ attention to what’s going on, because then their opinion might be determined in a way that’s not favorable to us. We’re not breaking any rules while this is a gray area. But our actions will become a violation of the rules if the gray area is no longer a gray area.”
I have the opposite opinion regarding human motives. Ten years ago, I was thinking about this while corresponding with an acquaintance. I came to this conclusion: “Maybe what she tells me about her relationship with her boyfriend is a false representation. But what I say and think can also be a false representation. If we are all false representations, then what is truth and what is false? It would be better for me, as a liar and as an incarnate lie, to support my own kind. At least it would be an act of solidarity.”
Hume’s quote (or rather the way you use it) has nothing to do with models of reality. Your post is not about the things Scott was talking about from the very beginning.
Suppose I say “Sirius is a quasar.” I am relying on the generally accepted meaning of the word “quasar.” My words suggest that the interlocutor change the model of reality. My words are a hypothesis. You can accept this hypothesis or reject it.
Suppose the interlocutor says “Sirius cannot be considered a quasar because it would have very bad social consequences.” Perhaps he is making a mistake. For the reasons you described in your text. (To be honest, I am not sure that this is a mistake. But I realize that I am writing this text on a resource for noble crazy idealists, so I will not delve deeply into this issue. Let’s assume that this is indeed a mistake.)
Suppose I say “Let’s consider stars like Sirius to be quasars.” Is this sentence similar to the previous one? No. I am not suggesting that the other person change their model of reality. My words are not a hypothesis. They are just a project. They are just a proposal for certain actions.
Suppose the other person says “If we use the word ‘quasar’ in this way, it will have very bad social consequences.” Is his logic sound? In my opinion, yes. My proposal does not suggest that anyone change their model of reality. It is a proposal for a specific practical action. It is as if I suggested: “Let’s sing the National Anthem while walking.” If the other person says: “If you sing the National Anthem while walking, it will lead to terrible consequences” (and if he can prove it), is he wrong?
Sorry for the possible broken language.
I write through an online translator.
The described world leaves a mixed impression. The ability to get rid of the unsolicited influence of time is very valuable. But at the same time, there is an aspect of deceptiveness here. When reading, I felt the bitter laughter of a religious fundamentalist inside me. You know, there are people who constantly accuse modern Western technocratic civilization of hypocritical infantilism and of trying to forget about the existence of death.
“These naive hedonists try to forget about the Grim Reaper. But they haven’t really beaten him. If you get into a car accident and die of your wounds, then at that moment you will know that your current consciousness will disappear forever. Then another person wakes up in the hospital, one who does not remember the current moment. And there will continue to be many such deaths.”
That’s the trouble. Redaction Machines do not destroy death and suffering; they just make them invisible. There’s a catch here. These machines gave humanity the dangerous illusion of immortality. As a result, humanity has even stopped developing normal gerontology. The heroine of the story moves into an increasingly distant future, but medicine seems to hardly develop at all. Naturally, why should people develop it? After all, all responsible decisions are made by people whose memory does not preserve death and suffering. Humanity is essentially divided into two factions: 1) those who think that everything is fine; 2) those who suffer and die, but whose memories will disappear, so those versions of consciousness will not be able to influence the policy of distributing the state budget. It’s like in the movie “The Prestige,” where the decision “Repeat the trick with the drowning?” was made each time only by the surviving copy.
If we talk about the quote at the beginning, then its final conclusion seems to me not entirely correct.
What the vast majority of people mean by “emotions” is different from “rational functions of emotions”. Yudkowsky in his essay on emotions is playing with words, using terms that are not quite traditional.
Fear is not “I calmly foresee the negative consequences of some actions and therefore I avoid them.”
Fear is rather “The thought of the possibility of some negative events makes me tremble, I have useless reflections, I have cognitive distortions that make me unreasonably overestimate (or, conversely, sometimes underestimate) the probability of these negative events, I begin to feel aggression towards sources of information about the possibility of these negative events (and much more in the same spirit).”
Emotions in the human understanding are not at all the same as the rational influence of basic values on behavior in Yudkowsky’s interpretation.
Emotions in the human understanding are, first of all, a mad hodgepodge of cognitive distortions.
Therefore, when Yudkowsky says something like “Why do you think that AI will be emotionless? After all, it will have values!”, I even see some manipulation here. Well, yes, AI will have values influencing its behavior. But at the same time, it will not get nervous, freak out, or experience the halo effect. This is absolutely not what a normal ordinary person would call emotions. In fact, Yudkowsky’s imaginary opponents are closer to the truth here when they depict AI as dispassionate and emotionless (because the uniform influence of values on behavior, without peaks and troughs, should look exactly like that).
Does it matter?
It depends. When communicating with ordinary people, we are used to exploiting their cognitive distortions and our own. When talking to a person, you know that you can suddenly change the topic of conversation and influence the interlocutor’s emotions. In communication with an AI (one powerful enough to have modified itself well), none of this will work. It is like trying to outsmart God.
Therefore, it seems to me that a person who tunes himself to the thought “I am communicating with an impassive inhuman being” will in some sense be closer to the truth (or will at least have fewer false subconscious hopes) than a person who tunes himself to the thought “I am communicating with the same living, emotional, sympathetic subject that I am.” But this is context-dependent.