Scientist by training, coder by previous session, philosopher by inclination, musician against public demand.
TAG
And one of Wallace’s axioms, which he calls ‘branching indifference’, essentially says that it doesn’t matter how many branches there are, since macroscopic differences are all that we care about for decisions.
The macroscopically different branches and their weights?
Focussing on the weight isn’t obviously correct, ethically. You can’t assume that the answer to “what do I expect to see” will work the same as the answer to “what should I do”. Is-ought gap and all that.
It’s tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is a matter of multiplying subjective value by probability. It seems reasonable to assess the moral weight of someone else’s experiences and existence from their point of view. (Edit: also, our experiences seem fully real to us, although we are unlikely to be in a high measure world.) That is the intuition behind the common rationalist/utilitarian/EA view that human lives don’t decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
There is a quandary here: sticking to the usual “adds up to normality” principle, as an a priori axiom, means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way... even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don’t get to discount someone’s moral worth on the basis of factors that aren’t intrinsic to them, means you should not discount... and that the usual decision theory does not apply.
Basically, there is a tension between four things Rationalists are inclined to believe in:-
- Some kind of MWI is true.
- Some utilitarian and universalist ethics is true.
- Subjective things like suffering are ethically relevant. It’s not all about number of kittens.
- It’s all business as normal... it all adds up to normality... fundamental ontological differences should not affect your decision theory.
According to the many-worlds interpretation (MWI) of quantum mechanics, the universe is constantly splitting into a staggeringly large number of decoherent branches containing galaxies, civilizations, and people exactly like you and me.
There is more than one many worlds interpretation. The version stated above is not known to be true.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible. Coherent splitting gives you the very large numbers of “worlds”... except that they are not worlds, conceptually.
Many worlders are pointing at something in the physics and saying “that’s a world”... but whether it qualifies as a world is a separate question, and a separate kind of question, from whether it is really there in the physics. One would expect a world, or universe, to be large, stable, non-interacting, objective and so on. A successful MWI needs to jump three hurdles: mathematical correctness, conceptual correctness, and empirical correctness.
Decoherent branches are expected to be large, stable, non-interacting, objective and irreversible... everything that would be intuitively expected of a “world”. But there is no empirical evidence for them, nor are they obviously supported by the core mathematics of quantum mechanics, the Schrödinger equation. Coherent superpositions are small scale, down to single particles, observer dependent, reversible, and continue to interact (strictly speaking, interfere) after “splitting”.
(Note that Wallace has given up on the objectivity of decoherent branches. That’s another indication that MWI is not a single theory).
There isn’t the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event—that would be combining the frequency of coherent-style splitting with the finality of decoherent splitting. We don’t know much about decoherence, but we know it is a multi-particle process that takes time, so decoherent splitting, if there is such a thing, must be rarer than the frequency of single particle interactions. (And so decoherence isn’t simple.) As well as the conceptual incoherence, there is in fact plenty of evidence—e.g. the existence of quantum computing—that it doesn’t work that way.
Also see
I’m not going to argue for this view as that was done very well by Eliezer in his Quantum Physics.
Which view? Everett’s view? DeWitt’s view? Deutsch’s view? Zeh’s view? Wallace’s view? Saunders’ view?
I feel like branches being in fact an uncountable continuum is essentially a given
Decoherent branches being countable, uncountable, or anything else is not a given, since there is no established theory of decoherence.
It’s a given that some observables have continuous spectra... but what’s that got to do with splitting? An observed state that isn’t sharp (in some basis) can get entangled with an apparatus, which then goes into a non-sharp state, and so on. And the whole shebang never splits, or becomes classically sharp.
I mean that the amount of universes that is created will be created anyway, just as a consequence of time passing. So it doesn’t matter anyway. If your actions e.g. cause misery in 20% of those worlds, then the fraction is all that matters; the worlds will exist anyway, and the total amount is not something you’re affecting or controlling.
That’s a special case of “no moral responsibility under determinism”, which might be true, but it’s very different from “utilitarianism works fine under MWI”.
**Enough of the physics confusions—onto the ethics confusions!**
As well as confusion over the correct version of many worlds, there is of course confusion about which theory of ethics is correct.
There are broadly three areas where MWI has ethical implications. One is concerned with determinism, freedom of choice, and moral responsibility. One is over the fact that MW means low probability events have to happen every time—as opposed to single universe physics, where they usually don’t. The other is over whether they are discounted in moral significance for being low in quantum mechanical measure or probability.
MWI and Free Will
MWI allows probabilities of world states to change over time, but doesn’t allow them to be changed, in a sense amounting to libertarian free will. Agents are just part of the universal wave function, not anything outside the system, or operating by different rules. MWI is, as its proponents claim, a deterministic theory, and it only differs from single world determinism in that possible actions can’t be refrained from, and possible futures can’t be avoided. Alternative possibilities are realities, in other words.
MWI, Moral Responsibility, and Refraining.
A standard argument holds that causal determinism excludes libertarian free will by removing alternative possibilities. Without alternative possibilities, you could not have done other than you did, and, the argument goes, you cannot be held responsible for what you had no choice but to do.
Many worlds strongly implies that you make all possible decisions: according to David Deutsch’s argument, that means it allows alternative possibilities, and so removes the objection to moral responsibility, despite being a basically deterministic theory.
However, deontology assumes that performing a required act involves refraining from alternatives... and that it is possible to refrain from forbidden acts. Neither is possible under many worlds. Many worlds creates the possibility, indeed the necessity, of doing otherwise, but removes the possibility of refraining from an act. So even though many worlds allows alternative possibilities, unfortunately for Deutsch’s argument, other aspects create a similar objection on the basis of moral responsibility: why would you hold someone morally responsible for an act if they could not refrain from it?
MWI, Probability, and Utilitarian Ethics
It’s tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is a matter of multiplying subjective value by probability. One wrinkle is that QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI. Another is that it is reasonable to assess the moral weight of someone else’s experiences and existence from their point of view. That is the intuition behind the common rationalist/utilitarian/EA view that human lives don’t decline in moral worth with distance. So why should they decline with lower quantum mechanical measure? There is a quandary here: sticking to the usual “adds up to normality” principle, as an a priori axiom, means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way, even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don’t get to discount someone’s moral worth on the basis of factors that aren’t intrinsic to them, means you should not discount... and that the usual decision theory does not apply.
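As a minimal sketch of the machinery in question (textbook form, nothing specific to this discussion): expected utility weights each outcome’s value by its probability, and the Everettian decision-theoretic move is to put Born-rule measure in the probability slot:

$$EU(a)=\sum_i p_i\,u(o_i)\;\;\longrightarrow\;\;EU(a)=\sum_i \lvert c_i\rvert^2\,u(o_i)$$

The ethical question raised above is whether that substitution is legitimate once the $\lvert c_i\rvert^2$ no longer say which single outcome occurs, but only how much weight each co-existing branch carries.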
Measure is not probability.
Mathematically, quantum mechanical measure—amplitude—isn’t ordinary probability, which is why you need the Born rule. The point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically, over a run of experiments. Ontologically, it is also not probability, because it does not represent the likelihood of one thing happening instead of another. And it has its own role, unlike that of ordinary probability, which is explaining how much contribution each component state makes to a coherent superposition (although what that means in the case of irrevocably decohered branches is unclear).
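For reference, a minimal statement of the rule being discussed: writing a state in some orthonormal basis, the Born rule converts amplitudes into the ordinary probabilities that get tested, frequentistically, over repeated runs:

$$\lvert\psi\rangle=\sum_k c_k\,\lvert k\rangle,\qquad P(k)=\lvert c_k\rvert^2,\qquad \sum_k \lvert c_k\rvert^2=1$$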
Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
The Ethical Weight of Low Measure Worlds
MWI creates the puzzle that low probability outcomes still happen, and have to be taken into account ethically. Many rationalists assume that they simply matter less, because that is the only way to restore anything like a normal view of ethical action—but one should not assume something merely because it is convenient.
It can be argued that most decision theoretic calculations come out the same under different interpretations of QM... but altruistic ethics is different. In standard decision theory, you can tell directly how much utility you are getting; but in altruistic ethics, you are not measuring your own suffering/happiness, you are assessing someone else’s... and in the many worlds setting, that means solving the problem of how they are affected by their measure. It is not clear how low measure worlds should be considered in utilitarian ethics. It’s tempting to ethically discount low measure worlds in some way, because that most closely approximates conventional single world utilitarianism. The alternative might force one to the conclusion that overall good outcomes are impossible to attain, so long as one cannot reduce the measure of worlds full of suffering to zero. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
One part of the problem is that QM measure isn’t probability, because all possible branches occur in MWI. Another stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.
Suppose that low measure worlds are discounted ethically. If people in low measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance, and the discount is unjustified. But if people in low measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower, and the discount is justified. A point against discounting is that our experiences seem fully real to us, although we are unlikely to be in a high measure world.
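To make the two options concrete (my notation, purely illustrative): if a branch has measure $m$ and contains suffering $S$, the discounting view assigns it moral weight $m\cdot S$, while the non-discounting view assigns it $S$ whenever the branch is actually inhabited:

$$W_{\text{discounted}}=m\cdot S\qquad\text{vs.}\qquad W_{\text{undiscounted}}=S$$

So a hell-branch of measure 0.01 counts for 1% of its suffering on the first view, and all of it on the second.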
A similar, but slightly less obvious, argument applies to causing death. Causing the “death” of a complete zombie is presumably as morally culpable as causing the death of a character in a video game... which, by common consent, is no problem at all. So… causing the death of a 50% zombie would be only half as bad as killing a real person... maybe.
Classical Measure isn’t Quantum Mechanical Measure
A large classical universe is analogous to Many Worlds in that the same structures—the same people and planets—repeat over long distances. It’s even possible to define a measure, by counting repetitions up to a certain level of similarity. And one has the option of thinking about quantum mechanical measure that way, as a “head count”... but one is not forced to do so. On one hand, it features normality; on the other hand, it is not “following the maths”, because there’s nothing in the formalism to suggest summing a number of identical low measure states is the only way to get a high measure one. So, again, it’s an extraneous assumption, and circular reasoning.
Ethical Calculus is not Decision Theory
Of course, MWI doesn’t directly answer the question about consciousness and zombiehood. You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong. And you can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.
Altruistic ethics is different. You don’t have either kind of direct evidence, because you are concerned with other people’s subjective sensations, not objective evidence, or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third person observations.
“But it all adds up to normality!”
If “it all” means every conjecture you can come up with, no, it doesn’t. Most conjectures are wrong. The point of empirical testing is to pick out the right ones—the ones that make correct predictions, save appearances, add up to normality. That’s a difficult process, not something you get for free.
So “it all adds up to normality” is not some universal truth. And ethical theories relating to someone else’s feelings are difficult to test, especially if someone else is in the far future, or an unobservable branch of the multiverse. Testability isn’t an automatic given either.
There are no major ethical implications at all...Wallace makes a similar claim in his book: “But do [the many worlds in MWI] matter to ordinary, banal thought, action and language? Friendship is still friendship. Boredom is still boredom. Sex is still sex
That’s very narrow circle ethics, if it’s ethics at all—he just likes a bunch of things that impact him directly. And it’s rather obvious that small circle ethical theories have the least interaction with large universe physical theories. So it is likely he hasn’t even considered the question of altruistic ethics in many worlds, and is therefore coming to the conclusion that it all adds up to normality rather cheaply. It’s his ethical outlook that is the structural element, not his take on MWI.
Every quantum event splits the multiverse, so my measure should decline by 20 orders of magnitude every second.
There isn’t the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event—that would be combining the frequency of coherent-style splitting with the finality of decoherent splitting. As well as the conceptual incoherence, there is in fact plenty of evidence—e.g. the existence of quantum computing—that it doesn’t work that way.
“David Deutsch, one of the founders of quantum computing in the 1980s, certainly thinks that it would. Though to be fair, Deutsch thinks the impact would “merely” be psychological – since for him, quantum mechanics has already proved the existence of parallel universes! Deutsch is fond of asking questions like the following: if Shor’s algorithm succeeds in factoring a 3000-digit integer, then where was the number factored? Where did the computational resources needed to factor the number come from, if not from some sort of “multiverse” exponentially bigger than the universe we see? To my mind, Deutsch seems to be tacitly assuming here that factoring is not in BPP – but no matter; for purposes of argument, we can certainly grant him that assumption. It should surprise no one that Deutsch’s views about this are far from universally accepted. Many who agree about the possibility of building quantum computers, and the formalism needed to describe them, nevertheless disagree that the formalism is best interpreted in terms of “parallel universes.” To Deutsch, these people are simply intellectual wusses – like the churchmen who agreed that the Copernican system was practically useful, so long as one remembers that obviously the Earth doesn’t really go around the sun. So, how do the intellectual wusses respond to the charges? For one thing, they point out that viewing a quantum computer in terms of “parallel universes” raises serious difficulties of its own. In particular, there’s what those condemned to worry about such things call the “preferred basis problem.” The problem is basically this: how do we define a “split” between one parallel universe and another? There are infinitely many ways you could imagine slicing up a quantum state, and it’s not clear why one is better than another! One can push the argument further. The key thing that quantum computers rely on for speedups – indeed, the thing that makes quantum mechanics different from classical probability theory in the first place – is interference between positive and negative amplitudes. But to whatever extent different “branches” of the multiverse can usefully interfere for quantum computing, to that extent they don’t seem like separate branches at all! I mean, the whole point of interference is to mix branches together so that they lose their individual identities. If they retain their identities, then for exactly that reason we don’t see interference. Of course, a many-worlder could respond that, in order to lose their separate identities by interfering with each other, the branches had to be there in the first place! And the argument could go on (indeed, has gone on) for quite a while. Rather than take sides in this fraught, fascinating, but perhaps ultimately meaningless debate...” (Scott Aaronson, QCSD, p. 148)
Also see
It seems common for people trying to talk about AI extinction to get hung up on whether statements derived from abstract theories containing mentalistic atoms can have objective truth or falsity values. They can. And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another, that we can know whether something that lives in a physical substrate that is unlike ours is conscious, and that there can be some degree of objective truth as to what is valuable [not that all beings that are merely intelligent will necessarily pursue these things], it in fact becomes much more natural to make clear statements and judgments in the abstract or general case, about what very smart non-aligned agents will in fact do to the physical world.
Why does any of that matter for AI safety? AI safety is a matter of public policy. In public policy making, you have a set of preferences, which you get from votes or surveys, and you formulate policy based on your best objective understanding of cause and effect. The preferences don’t have to be objective, because they are taken as given. It’s quite different to philosophy, because you are trying to achieve or avoid something, not figure out what something ultimately is. You don’t have to answer Wolfram’s questions in their own terms, because you can challenge the framing.
And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another,
It’s not all that relevant to AI safety, because an AI only needs some potentially dangerous capabilities. Admittedly, a lot of the literature gives the opposite impression.
that we can know whether something that lives in a physical substrate that is unlike ours is conscious,
You haven’t defined consciousness and you haven’t explained how. It doesn’t follow automatically from considerations about intelligence. And it doesn’t follow from having some mentalistic terms in our theories.
and that there can be some degree of objective truth as to what is valuable
there doesn’t need to be. You don’t have to solve ethics to set policy.
Arguably, “basic logical principles” are those that are true in natural language.
That’s where the problem starts, not where it stops. Natural language supports a bunch of assumptions that are hard to formally reconcile: if you want your strict PNC, you have to give up on something else. The whole 2500 year history of logic has been a history of trying to come up with formal systems that fulfil various desiderata. It is now formally proven that you can’t have all of them at once, and it’s not obvious what to keep and what to ditch. (Gödelian problems can be avoided with lower power systems, but that’s another tradeoff, since high power is desirable).
Formalists are happy to pick a system that’s appropriate for a practical domain, and to explore the theoretical properties of different systems in parallel.
Platonists believe that only one axiom system has truth in addition to usefulness, but can’t agree which one it is, so it makes no difference in practice.
I’m not seeing a specific problem with sets—you can avoid some of the problems of naive set theory by adding limitations, but that’s tradeoffs again.
Otherwise nothing stops us from considering absurd logical systems where “true and true” is false, or the like.
“You can’t have all the intuitive principles in full strength in one system”
doesn’t imply
“adopt unintuitive axioms”.
Even formalists don’t believe all axiomatisations are equally useful.
Likewise, “one plus one is two” seems to be a “basic mathematical principle” in natural language.
What’s 12+1?
Any axiomatization which produces “one plus one is three” can be dismissed on grounds of contradicting the meanings of terms like “one” or “plus” in natural language.
They’re ambiguous in natural language, hence the need for formalisation.
The trouble with set theory is that, unlike logic or arithmetic, it often doesn’t involve strong intuitions from natural language.
It involves some intuitions. It works like clubs. Being a senator is being a member of a set, not exemplifying a universal.
Sets are a fairly artificial concept compared to natural language collections (empty sets, for example, can produce arbitrary nestings), especially when it comes to infinite sets.
If you want finitism, you need a principled way to select a largest finite number.
However, I find myself appealing to basic logical principles like the law of non-contradiction.
The law of non-contradiction isn’t true in all “universes”, either. It’s not true in paraconsistent logic, specifically.
Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.
@Logan Zoellner being wrong doesn’t make anyone else right. If the actual argument is conjunctive and complex, then all the component claims need to be high probability. That is not the case. So Logan is right for not quite the right reasons—it’s not length alone.
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof that AIs will be value-driven agents, etc, etc. You skipped over 99% of the Platonic argument there.
Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument.
And it wouldn’t help anyway. I have read the Sequences, and there is nothing resembling a proof, or even a strong argument, for the claim about coherent human values. Ditto the standard claims about utility functions, agency, etc. Reading the Sequences would allow him to understand the LessWrong collective, but should not persuade him.
Whereas the same amount of time could, more reasonably, be spent learning how AI actually works.
Needless to say, that request does not have anything to do with effectively tracking reality,
Tracking reality is a thing you have to put effort into, not something you get for free, by labelling yourself a rationalist.
The original Sequences did not track reality, because they are not evidence based—they are not derived from academic study or industry experience. Yudkowsky is proud that they are “derived from the empty string”—his way of saying that they are armchair guesswork.
His armchair guesses are based on Bayes, von Neumann rationality, utility maximisation, brute force search, etc., which isn’t the only way to think about AI, or particularly relevant to real world AI. But it does explain many doom arguments, since they are based on the same model—the kinds of argument that immediately start talking about values and agency. But of course that’s a problem in itself. The short doomer arguments use concepts from the Bayes/von Neumann era in a “sleepwalking” way, out of sheer habit, given that the basis is doubtful. Current examples of AIs aren’t agents, and it’s doubtful whether they have values. It’s not irrational to base your thinking on real world examples, rather than speculation.
In addition, they haven’t been updated in the light of new developments, something else you have to do to track reality. Tracking reality has a cost—you have to change your mind and admit you are wrong. If you don’t experience the discomfort of doing that, you are not tracking reality.
People other than Yudkowsky have written about AI safety from the perspective of how real world AIs work, but adding that in just makes the overall mass of information larger and more confusing.
where there is no “platonic” argument for any non-trivial claim describable in only two sentence, and yet things continue to be true
You are confusing truth and justification.
You need to say something about motivation.
There are dozens of independent ways in which AI can cause a mass extinction event at different stages of its existence.
While each may have around a 10 percent chance a priori, cumulatively there is more than a 99 percent chance that at least one bad thing will happen.
Same problem. Yes, there’s lots of means. That’s not the weak spot. The weak spot is motivation.
Same problem. You’ve done nothing to fill the gap between “ASI will happen” and “ASI will kill us all”.
As other people have said, this is a known argument; specifically, it’s in The Generalized Anti-Zombie Principle in the Physicalism 201 series, from the very early days of LessWrong.
Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”
I think this proof relies on three assumptions. The first (which you address in the post) is that consciousness must happen within physics. (The opposing view would be substance dualism where consciousness causally acts on physics from the outside.) The second (which you also address in the post) is that consciousness and reports about consciousness aren’t aligned by chance. (The opposing view would be epiphenomenalism, which is also what Eliezer trashes extensively in this sequence.) A physical duplicate might do the same, although that would imply the original’s consciousness is epiphenomenal. Which is itself a reason to disbelieve in p-zombies, although not an impossibility proof.
This of course contradicts the Generalised Anti-Zombie Principle announced by Eliezer Yudkowsky. The original idea was that in a zombie world, it would be incredibly unlikely for an entity’s claims of consciousness to be caused by something other than consciousness.
Excluding coincidence doesn’t prove that an entity’s reports of consciousness are directly caused by its own consciousness. Robo-Chalmers will claim to be conscious because Chalmers does. It might actually be conscious, as an additional reason, or it might not. The fact that the claim is made does not distinguish the two cases. Yudkowsky makes much of the fact that Robo-Chalmers’ claim would be caused indirectly by consciousness—Chalmers has to be conscious in order to make a computational duplicate of his consciousness—but at best that refutes the possibility of a zombie world, where entities claim to be conscious although consciousness has never existed. Robo-Chalmers would still be possible in this world, for reasons Yudkowsky accepts. So there is one possible kind of zombie, even given physicalism, so the Generalised Anti-Zombie Principle is false.
(Note that I am talking about computational zombies, or c-zombies, not p-zombies.
Computationalism isn’t a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That’s the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn’t imply computationalism, and arguments against p-zombies don’t imply the non-existence of c-zombies: duplicates that are identical computationally, but not physically.)
That sounds like a Chalmers paper. https://consc.net/papers/qualia.html
Argument length is substantially a function of shared premises
A stated argument could have a short length if it’s communicated between two individuals who have common knowledge of each other’s premises... as opposed to the “Platonic” form, where every load-bearing component is made explicit, and there is nothing extraneous.
But that’s a communication issue....not a truth issue. A conjunctive argument doesn’t become likelier because you don’t state some of the premises. The length of the stated argument has little to do with its likelihood.
How true an argument is, how easily it persuades another person, and how easy it is to understand have little to do with each other.
The likelihood of an ideal argument depends on the likelihood of its load-bearing premises... both how many there are, and their individual likelihoods.
Public communication, where you have no foreknowledge of shared premises, needs to keep the actual form closer to the Platonic form.
Public communication is obviously the most important kind when it comes to avoiding AI doom.
This is important, because the longer your argument, the more details that have to be true, and the more likely that you have made a mistake
Correct. The fact that you don’t have to explicitly communicate every step of an argument to a known recipient doesn’t stop the overall probability of a conjunctive argument from depending on the number, and individual likelihood, of the steps of the Platonic version, where everything necessary is stated and nothing unnecessary is stated.
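A toy calculation (my numbers, purely illustrative): if the Platonic version of an argument has $n$ independent load-bearing premises, each with probability $p$, then the conjunction of the premises has probability $p^n$, and that is all the support the argument provides:

$$P(\text{all premises})=\prod_{i=1}^{n}P(\text{premise}_i)=p^{\,n},\qquad\text{e.g. }0.9^{10}\approx 0.35$$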
Argument strength is not an inverse function with respect to argument length, because not every additional “piece” of an argument is a logical conjunction which, if false, renders the entire argument false.
Correct. Stated arguments can contain elements that are explanatory, or otherwise redundant for an ideal recipient.
Nonetheless, there is a Platonic form that does not contain redundant elements or unstated load-bearing steps.
Anyways, the trivial argument that AI doom is likely [...] is that it’s not going to have values that are friendly to humans
That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof that AIs will be value-driven agents, etc, etc. You skipped over 99% of the Platonic argument there.
This is a classic example of failing to communicate with people outside the bubble. Your assumptions about values and agency just aren’t shared by the general public or political leaders.
PS.
A fact cannot be self evidently true if many people disagree with it.
That’s self evidently true. So why does it have five disagreement downvotes?
I mean that if turing machine is computing universe according to the laws of quantum mechanics,
I assume you mean the laws of QM except the collapse postulate.
observers in such universe would be distributed uniformly,
Not at all. The problem is that their observations would mostly not be in a classical basis.
not by Born probability.
Born probability relates to observations, not observers.
So you either need some modification to current physics, such as mangled worlds,
Or collapse. Mangled worlds is kind of a nothing burger—it’s a variation on the idea that interference between superposed states leads to both a classical basis and the Born probabilities, which is an old idea, but without making it any more quantitative.
or you can postulate that Born probabilities are truly random.
??
One might be determined to throw in the towel on cognitive effort if they were to take a particular interpretation of determinism, and they, and the rest of us, would be worse off for it.
Determinists are always telling each other to act like libertarians. That’s a clue that libertarianism is worth wanting. @James Stephen Brown
Compatibilist free will has all the properties worth wanting: your values and beliefs determine the future, to the extent you exert the effort to make good decisions.
No it doesn’t, because it doesn’t have the property of being able to shape the future, or steer towards a future that wasn’t inevitable. Which is pretty important if you are trying to avoid the AI-kills-everyone future.
Libertarian free will is able to do that.
Naturalistic libertarianism appeals to some form of indeterminism, or randomness, inherent in physics, rather than a soul or ghost-in-the-machine unique to humans, that overrides the physical behaviour of the brain. The problem is to explain how indeterminism does not undermine other features of a kind of free will “worth wanting”—purposiveness, rationality and so on.
Randomness is not what we want
Explaining NLFW in terms of “randomness” is difficult, because the word has connotations of purposelessness, meaninglessness, and so on. But these are only connotations, not strict implications. “Not determinism” doesn’t imply lack of reason, purpose, or control. It doesn’t have to separate you from your beliefs and values. Therefore, I prefer the term “indeterminism” over the term “randomness”.
So, how to explain that indeterminism does not undermine the other features of a kind of free will “worth wanting”?
Part of the answer is to note that mixtures of indeterminism and determinism are possible, so that libertarian free will is not just pure randomness, where any action is equally likely.
Another part is proposing a mechanism, with indeterminism occurring at different places and times, rather than being slathered evenly over neural activity.
Another part is noting that control doesn’t have to mean predetermination.
Another part is noticing that a choice between things you wish to do cannot leave you doing something you do not wish to do, something unconnected to your desires and beliefs.
The basic mechanism is that the unconscious mind proposes various ideas and actions, which the conscious mind decides between. This is similar to the mechanism described by the determinist Sam Harris. He makes much of the fact that the conscious mind, the executive function, does not predetermine the suggestions: I argue that the choice between them, the decision to act on one rather than another, *is* conscious control—and conscious control clearly exists in healthy adults.
I noticed the same thing—even Scott Alexander dropped a reference to it without explaining it. Anyway, here’s what I came up with:-
https://www.reddit.com/r/askphilosophy/s/lVNnjhTurI
(That’s me done for another two days)
You are a subject, and you determine your own future
Not so much, given determinism.
Determinism allows you to cause the future in a limited sense. Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn’t allow you to control the future in any sense other than causing it (and the sense in which you are causing the future is just the sense in which any future state depends on causes in the past—it is nothing special and nothing different from physical causation). It allows, in a purely theoretical sense, “if I had made choice b instead of choice a, then future B would have happened instead of future A”… but without the ability to have actually chosen b. You are a link in a deterministic chain that leads to a future state, so without you, the state will not happen… not that you have any choice in the matter. You can’t stop or change the future because you can’t fail to make your choices, or make them differently. You can’t contribute anything of your own, since everything about you and your choices was determined at the time of the Big Bang. Under determinism, you are nothing special... only the BB is special.
This is still true under many worlds. Even though MWI implies that there is not a single inevitable future, it doesn’t allow you to influence the future in a way that makes future A more likely than future B, as a result of some choice you make now. Under MW determinism, the probabilities of A and B are what they are, and always were—before you make a decision, after you make a decision, and before you were born. You can’t choose between them, even in the sense of adjusting the probabilities.
Libertarian free will, by contrast, does allow the future to depend on decisions which are not themselves determined. That means there are valid statements of the form “if I had made choice b instead of choice a, then future B would have happened instead of future A”. And you actually could have made choice a or choice b....these are real possibilities, not merely conceptual or logical ones.
Your model of muon decay doesn’t conserve charge—you start with −1e, then have −2e and finally have zero. Also, the second electron is never observed.
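For reference, the standard decay and its charge bookkeeping, which is the constraint the model described above violates:

$$\mu^-\;\to\; e^-+\bar{\nu}_e+\nu_\mu,\qquad(-1)\;\to\;(-1)+0+0$$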
What I have noticed is that while there are cogent overviews of AI safety that don’t come to the extreme conclusion that we are all going to be killed by AI with high probability... and there are articles that do come to that conclusion without being at all rigorous or cogent... there aren’t any that do both. From that I conclude there aren’t any good reasons to believe in extreme AI doom scenarios, and you should disbelieve them. Others use more complicated reasoning, like “Yudkowsky is too intelligent to communicate his ideas to lesser mortals, but we should believe him anyway”.
(See @DPiepgrass saying something similar and of course getting downvoted).
@MitchellPorter supplies us with some examples of gappy arguments.
human survival and flourishing require specific complex values that we don’t know how to specify
There’s no evidence that “human values” are even a coherent entity, and no reason to believe that any AI of any architecture would need them.
But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way.
What is clearer than doom, is that creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world
Hang on a minute. Where does control of the world come from? Do we give it to the AI? Does it take it?
to something non-human. Eliezer’s position is that you shouldn’t do that unless you absolutely know what you’re doing. The position of the would-be architects of superintelligent AI is that hopefully they can figure out everything needed for a happy ending, in the course of their adventure.
One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. “Generative superintelligence” has the potential to produce a surprising and possibly “wrong” output that will transform the world and be impossible to undo.
Current generative AI has no ability to directly affect anything. Where would that come from?
Large: economies of scale; need to coordinate many specialised skills. (Factories were developed before automation.)
Hierarchical: Needed because Large. It’s how you co-ordinate a much greater than Dunbar number of people. (Complex software is also hierarchical.)
Bureaucratic: Hierarchical subdivision by itself is necessary but insufficient... it makes organisations manageable but not managed. Reports create legibility, and rules ensure that units are contributing to the whole, not pursuing their own ends.
I don’t see what Wentworld is:
Are you giving up on scale per se?
Are you accepting scale but giving up on hierarchy? If so, how do a thousand people in a flat structure co-ordinate?
Are you accepting scale and hierarchy, but giving up on bureaucracy?
Are you accepting scale, hierarchy, and bureaucracy, but...the right kind that doesn’t come from the Will to Power?
It’s easy to imagine a Dunbar number of grad student types all getting along very well with each other... but it isn’t a world, it’s a senior common room, or a boutique R&D department.
The trick of hierarchy is to divide a large amount of information about the whole organisation into a manageable amount of coarse-grained information about the whole organisation (for senior managers)… and a manageable amount of fine-grained information about sub-units (for middle managers).
From a superintelligent POV there is probably a ton of identifiable waste, but from a merely intelligent POV, you still have the problem of trading off globality against granularity. It’s much easier to prove waste exists than to come up with a practical solution for eliminating it.
Which, of course, is not to say that waste doesn’t exist, or that there is no negative-sum status-seeking.
I really don’t understand what “best explanation”, “true”, or “exist” mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.
Nobody is saying that anything has to be divorced from prediction, in the sense that empirical evidence is ignored: the realist claim is that empirical evidence should be supplemented by other epistemic considerations.
Best explanation:- I already pointed out that EY is not an instrumentalist. For instance, he supports the MWI over the CI, although they make identical predictions. Why does he do that? For reasons of explanatory simplicity, consilience with the rest of physics, etc., as he says. That gives you a clue as to what “best explanation” is. (Your bafflement is baffling... it sometimes sounds like you have read the Sequences, it sometimes sounds like you haven’t. Of course abduction, parsimony, etc. are widely discussed in the mainstream literature as well.)
True:- mental concept corresponds to reality.
Exists:- You can take yourself as existing, and you can regard other putative entities as existing if they have some ability to causally interact with you. That’s another baffling one, because you actually use something like that definition in your argument against mathematical realism below.
This isn’t just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (seems to me to be) free-floating sense, I don’t understand what it can mean to have evidence for or against such a proposition.
Empirical evidence doesn’t exhaust justification. But you kind of know that, because you mention “good argument” below.
So I don’t understand how I am even supposed to ever justifiably change my mind on this topic, even if I were to accept it as something worth discussing on the object-level.
A priori necessary truths can be novel and surprising to an agent in practice, even though they are a priori and necessary in principle… because a realistic agent can’t instantaneously and perfectly correlate their mental contents, and doesn’t have an oracular list of every theory in their head. You are not a perfect Bayesian. You can notice a contradiction that you haven’t noticed before. You can be informed of a simpler explanation that you hadn’t formulated yourself.
What can possibly sway me one way or another when all variables X that I appear to be able to observe (or think about, etc.) are in the concrete realm, which is defined to be entirely non-intersecting with the Platonic realm?
Huh? I was debating nomic realism. Mathematical realism is another thing. Objectively existing natural laws obviously intersect with concrete observations, because if gravity worked on an inverse cube law (etc.), everything would look very different.
You don’t have to buy into realism about all things, or anti-realism about all things. You can pick and choose. I don’t personally believe in Platonic realism about mathematics, for the same reasons you don’t. I believe nomic realism is another question... it’s logically possible for physical laws to have been different.
@shminux defined the thing he is arguing against as “Platonic”... I don’t have to completely agree with that, nor do you. Maybe it’s just a mistake to think of nomic realism as Platonism. Platonism marries the idea of non-mental existence and the idea of non-causality... but they can be treated separately.
What can that possibly mean in this context?”
What context? You’re talking about mathematical realism, I’m talking about nomic realism.
as lines of logic and reasoning, whose validity and soundness implies we are more likely to be in a world where certain possibilities are true rather than others (when mulling over multiple hypotheses
What have I said that makes you think I have departed from that?
@Shminux
If push comes to shove, I would even dispute that “real” is a useful category once we start examining deep ontological claims
Useful for what? If you terminally value uncovering the true nature of reality, as most scientists and philosophers do, you can hardly manage without some concept of “real”. If you only value making predictions, perhaps you don’t need the concept....But then the instrumentalist/realist divide is a difference in values, as I previously said, not a case of one side being wrong and the other side being right.
“Exist” is another emergent concept that is not even close to being binary, but more of a multidimensional spectrum (numbers, fairies and historical figures lie on some of the axes).
“Not a binary” is a different take from “not useful”.
The critical point is that we have no direct access to the underlying reality, so we, as tiny embedded agents, are stuck dealing with the models regardless.
“No direct access to reality” is a different claim to “no access to reality”, which is a different claim to “there is no reality”, which is a different claim to “the concept of reality is not useful”.
I can provisionally accept that there is something like a universe that “exists”, but, as I said many years ago in another thread, I am much more comfortable with the ontology where it is models all thea way down (and up and sideways and every which way).
It’s incoherent. What are these models, models of?
Is there anything different about the world that I should expect to observe depending on whether Platonic math “exists” in some ideal realm? If not, why would I care about this topic once I have already dissolved my confusion about what beliefs are meant to refer to?
Word of Yud is that beliefs aren’t just about predicting experience. While he wrote Beliefs Must Pay Rent, he also wrote No Logical Positivist I.
(Another thing that has been going on for years is people quoting Beliefs Must Pay Rent as though it’s the whole story).
Maybe you are a logical positivist, though... you’re allowed to be, and the rest of us are allowed not to be. It’s a value judgement: what doesn’t have instrumental value toward predicting experience can still have terminal value.
If you are not an LP, idealist, etc., you are interested in finding the best explanation for your observations—that’s metaphysics. Shminux seems sure that certain negative metaphysical claims are true—there are no Platonic numbers, objective laws, nor real probabilities. LP does not allow such conclusions: it rejects both positive and negative metaphysical claims as meaningless.
The question is what would support the dogmatic version of nomic antirealism, as opposed to the much more defensible claim that we don’t know one way or the other (irrealism).
Later on in the thread, you talked about “laws of physics” as abstractions written in textbooks, made so they can be understandable to human minds. But, as a terminological matter, I think it is better to think of the laws of physics as the rules that determine how the territory functions, i.e., the structured, inescapable patterns guiding how our observations come about, as opposed to the inner structure of our imperfect maps that generate our beliefs.
The term can be used in either sense. Importantly, it can be used in both senses: the existence of the in-the-mind sense doesn’t preclude the existence of the in-reality sense. Maps don’t necessarily correspond to reality, but they can. “Doesn’t necessarily correspond” doesn’t mean the same thing as “necessarily doesn’t correspond”.
@Shminux
It is not clear whether any randomly generated world would necessarily get emergent patterns like that, but the one we live in does, at least to a degree
And maybe there is a reason for that... and maybe the reason is the existence of Platonic in-the-territory physical laws. So there’s an argument for nomic realism. Is there an argument against? You haven’t given one, just “articulated a claim”.
So in your opinion, there is no reason why anything happens?
There is an emergent reason, one that lives in the minds of the agents.
But that’s not the kind of reason that makes anything happen—it’s just a passive model.
The universe just is.
That isn’t an argument against or for Platonic laws. Maybe it just is in a way that includes Platonic laws, maybe it isn’t.
In other words, if you are a hypothetical Laplace’s demon, you don’t need the notion of a reason, you see it all at once, past, present and future.
I think you mean a hypothetical God with a 4D view of spacetime. And LD only has the ability to work out the future from a 3D snapshot. Yes, if you could see past, present and future, you wouldn’t need in-the-mind laws to make predictions... but, again, that says nothing about in-the-territory, Platonic laws. Even if God doesn’t need in-the-mind laws, it’s still possible that reality needs in-the-territory laws to make things happen.
“a causal or explanatory factor” is also inside the mind
Anthropics and Boltzmann brains are also in the mind. As concepts.
What’s in the mind has to make sense, to fit together. Even if maths is all in the mind, maths problems still need to be solved. Saying maths is all in the mind does not tell you whether a particular theorem is true or false. Likewise, saying metaphysics is all in the mind does not tell you that nomic realism is false, and anthropics true.
We have a meta-map of the mind-world relation, and if we assume a causal relation from the world to the mind, we can explain where new information comes from, and if we assume lawful behaviour in the world, we can explain regularities. Maybe these are all concepts we have, but we still need to fit them together in a way that reduces the overall mystery, just as we still need to solve maths problems.
What do you mean by an “actual explanation”?
Fitting them together in a way that reduces the overall mystery.
We live in it and are trying to make sense of it
And if you want us to believe that the instrumentalist picture makes the most sense, you need to argue for it. The case for realism, by contrast, has been made.
A more coherent question would be “why is the world partially lossily compressible from the inside”, and I don’t know a non-anthropic answer
The objective existence of physical laws, nomic realism, is a non-anthropic answer which has already been put to you.
ETA
Maybe, again, we differ where they live, in the world as basic entities or in the mind as our model of making sense of the world.
...or both, since...
it is foolish to reduce potential avenues of exploration.
Yudkowsky’s argument that probability is subjective is flawed, because it rests on the assumption that the existence of subjective probability implies the non-existence of objective probability, but the assumption is never justified. But you seem to buy into it anyway. And you seem to be basing your anti-realism on a similar unargued assumption.
the ‘instantaneous’ mind (with its preferences etc., see post) is*—if we look closely and don’t forget to keep a healthy dose of skepticism about our intuitions about our own mind/self*—sufficient to make sense of what we actually observe
Huh? If you mean my future observations, then you are assuming a future self, and therefore a temporally extended self. If you mean my present observations, then they include memories of past observations.
in fact I’ve defended some strong computationalist position in the past
But a computation is a series of steps over time, so it is temporally extended.
No, identity theory and illusionism are competitors. And epiphenomenalism is dualism, not physicalism. As I have pointed out before.